Ultimately, I think this was a pedagogy decision that Yudkowsky had gotten right back in 'aught-eight. If you write your summary slogan in relativist language, people predictably take that as license to believe whatever they want without having to defend it. Whereas if you write your summary slogan in objectivist language—so that people know they don't have social permission to say that "it's subjective so I can't be wrong"—then you have some hope of sparking useful thought about the _exact, precise_ ways that _specific, definite_ things are _in fact_ relative to other specific, definite things.
-I told him I would send him one more email with a piece of evidence about how other "rationalists" were thinking about the categories issue, and give my commentary on the parable about orcs, and then the present thread would probably drop there.
+I told Scott I would send him one more email with a piece of evidence about how other "rationalists" were thinking about the categories issue, and give my commentary on the parable about orcs, and then the present thread would probably drop there.
On Discord in January, Kelsey Piper had told me that everyone else experienced their disagreement with me as being about where the joints are and which joints are important, where usability for humans was a legitimate criterion for importance, and that it was annoying that I thought they didn't believe in carving reality at the joints at all and held that categories should be whatever makes people happy.
(I did regret having accidentally "poisoned the well" the previous month by impulsively sharing the previous year's ["Blegg Mode"](/2018/Feb/blegg-mode/) [as a _Less Wrong_ linkpost](https://www.lesswrong.com/posts/GEJzPwY8JedcNX2qz/blegg-mode). "Blegg Mode" had originally been drafted as part of "... To Make Predictions" before getting spun off as a separate post. Frustrated in March at our failing email campaign, I thought it was politically "clean" enough to belatedly share, but it proved to be insufficiently [deniably allegorical](/tag/deniably-allegorical/), as evidenced by the 60-plus-entry trainwreck of a comments section. It's plausible that some portion of the _Less Wrong_ audience would have been more receptive to "... Boundaries?" as not-politically-threatening philosophy, if they hadn't been alerted to the political context by the comments on the "Blegg Mode" linkpost.)
-On 13 April, I pulled the trigger on publishing "... Boundaries?", and wrote to Yudkowsky again, a fourth time (!), asking if he could _either_ publicly endorse the post, _or_ publicly comment on what he thought the post got right and what he thought it got wrong; and, that if engaging on this level was too expensive for him in terms of spoons, if there was any action I could take to somehow make it less expensive? The reason I thought this was important, I explained, was that if rationalists in [good standing](https://srconstantin.wordpress.com/2018/12/24/contrite-strategies-and-the-need-for-standards/) find themselves in a persistent disagreement _about rationality itself_—in this case, my disagreement with Scott Alexander and others about the cognitive function of categories—that seemed like a major concern for [our common interest](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes), something we should be very eager to _definitively settle in public_ (or at least _clarify_ the current state of the disagreement). In the absence of an established "rationality court of last resort", I feared the closest thing we had was an appeal to Eliezer Yudkowsky's personal judgement. Despite the context in which the dispute arose, _this wasn't a political issue_. The post I was asking for his comment on was _just_ about the [_mathematical laws_](https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can-exempt-you-from-rationality-s-laws) governing how to talk about, _e.g._, dolphins. We had _nothing to be afraid of_ here. (Subject: "movement to clarity; or, rationality court filing").
+On 13 April, I pulled the trigger on publishing "... Boundaries?", and wrote to Yudkowsky again, a fourth time (!), asking if he could _either_ publicly endorse the post, _or_ publicly comment on what he thought the post got right and what he thought it got wrong; and, if engaging on this level was too expensive for him in terms of spoons, whether there was any action I could take to make it less expensive. The reason I thought this was important, I explained, was that a persistent disagreement _about rationality itself_ among rationalists in [good standing](https://srconstantin.github.io/2018/12/24/contrite-strategies-and-the-need-for-standards/)—in this case, my disagreement with Scott Alexander and others about the cognitive function of categories—seemed like a major concern for [our common interest](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes), something we should be very eager to _definitively settle in public_ (or at least _clarify_ the current state of the disagreement). In the absence of an established "rationality court of last resort", I feared the closest thing we had was an appeal to Eliezer Yudkowsky's personal judgement. Despite the context in which the dispute arose, _this wasn't a political issue_. The post I was asking for his comment on was _just_ about the [_mathematical laws_](https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can-exempt-you-from-rationality-s-laws) governing how to talk about, _e.g._, dolphins. We had _nothing to be afraid of_ here. (Subject: "movement to clarity; or, rationality court filing".)
I got some pushback from Ben and Jessica about claiming that this wasn't "political". What I meant by that was to emphasize (again) that I didn't expect Yudkowsky or "the community" to take a public stance _on gender politics_; I was trying to get "us" to take a stance in favor of the kind of _epistemology_ that we were doing in 2008. It turns out that epistemology has implications for gender politics which are unsafe, but that's _more inferential steps_, and ... I guess I just didn't expect the sort of people who would punish good epistemology to follow the inferential steps?
One might wonder why this was such a big deal to us. Okay, so Yudkowsky had prevaricated about his own philosophy of language for transparently political reasons, and couldn't be moved to clarify in public even after my posse and I had spent an enormous amount of effort trying to explain the problem. So what? Aren't people wrong on the internet all the time?
-Ben explained that Yudkowsky wasn't a private person who might plausibly have the right to be wrong on the internet in peace. Yudkowsky was a public figure whose claim to legitimacy really did amount to a claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he almost uniquely was not—and he had he had set in motion a machine (the "rationalist community") that was continuing to raise funds and demand work from people for below-market rates based on that claim—"work for me or the world ends badly", basically.
+Ben explained: Yudkowsky had set in motion a marketing machine (the "rationalist community") that was continuing to raise funds and demand work from people for below-market rates based on the claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he, almost uniquely, was not. If the claim was _true_, it was important to make, and to actually extract that labor. "Work for me or the world ends badly," basically.
-If the claim was _true_, it was important to make, and to actually extract that labor. But we had falsified to our satisfaction the claim that Yudkowsky was currently sane in the relevant way (which was a _extremely high_ standard, and not a special flaw of Yudkowsky in the current environment). If Yudkowsky couldn't be bothered to live up to his own stated standards or withdraw his validation from the machine he built after we had _tried_ to bring it up in private with him, then we had a right to talk about what we thought was going on.
+But we had just falsified to our satisfaction the claim that Yudkowsky was currently sane in the relevant way (which was an _extremely high_ standard, and not a special flaw of Yudkowsky in the current environment). If Yudkowsky couldn't be bothered to live up to his own stated standards or withdraw his validation from the machine he built after we had _tried_ to talk to him privately, then we had a right to talk in public about what we thought was going on.
-Ben further compared Yudkowsky (as the most plausible individual representative of the "rationalists") to Eliza the spambot therapist in my story ["Blame Me for Trying"](/2018/Jan/blame-me-for-trying/): regardless of the initial intent, scrupulous rationalists were paying rent to something claiming moral authority, which had no concrete specific plan to do anything other than run out the clock, maintaining a facsimile of dialogue in ways well-calibrated to continue to generate revenue. Minds like mine wouldn't surive long-run in this ecosystem. If we wanted minds that do "naïve" inquiry instead of playing savvy Kolmogorov games to survive, we needed an interior that justified that level of trust.
+This wasn't about direct benefit _vs._ harm. This was about what, substantively, the machine was doing. They claimed to be cultivating an epistemically rational community, while in fact building an army of loyalists.
-[TODO: rewrite Ben's account of the problem above, including 15 April Signal conversation]
+Ben compared the whole set-up to that of Eliza the spambot therapist in my story ["Blame Me for Trying"](/2018/Jan/blame-me-for-trying/): regardless of the _initial intent_, scrupulous rationalists were paying rent to something claiming moral authority, which had no concrete specific plan to do anything other than run out the clock, maintaining a facsimile of dialogue in ways well-calibrated to continue to generate revenue. Minds like mine wouldn't survive long in this ecosystem. If we wanted minds that do "naïve" inquiry (instead of playing savvy Kolmogorov games) to survive, we needed an interior that justified that level of trust.
-------
-[TODO: better outline 2019]
+Given that the "rationalists" were fake and that we needed something better, there remained the question of what to do about that, and how to relate to the old thing, and to the maintainers of the marketing machine for the old thing.
+
+_I_ had been hyperfocused on prosecuting my Category War, but the reason Michael and Ben and Jessica were willing to help me out on that was not that they particularly cared about the gender and categories example, but that it seemed like a manifestation of a _more general_ problem of epistemic rot in "the community".
+
+Ben had previously written a lot about problems with Effective Altruism. Jessica had had a bad time at MIRI, as she had told me back in March, and would [later](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam) [write](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe) [about](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards). To what extent were my thing, and Ben's thing, and Jessica's thing, manifestations of "the same" underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit?
+
+I believed that there _was_ a real problem, but didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But _culture_ is the sum of lots and lots of little micro-actions by lots and lots of people. If your _entire culture_ has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or like they don't think anything has changed, or like they notice some changes but think the new way is better?
+
+Ben called it the Blight, after the rogue superintelligence in _A Fire Upon the Deep_: the problem wasn't that people were getting dumber; it was that there was locally coherent coordination away from clarity and truth and towards coalition-building, which was validated by the official narrative in ways that gave it a huge tactical advantage; people were increasingly making decisions that were better explained by their political incentives than by coherent beliefs about the world.
+
+When I asked him for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of MIRI executive director Nate Soares posting that he was "excited" about the launch of OpenAI, despite the fact that [_no one_ who had been following the AI risk discourse](https://slatestarcodex.com/2015/12/17/should-ai-be-open/) [thought that OpenAI as originally announced was a good idea](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/). Nate had privately clarified to Ben that the word "excited" wasn't necessarily meant positively, and in this case meant something more like "terrified."
+
+This seemed to me like the sort of thing where a particularly principled (naive?) person might say, "That's _lying for political reasons!_ That's _contrary to the moral law!_" and most ordinary grown-ups would say, "Why are you so upset about this? That sort of strategic phrasing in press releases is just how the world works, and things could not possibly be otherwise."
+
+I thought explaining the Blight to an ordinary grown-up was going to need _either_ lots of specific examples that were way more egregious than this (and more egregious than the examples in "EA Has a Lying Problem" or ["Effective Altruism Is Self-Recommending"](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/)), or somehow convincing the ordinary grown-up why "just how the world works" isn't good enough, and why we needed one goddamned place in the entire goddamned world (perhaps a private place) with _unusually high standards_.
+
+The schism introduced new pressures on my social life. On 20 April, I told Michael that I still wanted to be friends with people on both sides of the factional schism (in the frame where recent events were construed as a factional schism), even though I was on this side. Michael said that we should unambiguously regard Anna and Eliezer as criminals or enemy combatants (!!), who could claim no rights in regard to me or him.
+
+I don't think I "got" the framing at this time. War metaphors sounded Scary and Mean: I didn't want to shoot my friends! But the point of the analogy (which Michael explained, but I wasn't ready to hear until I had done a few more weeks of emotional processing) was specifically that soldiers on the other side of a war _aren't_ particularly morally blameworthy as individuals: their actions are just being controlled by the Power they're embedded in.
+
+I wrote to Anna:
+
+> I was _just_ trying to publicly settle a _very straightforward_ philosophy thing that seemed _really solid_ to me
+>
+> if, in the process, I accidentally ended up being an unusually useful pawn in Michael Vassar's deranged four-dimensional hyperchess political scheming
+>
+> that's ... _arguably_ not my fault
+
+[TODO: I started drafting a "why I've been upset for five months and have lost faith in the so-called 'rationalist' community" personal-narrative Diary-like post, but I don't think I can finish it because it's too constrained: I don't know how to tell the story without (as I perceive it) escalating personal conflicts or leaking info from private conversations.]
+
+
+[TODO: math and wellness month http://zackmdavis.net/blog/2019/05/may-is-math-and-wellness-month/ /2019/May/hiatus/ ]
+
+[TODO: dayjob performance was awful]
+
+
+https://twitter.com/ESYudkowsky/status/1124751630937681922
+> ("sort of like" in the sense that, empirically, it made me feel much less personally aggrieved, but of course my feelings aren't the point)
+
+
+[TODO: epistemic defense meeting; the first morning where "rationalists ... them" felt more natural than "rationalists ... us"]
+
In November, I received an interesting reply on my philosophy-of-categorization thesis from MIRI researcher Abram Demski. Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences? Agents categorize in order to make decisions. The best classifier for an application depends on the costs and benefits. As a classic example, it's very important for evolved prey animals to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator.
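+
+(To make the cost-benefit point concrete with some made-up numbers: when the two kinds of classification error have wildly asymmetric costs, the expected-cost-minimizing trigger threshold ends up nowhere near one-half.)
+
+```python
+# Hypothetical costs, just to illustrate the point: when false negatives are
+# vastly more expensive than false positives, the optimal decision threshold
+# is far below 0.5.
+COST_OF_JUMPING = 1        # wasted energy fleeing a false alarm
+COST_OF_BEING_EATEN = 500  # cost of ignoring a real predator
+
+def should_jump(p_predator):
+    """Jump iff the expected cost of staying exceeds the cost of jumping."""
+    return p_predator * COST_OF_BEING_EATEN > COST_OF_JUMPING
+
+# Even a 1% chance of a predator is enough to trigger the jump; the
+# classifier "overreacts" relative to the evidence, by design.
+assert should_jump(0.01)
+assert not should_jump(0.001)
+```
+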
>
> I agree that pronouns don't have the same function as ordinary nouns. However, **in the English language as actually spoken by native speakers, I think that gender pronouns _do_ have effective "truth conditions" _as a matter of cognitive science_.** If someone said, "Come meet me and my friend at the mall; she's really cool and you'll like her", and then that friend turned out to look like me, **you would be surprised**.
>
-> I don't see the _substantive_ difference between "You're not standing in defense of truth [...]" and "I can define a word any way I want." [...]
+> I don't see the _substantive_ difference between "You're not standing in defense of truth (...)" and "I can define a word any way I want." [...]
>
> [...]
>
... hm, actually, when I try to formalize this with the simplest possible toy model, it doesn't work (the "may as well be hung ..." effect doesn't happen given the modeling assumptions I just made up). I was going to say: our team chooses a self-censorship parameter c from 0 to 10, and faces a bullying level b from 0 to 10. b is actually b(c, p), a function of self-censorship and publicity p (also from 0 to 10). The team leaders' utility function is U(c, b) := -(c + b) (bullying and self-censorship are both bad). Suppose the bullying level is b := 10 - c + p (self-censorship decreases bullying, and publicity increases it).
My thought was: a disgruntled team-member might want to increase p in order to induce the leaders to choose a smaller value of c. But when I do the algebra, -(c + b) = -(c + (10 - c + p)) = -c - 10 + c - p = -10 - p. (Which doesn't depend on c, seemingly implying that more publicity is just bad for the leaders without changing their choice of c? But I should really be doing my dayjob now instead of figuring out if I made a mistake in this Facebook comment.)
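+
+(A few lines of Python sketching the same made-up model agree with the algebra: the leaders' utility comes out to -(10 + p) no matter what c they choose.)
+
+```python
+# The toy model from above: self-censorship c, publicity p,
+# bullying b(c, p) = 10 - c + p, leaders' utility U = -(c + b).
+def leader_utility(c, p):
+    b = 10 - c + p  # self-censorship decreases bullying; publicity increases it
+    return -(c + b)
+
+# U simplifies to -(10 + p): it doesn't depend on c at all.
+for p in range(11):
+    assert len({leader_utility(c, p) for c in range(11)}) == 1
+    assert leader_utility(0, p) == -(10 + p)
+```
+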
-
-
-
> Eliezer is not a private person - he's a public figure. He set in motion a machine that continues to raise funds and demand work from people for below-market rates based on moral authority claims centered around his ability to be almost uniquely sane and therefore benevolent. (In some cases indirectly through his ability to cause others to be the same.) "Work for me or the world ends badly," basically.
> If this is TRUE (and also not a threat to destroy the world), then it's important to say, and to actually extract that work. But if not, then it's abuse! (Even if we want to be cautious about using emotionally loaded terms like that in public.)
> The machine he built to extract money, attention, and labor is still working, though, and claiming to be sane in part based on his prior advertisements, which it continues to promote. If Eliezer can't be bothered to withdraw his validation, then we get to talk about what we think is going on, clearly, in ways that aren't considerate of his feelings. He doesn't get to draw a boundary that prevents us from telling other people things about MIRI and him that we rationally and sincerely believe to be true.
> The fact that we magnanimously offered to settle this via private discussions with Eliezer doesn't give him an extra right to draw boundaries afterwards. We didn't agree to that. Attempting to settle doesn't forfeit the right to sue. Attempting to work out your differences with someone 1:1 doesn't forfeit your right to complain later if you were unable to arrive at a satisfactory deal (so long as you didn't pretend to do so).
+
+--------
+
+7 May—
+> I'm still pretty frustrated with the way you seem to dismiss my desire for a community that can get the basics right as delusional! Yes, I remember how you said in 2009 that you're not going to say the Things You Can't Say, and you keep repeating that you tried to tell me that public reason doesn't work, but that seems really fundamentally unresponsive to how I keep repeating that I only expect consensus on the basic philosophy stuff (not my object-level special interest). Why is it so unrealistic to imagine that the actually-smart people could enforce standards in our own tiny little bubble of the world? (Read the "worthless cowards" political/rhetorical move as an attempt to enforce the standard that the actually-smart people should enforce standards.)
+
+> I'm also still pretty angry about how your response to my "I believed our own propaganda" complaint is (my possibly-unfair paraphrase) "what you call 'propaganda' was all in your head; we were never actually going to do the unrestricted truthseeking thing when it was politically inconvenient." But ... no! I didn't just make up the propaganda! The hyperlinks still work! I didn't imagine them! They were real! You can still click on them: ["A Sense That More Is Possible"](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible), ["Raising the Sanity Waterline"](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline)
+
+> Can you please acknowledge that I didn't just make this up? Happy to pay you $200 for a reply to this email within the next 72 hours
+
+Or see ["A Fable of Science and Politics"](https://www.lesswrong.com/posts/6hfGNLf4Hg5DXqJCF/a-fable-of-science-and-politics), where the editorial tone is pretty clear that we're supposed to be like Daria or Ferris, not Charles.
+
+> But, it's kind of bad that I'm thirty-one years old and haven't figured out how to be less emotionally needy/demanding; feeling a little bit less frame-locked now; let's talk in a few months (but offer in email-before-last is still open because rescinding it would be dishonorable)
+
+
+> I liked the conversation and like being on good terms and do not wish to be negative in what I am about to say. That said, if it's okay with you, I in general don't want to receive offers of money anymore (nor actual money either); I think I used to regard them as good-faith proposals for mutual benefit between consenting adults a la libertarianism, and at the moment I am afraid that if I ever accept money (or you ever give me money for an email or something) it'll appear in some Ben Hoffman essay as "using you" or some similar such thing, and receiving such offers (or taking them), given that I am not in fact in much need of money at the present margin, looks to me like not a good way to further my own goals as a consenting adult. (I appreciate the email you then sent afterward; I'd planned to write this before you sent that email, and it still seemed worth saying, but I appreciate also your own efforts to create a low-drama context.)
+
+> Re delusions, perhaps "ideals, which are useful but also partly intentionally simplified/inaccurate so as to be easier to unite around, and which for simplicity one coordinates as though others share" might be a better way to describe them, since sometimes simplified models have uses. Also, of course, the desire part can't be delusional; desires don't have truth-values; only predictions can be false-and-protecting-themselves-from-evidence. You can desire whatever you want, and can work toward it!
+
+> Regarding whether those ideals were ever the thing that a sensible but unbiased seeker-for-uniting-ideals, observing the 2008-2010 community, would've extrapolated from our community's speech and writing:
+
+> I agree such a person might've gotten your ideal from "a sense that more is possible", "raising the sanity waterline", and for that matter the facebook post about local validity semantics. Also from Michael Vassar's speech at the time about how he just went ahead and discussed e.g. race and gender and IQ and didn't seem to get ill effects from this and thought other people should do so too.
+> I think there are also pieces of speech one can point to from 2008-2010 that point toward it being unsurprising if people avoid controversial issues, e.g. "politics is the mind-killer" and its reception/quotation; my own discussion of the essay "things you can't say" and my getting of Benton House to read this essay; various other things that others in good regard (not Eliezer) said to me privately at the time.
+> My own extrapolated ideal from all of this at the time was something like: "Ah, our ideal is 'let's try to figure out how to actually think, and practice it in concrete cases, while meanwhile dodging examples in public that risk getting us into trouble; practicing on less politics-inducing examples should work fine while developing the art, and is a net better idea than heading toward places where our nascent community may get torn apart by politics'".
+
+> (Looking at that last bullet point myself today, that claim seems far from obviously true, though also far from obviously false. It is therefore not a sentence I would say today (to avoid saying things that aren't likely enough to be true). It also seems to me today that we have done less to develop the core art of epistemic rationality than Eliezer hoped for in those posts, partly for lack of diligence and partly because the project itself proved harder and more likely to stir up hard-to-avoid psychological and social cans of instability than I initially expected. I guess my current guess is something like: "politics: can't rationality with it, can't rationality without it", although "it's hard to figure out how to" is maybe more accurate than simply "can't".)
+
+
+
+> When forming the original let's-be-apolitical vision in 2008, we did not anticipate that whether or not I should cut my dick off would become a political issue. That's new evidence about whether the original vision was wise! I'm not trying to do politics with my idiosyncratic special interest; I'm trying to think seriously about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" are going to commit a trivial epistemology mistake that interferes with my ability to think seriously about the most important thing in my life, but can't correct the mistake (because that would be politically inconvenient), then the 2019-era "rationalists" are worse than useless to me personally. This probably doesn't matter causally (I'm not an AI researcher, therefore I don't matter), but it might matter timelessly (if I'm part of a reference class that includes AI researchers).
+
+> Fundamentally, I just don't think you can do consistently high-grade reasoning as a group without committing heresy, because of the "Entangled Truths, Contagious Lies"/"Dark Side Epistemology" mechanism. You, Anna, in particular, are unusually good at thinking things without saying them; I think most people facing similar speech restrictions just get worse at thinking (plausibly including Eliezer), and the problem gets worse as the group effort scales. (It's easier to recommend "What You Can't Say" to your housemates than to put it on a canonical reading list, for obvious reasons.) You can't optimize your group's culture for not-talking-about-atheism without also optimizing against understanding Occam's razor; you can't optimize for not questioning gender self-identity without also optimizing against understanding "A Human's Guide to Words."
+
+-----
+
+
+there's this thing where some people are way more productive than others and everyone knows it, but no one wants to make it common knowledge, which is really awkward for the people who are simultaneously (a) known but not commonly-known to be underperforming (such that the culture of common-knowledge-prevention is in my self-interest, because I get to collect the status and money rents of being a $150K/yr software engineer without actually performing at that level, and my coworkers and even managers don't want to call attention to it because that would be mean—and it helps that they know that I already feel guilty about it) but also (b) temperamentally unsuited and ideologically opposed to subsisting on unjustly-acquired rents rather than value creation
+
+(where I'm fond of the Ayn Rand æsthetic of "I earn my keep, and if the market were to decide that I don't deserve to live anymore, I guess it would be right and I should accept my fate with dignity" and I think the æsthetic is serving a useful function in my psychology even though it's also important to model how I would change my tune if the market actually decided that I don't deserve to live)
+
+> but the "Everyone knows that Zack feels guilty about underperforming, so they don't punish him, because he's already doing enough internalized-domination to punish himself" dynamic is unsustainable if it evolves (evolves is definitely the right word here) into a loop of "feeling guilt in exchange for not doing work" rather than the intended function of "feeling guilt in order to successfully incentivize work"
+
+You've got to be strong to survive in the [O-ring sector](https://en.wikipedia.org/wiki/O-ring_theory_of_economic_development)
+
+(I can actually see the multiplicative "tasks are intertwined and have to all happen at high reliability in order to create value" thing playing out in the form of "if I had fixed this bug earlier, then I would have less manual cleanup work", in contrast to the "making a bad latte with not enough foam, that doesn't ruin any of the other customers' lattes" from my Safeway-Starbucks-kiosk days)
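+
+(To spell out the O-ring intuition with made-up numbers: when output scales with the _product_ of per-task reliabilities rather than the sum, one weak link drags down the value of everyone else's work, in a way that one bad latte never could.)
+
+```python
+# Made-up numbers illustrating the O-ring production function: output is
+# proportional to the *product* of per-task success probabilities.
+def o_ring_output(reliabilities, value_if_all_succeed=100.0):
+    product = 1.0
+    for q in reliabilities:
+        product *= q
+    return value_if_all_succeed * product
+
+print(o_ring_output([0.99, 0.99, 0.99, 0.99]))  # ~96.1
+print(o_ring_output([0.99, 0.99, 0.99, 0.50]))  # ~48.5: one weak link halves the value
+
+# Contrast the additive case (the Safeway latte kiosk), where a botched
+# task only costs its own share of the value.
+def additive_output(reliabilities, value_per_task=25.0):
+    return sum(q * value_per_task for q in reliabilities)
+
+print(additive_output([0.99, 0.99, 0.99, 0.50]))  # ~86.75
+```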