+Maybe that's why I felt like I had to stand my ground and fight for the world I was made in, even though the contradiction between the war effort and my general submissiveness was leading me to make crazy decisions.
+
+Michael said that a reason to make a stand here in "the community" was that if we didn't, the beacon of "rationalism" would continue to lure and mislead others, but that more importantly, we needed to figure out how to win this kind of argument decisively, as a group; we couldn't afford to accept a _status quo_ of accepting defeat when faced with bad faith arguments _in general_. Ben reported writing to Scott to ask him to alter the beacon so that people like me wouldn't think "the community" was the place to go for literally doing the rationality thing anymore.
+
+As it happened, the next day, Wednesday, we saw these Tweets from @ESYudkowsky, linking to a _Quillette_ article interviewing Lisa Littman on her work on rapid onset gender dysphoria:
+
+> [Everything more complicated than](https://twitter.com/ESYudkowsky/status/1108277090577600512) protons tends to come in varieties. Hydrogen, for example, has isotopes. Gender dysphoria involves more than one proton and will probably have varieties. https://quillette.com/2019/03/19/an-interview-with-lisa-littman-who-coined-the-term-rapid-onset-gender-dysphoria/
+
+> [To be clear, I don't](https://twitter.com/ESYudkowsky/status/1108280619014905857) know much about gender dysphoria. There's an allegation that people are reluctant to speciate more than one kind of gender dysphoria. To the extent that's not a strawman, I would say only in a generic way that GD seems liable to have more than one species.
+
+(Why now? Maybe he saw the tag in my "tools have shattered" Tweet on Monday, or maybe the _Quillette_ article was just timely?)
+
+The most obvious reading of these Tweets was as a "concession" to my general political agenda. The two-type taxonomy of MtF was the thing I was _originally_ trying to talk about, back in 2016–2017, before getting derailed onto the present philosophy-of-language war, and here Yudkowsky was backing up "my side" on that by publicly offering an argument that there's probably a more-than-one-type typology.
+
+At this point, some readers might think that should have been the end of the matter, that I should have been satisfied. I had started the recent drama flare-up because Yudkowsky had Tweeted something unfavorable to my agenda. But now, Yudkowsky was Tweeting something _favorable_ to my agenda! Wouldn't it be greedy and ungrateful for me to keep criticizing him about the pronouns and language thing, given that he'd thrown me a bone here? Shouldn't I "call it even"?
+
+That's not how it works. The entire concept of there being "sides" to which one can make "concessions" is an artifact of human coalitional instincts; it's not something that _actually makes sense_ as a process for constructing a map that reflects the territory. My posse and I were trying to get a clarification about a philosophy-of-language claim Yudkowsky had made a few months prior ("you're not standing in defense of truth if [...]"), which I claimed was substantively misleading. Why would we stop prosecuting that, because of this _unrelated_ Tweet about the etiology of gender dysphoria? That wasn't the thing we were trying to clarify!
+
+Moreover—and I'm embarrassed that it took me another day to realize this—this new argument from Yudkowsky about the etiology of gender dysphoria was actually _wrong_. As I would later get around to explaining in ["On the Argumentative Form 'Super-Proton Things Tend to Come in Varieties'"](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/), when people claim that some psychological or medical condition "comes in varieties", they're making a substantive _empirical_ claim that the [causal or statistical structure](/2021/Feb/you-are-right-and-i-was-wrong-reply-to-tailcalled-on-causality/) of the condition is usefully modeled as distinct clusters, not merely making the trivial observation that instances of the condition are not identical down to the subatomic level.
+
+As such, we _shouldn't_ think that there are probably multiple kinds of gender dysphoria _because things are made of protons_ (?!?). If anything, _a priori_ reasoning about the cognitive function of categorization should actually cut in the other direction, (mildly) _against_ rather than in favor of multi-type theories: you only want to add more categories to your theory [if they can pay for their additional complexity with better predictions](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length). If you believe in Blanchard–Bailey–Lawrence's two-type taxonomy of MtF, or Littman's proposed rapid-onset type, it should be on the _empirical_ merits, not because multi-type theories are especially more likely to be true.
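The complexity-accounting here can be made concrete. As a minimal sketch (my own toy illustration, not from any of the linked posts, using a BIC penalty as a crude stand-in for message length, and a hard median split instead of full EM): a two-cluster model is only preferred when its improved predictions pay for its extra parameters.

```python
# Sketch: extra categories must "pay rent" in better predictions.
# BIC = k*ln(n) - 2*log-likelihood (lower is better); the 2-cluster
# model carries 3 more parameters than the 1-cluster model.
import numpy as np

def normal_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def bic_one_cluster(x):
    # 2 parameters: one mean, one variance
    ll = np.sum(np.log(normal_pdf(x, x.mean(), x.var())))
    return 2 * np.log(len(x)) - 2 * ll

def bic_two_clusters(x):
    # 5 parameters: two means, two variances, one mixture weight
    # (crude hard split at the median instead of full EM, for brevity)
    split = np.median(x)
    left, right = x[x <= split], x[x > split]
    w = len(left) / len(x)
    dens = (w * normal_pdf(x, left.mean(), left.var())
            + (1 - w) * normal_pdf(x, right.mean(), right.var()))
    return 5 * np.log(len(x)) - 2 * np.sum(np.log(dens))

rng = np.random.default_rng(0)
two_lumps = np.concatenate([rng.normal(-5, 1, 500), rng.normal(5, 1, 500)])
one_lump = rng.normal(0, 1, 1000)

# The extra categories only pay for themselves when the clusters are real:
assert bic_two_clusters(two_lumps) < bic_one_cluster(two_lumps)
assert bic_one_cluster(one_lump) < bic_two_clusters(one_lump)
```

The point being that whether a two-type model wins is an empirical question about the data, decided by the fit-versus-complexity tradeoff, not by _a priori_ reasoning about protons.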
+
+Had Yudkowsky been thinking that maybe if he Tweeted something favorable to my agenda, then me and the rest of Michael's gang would be satisfied and leave him alone?
+
+But ... if there's some _other_ reason you suspect there might be multiple species of dysphoria, but you _tell_ people your suspicion is because dysphoria has more than one proton, you're still misinforming people for political reasons, which was the _general_ problem we were trying to alert Yudkowsky to. (Someone who trusted you as a source of wisdom about rationality might try to apply your _fake_ "everything more complicated than protons tends to come in varieties" rationality lesson in some other context, and get the wrong answer.) Inventing fake rationality lessons in response to political pressure is _not okay_, and the fact that in this case the political pressure happened to be coming from _me_, didn't make it okay.
+
+I asked the posse if this analysis was worth sending to Yudkowsky. Michael said it wasn't worth the digression. He asked if I was comfortable generalizing from Scott's behavior, and what others had said about fear of speaking openly, to assuming that something similar was going on with Eliezer? If so, then now that we had common knowledge, we needed to confront the actual crisis, which was that dread was tearing apart old friendships and causing fanatics to betray everything that they ever stood for while its existence was still being denied.
+
+Another thing that happened that week was that former MIRI researcher Jessica Taylor joined our posse (being at an in-person meeting with Ben and Sarah and another friend on the seventeenth, and getting tagged in subsequent emails). Significantly for political purposes, Jessica is trans. We didn't have to agree up front on all gender issues for her to see the epistemology problem with "... Not Man for the Categories", and to say that maintaining a narcissistic fantasy by controlling category boundaries wasn't what _she_ wanted, as a trans person. (On the seventeenth, when I lamented the state of a world that incentivized us to be political enemies, her response was, "Well, we could talk about it first.") Michael said that me and Jessica together had more moral authority than either of us alone.
+
+As it happened, I ran into Scott on the train that Friday, the twenty-second. He said that he wasn't sure why the oft-repeated moral of "A Human's Guide to Words" had been "You can't define a word any way you want" rather than "You _can_ define a word any way you want, but then you have to deal with the consequences."
+
+Ultimately, I think this was a pedagogy decision that Yudkowsky had gotten right back in 'aught-eight. If you write your summary slogan in relativist language, people predictably take that as license to believe whatever they want without having to defend it. Whereas if you write your summary slogan in objectivist language—so that people know they don't have social permission to say that "it's subjective so I can't be wrong"—then you have some hope of sparking useful thought about the _exact, precise_ ways that _specific, definite_ things are _in fact_ relative to other specific, definite things.
+
+I told Scott I would send him one more email with a piece of evidence about how other "rationalists" were thinking about the categories issue, and give my commentary on the parable about orcs, and then the present thread would probably drop there.
+
+On Discord in January, Kelsey Piper had told me that everyone else experienced their disagreement with me as being about where the joints are and which joints are important, where usability for humans was a legitimate criterion for importance, and it was annoying that I thought they didn't believe in carving reality at the joints at all and that categories should be whatever makes people happy.
+
+I [didn't want to bring it up at the time because](https://twitter.com/zackmdavis/status/1088459797962215429) I was so overjoyed that the discussion was actually making progress on the core philosophy-of-language issue, but ... Scott _did_ seem to be pretty explicit that his position was about happiness rather than usability? If Kelsey _thought_ she agreed with Scott, but actually didn't, that was kind of bad for our collective sanity, wasn't it?
+
+As for the parable about orcs, I thought it was significant that Scott chose to tell the story from the standpoint of non-orcs deciding what [verbal behaviors](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) to perform while orcs are around, rather than the standpoint of the _orcs themselves_. For one thing, how do you _know_ that serving evil-Melkor is a life of constant torture? Is it at all possible, in the bowels of Christ, that someone has given you _misleading information_ about that? Moreover, you _can't_ just give an orc a clever misinterpretation of an oath and have them believe it. First you have to [cripple their _general_ ability](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) to correctly interpret oaths, for the same reason that you can't get someone to believe that 2+2=5 without crippling their _general_ ability to do arithmetic. We weren't talking about a little "white lie" that the listener will never get to see falsified (like telling someone their dead dog is in heaven); the orcs _already know_ the text of the oath, and you have to break their ability to _understand_ it. Are you willing to permanently damage an orc's ability to reason, in order to save them pain? For some sufficiently large amount of pain, surely. But this isn't a choice to make lightly—and the choices people make to satisfy their own consciences, don't always line up with the volition of their alleged beneficiaries. We think we can lie to save others from pain, without ourselves _wanting to be lied to_. But behind the veil of ignorance, it's the same choice!
+
+I _also_ had more to say about philosophy of categories: I thought I could be more rigorous about the difference between "caring about predicting different variables" and "caring about consequences", in a way that Eliezer would _have_ to understand even if Scott didn't. (Scott had claimed that he could use gerrymandered categories and still be just as good at making predictions—but that's just not true if we're talking about the _internal_ use of categories as a [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms), rather than mere verbal behavior: it's always easy to _say_ "_X_ is a _Y_" for arbitrary _X_ and _Y_ if the stakes demand it, but if you're _actually_ using that concept of _Y_ internally, that does have effects on your world-model.)
+
+But after consultation with the posse, I concluded that further email prosecution was not useful at this time; the philosophy argument would work better as a public _Less Wrong_ post. So my revised Category War to-do list was:
+
+ * Send the brief wrapping-up/end-of-conversation email to Scott (with the Discord anecdote with Kelsey and commentary on the orc story).
+ * Mentally write-off Scott, Eliezer, and the so-called "rationalist" community as a loss so that I wouldn't be in horrible emotional pain from cognitive dissonance all the time.
+ * Write up the mathy version of the categories argument for _Less Wrong_ (which I thought might take a few months—I had a dayjob, and write slowly, and might need to learn some new math, which I'm also slow at).
+ * _Then_ email the link to Scott and Eliezer asking for a signal-boost and/or court ruling.
+
+Ben didn't think the mathematically precise categories argument was the most important thing for _Less Wrong_ readers to know about: a similarly careful explanation of why I've written off Scott, Eliezer, and the "rationalists" would be way more valuable.
+
+I could see the value he was pointing at, but something in me balked at the idea of _attacking my friends in public_ (Subject: "treachery, faith, and the great river (was: Re: DRAFTS: 'wrapping up; or, Orc-ham's razor' and 'on the power and efficacy of categories')").
+
+Ben had previously written (in the context of the effective altruism movement) about how [holding criticism to a higher standard than praise distorts our collective map](http://benjaminrosshoffman.com/honesty-and-perjury/#A_tax_on_criticism).
+
+He was obviously correct that this was a distortionary force relative to what ideal Bayesian agents would do, but I was worried that when we're talking about criticism of _people_ rather than ideas, the removal of the distortionary force would just result in an ugly war (and not more truth). Criticism of institutions and social systems _should_ be filed under "ideas" rather than "people", but the smaller-scale you get, the harder this distinction is to maintain: criticizing, say, "the Center for Effective Altruism", somehow feels more like criticizing Will MacAskill personally than criticizing "the United States" does, even though neither CEA nor the U.S. is a person.
+
+This is why I felt like I couldn't give up faith that [honest discourse _eventually_ wins](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/). Under my current strategy and consensus social norms, I could criticize Scott or Kelsey or Ozy's _ideas_ without my social life dissolving into a war of all against all, whereas if I were to give in to the temptation to flip a table and say, "Okay, now I _know_ you guys are just messing with me," then I didn't see how that led anywhere good, even if they really _were_ just messing with me.
+
+Jessica explained what she saw as the problem with this. What Ben was proposing was _creating clarity about behavioral patterns_. I was saying that I was afraid that creating such clarity is an attack on someone. But if so, then my blog was an attack on trans people. What was going on here?
+
+Socially, creating clarity about behavioral patterns _is_ construed as an attack and _can_ make things worse for someone: for example, if your livelihood is based on telling a story about you and your flunkies being the only sane truthseeking people in the world, then me demonstrating that you don't care about the truth when it's politically inconvenient for you is a threat to your marketing story and therefore a threat to your livelihood. As a result, it's easier to create clarity down power gradients than up power gradients: it was easy for me to blow the whistle on trans people's narcissistic delusions, but hard to blow the whistle on Yudkowsky's narcissistic delusions.
+
+But _selectively_ creating clarity down but not up power gradients just reinforces existing power relations—just like how selectively criticizing arguments with politically unfavorable conclusions only reinforces your current political beliefs. I shouldn't be able to get away with claiming that [calling non-exclusively-androphilic trans women delusional perverts](/2017/Mar/smart/) is okay on the grounds that that which can be destroyed by the truth should be, but that calling out Alexander and Yudkowsky would be unjustified on the grounds of starting a war or whatever. If I was being cowardly or otherwise unprincipled, I should own that instead of generating spurious justifications. Jessica was on board with a project to tear down narcissistic fantasies in general, but not on board with a project that starts by tearing down trans people's narcissistic fantasies, but then emits spurious excuses for not following that effort where it leads.
+
+Somewhat apologetically, I replied that the distinction between truthfully, publicly criticizing group identities and _named individuals_ still seemed very significant to me?—and that avoiding leaking info from private conversations seemed like an important obligation, too. I would be way more comfortable writing [a scathing blog post about the behavior of "rationalists"](/2017/Jan/im-sick-of-being-lied-to/), than about a specific person not adhering to good discourse norms in an email conversation that they had good reason to expect to be private. I thought I was consistent about this: contrast my writing to the way that some anti-trans writers name-and-shame particular individuals. (The closest I had come was [mentioning Danielle Muscato as someone who doesn't pass](/2018/Dec/untitled-metablogging-26-december-2018/#photo-of-danielle-muscato)—and even there, I admitted it was "unclassy" and done in desperation of other ways to make the point having failed.) I had to acknowledge that criticism of non-exclusively-androphilic trans women in general _implied_ criticism of Jessica, and criticism of "rationalists" in general _implied_ criticism of Yudkowsky and Alexander and me, but the extra inferential step and "fog of probability" seemed useful for making the speech act less of an attack? Was I wrong?
+
+<a id="less-precise-is-more-violent"></a>Michael said this was importantly backwards: less precise targeting is more violent. If someone said, "Michael Vassar is a terrible person", he would try to be curious, but if they don't have an argument, he would tend to worry more "for" them and less "about" them, whereas if someone said, "The Jews are terrible people", he saw that as a more serious threat to his safety. (And rationalists and trans women are exactly the sort of people that get targeted by the same people who target Jews.)
+
+-----
+
+Polishing the advanced categories argument from earlier email drafts into a solid _Less Wrong_ post didn't take that long: by 6 April 2019, I had an almost-complete draft of the new post, ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries), that I was pretty happy with.
+
+The title (note: "boundaries", plural) was a play off of ["Where to Draw the Boundary?"](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) (note: "boundary", singular), a post from Yudkowsky's [original Sequence](https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb) on the [37 ways in which words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong). In "... Boundary?", Yudkowsky asserts (without argument, as something that all educated people already know) that dolphins don't form a natural category with fish ("Once upon a time it was thought that the word 'fish' included dolphins [...] Or you could stop playing nitwit games and admit that dolphins don't belong on the fish list"). But Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) directly contradicts this, asserting that there's nothing wrong with the biblical Hebrew word _dagim_ encompassing both fish and cetaceans (dolphins and whales). So who's right, Yudkowsky (2008) or Alexander (2014)? Is there a problem with dolphins being "fish", or not?
+
+In "... Boundaries?", I unify the two positions and explain how both Yudkowsky and Alexander have a point: in high-dimensional configuration space, there's a cluster of finned water-dwelling animals in the subspace of the dimensions along which finned water-dwelling animals are similar to each other, and a cluster of mammals in the subspace of the dimensions along which mammals are similar to each other, and dolphins belong to _both_ of them. _Which_ subspace you pay attention to can legitimately depend on your values: if you don't care about predicting or controlling some particular variable, you have no reason to look for clusters along that dimension.
+
+But _given_ a subspace of interest, the _technical_ criterion of drawing category boundaries around [regions of high density in configuration space](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace) still applies. There is Law governing which uses of communication signals transmit which information, and the Law can't be brushed off with, "whatever, it's a pragmatic choice, just be nice." I demonstrate the Law with a couple of simple mathematical examples: if you redefine a codeword that originally pointed to one cluster in ℝ³, to also include another, that changes the quantitative predictions you make about an unobserved coordinate given the codeword; if an employer starts giving the title "Vice President" to line workers, that decreases the [mutual information](https://en.wikipedia.org/wiki/Mutual_information) between the job title and properties of the job.
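The job-title example can be worked in a few lines. This is my own toy illustration with made-up numbers, not the calculation from the post itself: once every line worker is also titled "Vice President", the title carries zero mutual information about the duties it used to predict.

```python
# Sketch: title inflation destroys the information content of the title.
# I(X;Y) computed in bits from observed (title, duties) pairs.
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) observations."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )

# Before inflation: the title tracks the duties perfectly.
before = [("VP", "executive")] * 5 + [("worker", "line")] * 95
# After inflation: every line worker is also titled "VP".
after = [("VP", "executive")] * 5 + [("VP", "line")] * 95

print(mutual_information(before))  # ≈ 0.29 bits
print(mutual_information(after))   # 0.0: the inflated title predicts nothing
```

Hearing "VP" in the second regime tells you nothing about whether the person sets strategy or works the line, which is exactly the sense in which the redefined word stops doing cognitive work.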
+
+(Jessica and Ben's [discussion of the job title example in relation to the _Wikipedia_ summary of Jean Baudrillard's _Simulacra and Simulation_ got published separately](http://benjaminrosshoffman.com/excerpts-from-a-larger-discussion-about-simulacra/), and ended up taking on a life of its own [in](http://benjaminrosshoffman.com/blame-games/) [future](http://benjaminrosshoffman.com/blatant-lies-best-kind/) [posts](http://benjaminrosshoffman.com/simulacra-subjectivity/), [including](https://www.lesswrong.com/posts/Z5wF8mdonsM2AuGgt/negative-feedback-and-simulacra) [a](https://www.lesswrong.com/posts/NiTW5uNtXTwBsFkd4/signalling-and-simulacra-level-3) [number](https://www.lesswrong.com/posts/tF8z9HBoBn783Cirz/simulacrum-3-as-stag-hunt-strategy) [of](https://www.lesswrong.com/tag/simulacrum-levels) [posts](https://thezvi.wordpress.com/2020/05/03/on-negative-feedback-and-simulacra/) [by](https://thezvi.wordpress.com/2020/06/15/simulacra-and-covid-19/) [other](https://thezvi.wordpress.com/2020/08/03/unifying-the-simulacra-definitions/) [authors](https://thezvi.wordpress.com/2020/09/07/the-four-children-of-the-seder-as-the-simulacra-levels/).)
+
+Sarah asked if the math wasn't a bit overkill: were the calculations really necessary to make the basic point that good definitions should be about classifying the world, rather than about what's pleasant or politically expedient to say? I thought the math was _really important_ as an appeal to principle—and [as intimidation](https://slatestarcodex.com/2014/08/10/getting-eulered/). (As it was written, [_the tenth virtue is precision!_](http://yudkowsky.net/rational/virtues/) Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.)
+
+"... Boundaries?" explains all this in the form of discourse with a hypothetical interlocutor arguing for the I-can-define-a-word-any-way-I-want position. In the hypothetical interlocutor's parts, I wove in verbatim quotes (without attribution) from Alexander ("an alternative categorization system is not an error, and borders are not objectively true or false") and Yudkowsky ("You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning", "Using language in a way _you_ dislike is not lying. The propositions you claim false [...] is not what the [...] is meant to convey, and this is known to everyone involved; it is not a secret"), and Bensinger ("doesn't unambiguously refer to the thing you're trying to point at").
+
+My thinking here was that the posse's previous email campaigns had been doomed to failure by being too closely linked to the politically-contentious object-level topic which reputable people had strong incentives not to touch with a ten-foot pole. So if I wrote this post _just_ explaining what was wrong with the claims Yudkowsky and Alexander had made about the philosophy of language, with perfectly innocent examples about dolphins and job titles, that would remove the political barrier and [leave a line of retreat](https://www.lesswrong.com/posts/3XgYbghWruBMrPTAL/leave-a-line-of-retreat) for Yudkowsky to correct the philosophy of language error. And then if someone with a threatening social-justicey aura were to say, "Wait, doesn't this contradict what you said about trans people earlier?", the reputable people could stonewall them. (Stonewall _them_ and not _me_!)
+
+Another reason someone might be reluctant to correct mistakes when pointed out, is the fear that such a policy could be abused by motivated nitpickers. It would be pretty annoying to be obligated to churn out an endless stream of trivial corrections by someone motivated to comb through your entire portfolio and point out every little thing you did imperfectly, ever.
+
+I wondered if maybe, in Scott or Eliezer's mental universe, I was a blameworthy (or pitiably mentally ill) nitpicker for flipping out over a blog post from 2014 (!) and some Tweets (!!) from November. Like, really? I, too, had probably said things that were wrong _five years ago_.
+
+But, well, I thought I had made a pretty convincing case that a lot of people were making a correctable and important rationality mistake, such that the cost of a correction (about the philosophy of language specifically, not any possible implications for gender politics) would actually be justified here. As Ben pointed out, if someone had put _this much_ effort into pointing out an error _I_ had made four months or five years ago and making careful arguments for why it was important to get the right answer, I probably _would_ put some serious thought into it.
+
+I could see a case that it was unfair of me to include political subtext and then only expect people to engage with the politically-clean text, but if we weren't going to get into full-on gender-politics on _Less Wrong_ (which seemed like a bad idea), but gender politics _was_ motivating an epistemology error, I wasn't sure what else I was supposed to do! I was pretty constrained here!
+
+(I did regret having accidentally "poisoned the well" the previous month by impulsively sharing the previous year's ["Blegg Mode"](/2018/Feb/blegg-mode/) [as a _Less Wrong_ linkpost](https://www.lesswrong.com/posts/GEJzPwY8JedcNX2qz/blegg-mode). "Blegg Mode" had originally been drafted as part of "... To Make Predictions" before getting spun off as a separate post. Frustrated in March at our failing email campaign, I thought it was politically "clean" enough to belatedly share, but it proved to be insufficiently [deniably allegorical](/tag/deniably-allegorical/), as evidenced by the 60-plus-entry trainwreck of a comments section. It's plausible that some portion of the _Less Wrong_ audience would have been more receptive to "... Boundaries?" as not-politically-threatening philosophy, if they hadn't been alerted to the political context by the comments on the "Blegg Mode" linkpost.)
+
+On 13 April 2019, I pulled the trigger on publishing "... Boundaries?", and wrote to Yudkowsky again, a fourth time (!), asking if he could _either_ publicly endorse the post, _or_ publicly comment on what he thought the post got right and what he thought it got wrong; and, that if engaging on this level was too expensive for him in terms of spoons, if there was any action I could take to somehow make it less expensive? The reason I thought this was important, I explained, was that if rationalists in [good standing](https://srconstantin.github.io/2018/12/24/contrite-strategies-and-the-need-for-standards/) find themselves in a persistent disagreement _about rationality itself_—in this case, my disagreement with Scott Alexander and others about the cognitive function of categories—that seemed like a major concern for [our common interest](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes), something we should be very eager to _definitively settle in public_ (or at least _clarify_ the current state of the disagreement). In the absence of an established "rationality court of last resort", I feared the closest thing we had was an appeal to Eliezer Yudkowsky's personal judgement. Despite the context in which the dispute arose, _this wasn't a political issue_. The post I was asking for his comment on was _just_ about the [_mathematical laws_](https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can-exempt-you-from-rationality-s-laws) governing how to talk about, _e.g._, dolphins. We had _nothing to be afraid of_ here. (Subject: "movement to clarity; or, rationality court filing").
+
+I got some pushback from Ben and Jessica about claiming that this wasn't "political". What I meant by that was to emphasize (again) that I didn't expect Yudkowsky or "the community" to take a public stance _on gender politics_; I was trying to get "us" to take a stance in favor of the kind of _epistemology_ that we were doing in 2008. It turns out that epistemology has implications for gender politics which are unsafe, but that's _more inferential steps_, and ... I guess I just didn't expect the sort of people who would punish good epistemology to follow the inferential steps?
+
+Anyway, again without revealing any content from the other side of any private conversations that may or may not have occurred, we did not get any public engagement from Yudkowsky.
+
+It seemed that the Category War was over, and we lost.
+
+We _lost?!_ How could we _lose?!_ The philosophy here was _very clear-cut_. This _shouldn't_ be hard or expensive or difficult to clear up. I could believe that Alexander was "honestly" confused, but Yudkowsky ...!?
+
+I could see how, under ordinary circumstances, asking Yudkowsky to weigh in on my post would be inappropriately demanding of a Very Important Person's time, given that an ordinary programmer such as me was surely as a mere _worm_ in the presence of the great Eliezer Yudkowsky. (I would have humbly given up much sooner if I hadn't gotten social proof from Michael and Ben and Sarah and secret posse member and Jessica.)
+
+But the only reason for my post to exist was because it would be even _more_ inappropriately demanding to ask for a clarification in the original gender-political context. The game theorist Thomas Schelling once wrote about the use of clever excuses to help one's negotiating counterparty release themselves from a prior commitment: "One must seek [...] a rationalization by which to deny oneself too great a reward from the opponent's concession, otherwise the concession will not be made."[^schelling] This is sort of what I was trying to do when soliciting—begging for—engagement-or-endorsement of "... Boundaries?" By making the post be about dolphins, I was trying to deny myself too great of a reward _on the gender-politics front_. I _don't_ think it was inappropriately demanding to expect "us" (him) to _be correct about the cognitive function of categorization_. (If not, why pretend to have a "rationality community" at all?) I was _trying_ to be as accommodating as I could, short of just letting him (us?) be wrong.
+
+[^schelling]: _Strategy of Conflict_, Ch. 2, "An Essay on Bargaining"
+
+Maybe that's not how politics works? Could it be that, somehow, the mob-punishment mechanisms that weren't smart enough to understand the concept of "bad argument (categories are arbitrary) for a true conclusion (trans people are OK)", _were_ smart enough to connect the dots between my broader agenda and my (correct) abstract philosophy argument, such that VIPs didn't think they could endorse my _correct_ philosophy argument, without it being _construed as_ an endorsement of me and my detailed heresies?
+
+Jessica mentioned talking with someone about me writing to Yudkowsky and Alexander requesting that they clarify the category boundary thing. This person described having a sense that I should have known that wouldn't work—because of the politics involved, not because I wasn't right. I thought Jessica's takeaway was very poignant:
+
+> Those who are savvy in high-corruption equilibria maintain the delusion that high corruption is common knowledge, to justify expropriating those who naively don't play along, by narratizing them as already knowing and therefore intentionally attacking people, rather than being lied to and confused.
+
+_Should_ I have known that it wouldn't work? _Didn't_ I "already know", at some level?
+
+I guess in retrospect, the outcome does seem kind of "obvious"—that it should have been possible to predict in advance, and to make the corresponding update without so much fuss and wasting so many people's time.
+
+But ... it's only "obvious" if you _take as a given_ that Yudkowsky is playing a savvy Kolmogorov complicity strategy like any other public intellectual in the current year.[^any-other-public-intellectual]
+
+[^any-other-public-intellectual]: And really, that's the _charitable_ interpretation. The extent to which I still have trouble entertaining the idea that Yudkowsky _actually_ drank the gender ideology Kool-Aid, rather than merely having pretended to, is a testament to the thoroughness of my indoctrination.
+
+Maybe this seems banal if you haven't spent your entire adult life in his robot cult? Coming from _anyone else in the world_, I wouldn't have had a problem with the "hill of validity in defense of meaning" thread—I would have respected it as a solidly above-average philosophy performance, before [setting the bozo bit](https://en.wikipedia.org/wiki/Bozo_bit#Dismissing_a_person_as_not_worth_listening_to) on the author and getting on with my day. But since I _did_ spend my entire adult life in Yudkowsky's robot cult, trusting him the way a Catholic trusts the Pope, I _had_ to assume that it was an "honest mistake" in his rationality lessons, and that honest mistakes could be honestly corrected if someone put in the effort to explain the problem. The idea that Eliezer Yudkowsky was going to behave just as badly as any other public intellectual in the current year was not really in my hypothesis space. It took some _very large_ likelihood ratios to beat it into my head that the thing that was obviously happening, was actually happening.
+
+Ben shared the account of our posse's email campaign with someone, who commented that I had "sacrificed all hope of success in favor of maintaining his own sanity by CC'ing you guys." That is, if I had been brave enough to confront Yudkowsky by myself, _maybe_ there was some hope of him seeing that the game he was playing was wrong. But because I was so cowardly as to need social proof (because I believed that an ordinary programmer such as me was as a mere worm in the presence of the great Eliezer Yudkowsky), it must have just looked to him like an illegible social plot originating from Michael.
+
+One might wonder why this was such a big deal to us. Okay, so Yudkowsky had prevaricated about his own philosophy of language for transparently political reasons, and couldn't be moved to clarify in public even after me and my posse spent an enormous amount of effort trying to explain the problem. So what? Aren't people wrong on the internet all the time?
+
+Ben explained: Yudkowsky had set in motion a marketing machine (the "rationalist community") that was continuing to raise funds and demand work from people for below-market rates based on the claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he, almost uniquely, was not. "Work for me or the world ends badly," basically. If the claim was _true_, it was important to make, and to actually extract that labor.
+
+But we had just falsified to our satisfaction the claim that Yudkowsky was currently sane in the relevant way (which was an _extremely high_ standard, and not a special flaw of Yudkowsky in the current environment). If Yudkowsky couldn't be bothered to live up to his own stated standards or withdraw his validation from the machine he built after we had _tried_ to talk to him privately, then we had a right to talk in public about what we thought was going on.
+
+This wasn't about direct benefit _vs._ harm. This was about what, substantively, the machine and its operators were doing. They claimed to be cultivating an epistemically rational community, while in fact building an army of loyalists.
+
+Ben compared the whole set-up to that of Eliza the spambot therapist in my story ["Blame Me for Trying"](/2018/Jan/blame-me-for-trying/): regardless of the _initial intent_, scrupulous rationalists were paying rent to something claiming moral authority, which had no concrete specific plan to do anything other than run out the clock, maintaining a facsimile of dialogue in ways well-calibrated to continue to generate revenue. Minds like mine wouldn't survive long-run in this ecosystem. If we wanted minds that do "naïve" inquiry instead of playing savvy Kolmogorov games to survive, we needed an interior that justified that level of trust.
+
+-------
+
+Given that the "rationalists" were fake and that we needed something better, there remained the question of what to do about that, and how to relate to the old thing, and the operators of the marketing machine for the old thing.
+
+_I_ had been hyperfocused on prosecuting my Category War, but the reason Michael and Ben and Jessica were willing to help me out on that, was not because they particularly cared about the gender and categories example, but because it seemed like a manifestation of a _more general_ problem of epistemic rot in "the community".
+
+Ben had [previously](http://benjaminrosshoffman.com/givewell-and-partial-funding/) [written](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/) a lot [about](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/) [problems](http://benjaminrosshoffman.com/against-responsibility/) [with](http://benjaminrosshoffman.com/against-neglectedness/) Effective Altruism. Jessica had had a bad time at MIRI, as she had told me back in March, and would [later](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam) [write](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe) [about](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards). To what extent were my thing, and Ben's thing, and Jessica's thing, manifestations of "the same" underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit?
+
+I believed that there _was_ a real problem, but didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But _culture_ is the sum of lots and lots of little micro-actions by lots and lots of people. If your _entire culture_ has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or that they don't think anything has changed, or that they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!"—any ideologue or religious person would agree with _that_. It's not feasible to litigate every petty epistemic crime in something someone said, and if you tried, someone who thought the culture was basically on track could accuse you of cherry-picking. If "culture" is a real thing at all—and it certainly seems to be—we are condemned to grasp it unclearly, relying on the brain's pattern-matching faculties to sum over thousands of little micro-actions as a [_gestalt_](https://en.wiktionary.org/wiki/gestalt), rather than having the kind of robust, precise representation a well-designed AI could compute plans with.
+
+Ben called the _gestalt_ he saw the Blight, after the rogue superintelligence in _A Fire Upon the Deep_: the problem wasn't that people were getting dumber; it's that there was locally coherent coordination away from clarity and truth and towards coalition-building, which was validated by the official narrative in ways that gave it a huge tactical advantage; people were increasingly making decisions that were better explained by their political incentives rather than acting on coherent beliefs about the world—using and construing claims about facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts were admissible moves in the game.
+
+When I asked him for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of [MIRI executive director Nate Soares posting that he was "excited to see OpenAI joining the space"](https://intelligence.org/2015/12/11/openai-and-other-news/), despite the fact that [_no one_ who had been following the AI risk discourse](https://slatestarcodex.com/2015/12/17/should-ai-be-open/) [thought that OpenAI as originally announced was a good idea](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/). Nate had privately clarified to Ben that the word "excited" wasn't necessarily meant positively, and in this case meant something more like "terrified."
+
+This seemed to me like the sort of thing where a particularly principled (naïve?) person might say, "That's _lying for political reasons!_ That's _contrary to the moral law!_" and most ordinary grown-ups would say, "Why are you so upset about this? That sort of strategic phrasing in press releases is just how the world works, and things could not possibly be otherwise."
+
+I thought explaining the Blight to an ordinary grown-up was going to need _either_ lots of specific examples that were way more egregious than this (and more egregious than the examples in ["EA Has a Lying Problem"](https://srconstantin.github.io/2017/01/17/ea-has-a-lying-problem.html) or ["Effective Altruism Is Self-Recommending"](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/)), or somehow convincing the ordinary grown-up why "just how the world works" isn't good enough, and why we needed one goddamned place in the entire goddamned world (perhaps a private place) with _unusually high standards_.
+
+The schism introduced new pressures on my social life. On 20 April 2019, I told Michael that I still wanted to be friends with people on both sides of the factional schism (in the frame where recent events were construed as a factional schism), even though I was on this side. Michael said that we should unambiguously regard Anna and Eliezer as criminals or enemy combatants (!!), that could claim no rights in regards to me or him.
+
+I don't think I "got" the framing at this time. War metaphors sounded Scary and Mean: I didn't want to shoot my friends! But the point of the analogy (which Michael explained, but I wasn't ready to hear until I did a few more weeks of emotional processing) was specifically that soliders on the other side of a war _aren't_ particularly morally blameworthy as individuals:[^soldiers] their actions are being directed by the Power they're embedded in.
+
+[^soldiers]: At least, not blameworthy _in the same way_ as someone who committed the same violence as an individual.
+
+I wrote to Anna:
+
+> To: Anna Salamon <[redacted]>
+> Date: 20 April 2019 11:08 _p.m._
+> Subject: Re: the end of the Category War (we lost?!?!?!)
+>
+> I was _just_ trying to publicly settle a _very straightforward_ philosophy thing that seemed _really solid_ to me
+>
+> if, in the process, I accidentally ended up being an unusually useful pawn in Michael Vassar's deranged four-dimensional hyperchess political scheming
+>
+> that's ... _arguably_ not my fault
+
+-----
+
+I may have subconsciously pulled off an interesting political thing. In my final email to Yudkowsky on 20 April 2019 (Subject: "closing thoughts from me"), I had written—
+
+> If we can't even get a public consensus from our _de facto_ leadership on something _so basic_ as "concepts need to carve reality at the joints in order to make probabilistic predictions about reality", then, in my view, there's _no point in pretending to have a rationalist community_, and I need to leave and go find something else to do (perhaps whatever Michael's newest scheme turns out to be). I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?
+
+And as it happened, on 4 May 2019, Yudkowsky [re-Tweeted Colin Wright on the "univariate fallacy"](https://twitter.com/ESYudkowsky/status/1124751630937681922)—the point that group differences aren't a matter of any single variable—which was _sort of_ like the clarification I had been asking for. (Empirically, it made me feel a lot less personally aggrieved.) Was I wrong to interpret this as another "concession" to me? (Again, notwithstanding that the whole mindset of extracting "concessions" was corrupt and not what our posse was trying to do.)
+
+Separately, I visited some friends' house on 30 April 2019 saying, essentially (and sincerely), "[Oh man oh jeez](https://www.youtube.com/watch?v=NivwAQ8sUYQ), Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're _the only ones backing me up_ on this _incredibly basic philosophy thing_ and I don't feel like I have anywhere else to _go_." The ensuing group conversation made some progress, but was mostly pretty horrifying.
+
+In an adorable twist, my friends' two-year-old son was reportedly saying the next day that Kelsey doesn't like his daddy, which was confusing until it was figured out he had heard Kelsey talking about why she doesn't like Michael _Vassar_.
+
+And as it happened, on 7 May 2019, Kelsey wrote [a Facebook comment displaying evidence of understanding my point](https://www.facebook.com/julia.galef/posts/pfbid0QjdD8kWAZJMiczeLdMioqmPkRhewcmGtQpXRBu2ruXq8SkKvw5yvvSH2cWVDghWRl?comment_id=10104430041947222&reply_comment_id=10104430059182682).
+
+These two datapoints led me to a psychological hypothesis (which was maybe "obvious", but I hadn't thought about it before): when people see someone wavering between their coalition and a rival coalition, they're motivated to offer a few concessions to keep the wavering person on their side. Kelsey could _afford_ (_pace_ [Upton Sinclair](https://www.goodreads.com/quotes/21810-it-is-difficult-to-get-a-man-to-understand-something)) to not understand the thing about sex being a natural category ("I don't think 'people who'd get surgery to have the ideal female body' cuts anything at the joints"!!) when it was just me freaking out alone, but "got it" almost as soon as I could credibly threaten to _walk_ (defect to a coalition of people she dislikes) ... and maybe my "closing thoughts" email had a similar effect on Yudkowsky (assuming he otherwise wouldn't have spontaneously tweeted something about the univariate fallacy two weeks later)?? This probably wouldn't work if you repeated it (or tried to do it consciously)?
+
+----
+
+I started drafting a "why I've been upset for five months and have lost faith in the so-called 'rationalist' community" memoir-post. Ben said that the target audience to aim for was people like I was a few years ago, who hadn't yet had the experiences I had—so they wouldn't have to freak out to the point of being imprisoned and demand help from community leaders and not get it; they could just learn from me. That is, the actual sympathetic-but-naïve people could learn. Not the people messing with me.
+
+I didn't know how to continue it. I was too psychologically constrained; I didn't know how to tell the Whole Dumb Story without (as I perceived it) escalating personal conflicts or leaking info from private conversations.
+
+I decided to take a break from the religious civil war [and from this blog](/2019/May/hiatus/), and [declared May 2019 as Math and Wellness Month](http://zackmdavis.net/blog/2019/05/may-is-math-and-wellness-month/).
+
+My dayjob performance had been suffering terribly for months. The psychology of the workplace is ... subtle. There's a phenomenon where some people are _way_ more productive than others and everyone knows it, but no one is cruel enough [to make it _common_ knowledge](https://slatestarcodex.com/2015/10/15/it-was-you-who-made-my-blue-eyes-blue/), which is awkward for people who simultaneously benefit from the culture of common-knowledge-prevention allowing them to collect the status and money rents of being a $150K/yr software engineer without actually [performing at that level](http://zackmdavis.net/blog/2013/12/fortune/), while also having [read enough Ayn Rand as a teenager](/2017/Sep/neither-as-plea-nor-as-despair/) to be ideologically opposed to subsisting on unjustly-acquired rents rather than value creation. The "everyone knows I feel guilty about underperforming, so they don't punish me because I'm already doing enough internalized domination to punish myself" dynamic would be unsustainable if it were to evolve into a loop of "feeling guilt _in exchange for_ not doing work" rather than the intended "feeling guilt in order to successfully incentivize work". I didn't think they would actually fire me, but I was worried that they _should_. I asked my boss to temporarily take on some easier tasks, that I could make steady progress on even while being psychologically impaired from a religious war. (We had a lot of LaTeX templating of insurance policy amendments that needed to get done.) If I was going to be psychologically impaired _anyway_, it was better to be upfront about how I could best serve the company given that impairment, rather than hoping that the boss wouldn't notice.
+
+My "intent" to take a break from the religious war didn't take. I met with Anna on the UC Berkeley campus, and read her excerpts from some of Ben's and Jessica's emails. (She had not acquiesced to my request for a comment on "... Boundaries?", including in the form of two paper postcards that I stayed up until 2 _a.m._ on 14 April 2019 writing; I had figured that spamming people with hysterical and somewhat demanding physical postcards was more polite (and funnier) than my usual habit of spamming people with hysterical and somewhat demanding emails.) While we (my posse) were aghast at Yudkowsky's behavior, she was aghast at ours: reaching out to try to have a conversation with Yudkowsky, and then concluding he was a fraud because we weren't satisfied with the outcome was like hiding soldiers in an ambulance, introducing a threat against Yudkowsky in context where he had a right to be safe.
+
+I complained that I had _actually believed_ our own marketing material about the "rationalists" remaking the world by wielding a hidden Bayesian structure of Science and Reason that applies [outside the laboratory](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory). Was that all a lie? Were we not trying to do the thing anymore? Anna was dismissive: she thought that the idea I had gotten about what "the thing" was, was never actually part of the original vision. She kept repeating that she had _tried_ to warn me in previous years that public reason didn't work, and I didn't listen. (Back in the late 'aughts, she had often recommended Paul Graham's essay ["What You Can't Say"](http://paulgraham.com/say.html) to people, summarizing Graham's moral that you should figure out the things you can't say in your culture, and then don't say them.)
+
+It was true that she had tried to warn me for years, and (not yet having gotten over [my teenage ideological fever dream](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism)), I hadn't known how to listen. But this seemed really fundamentally unresponsive to how _I_ kept repeating that I only expected consensus on the basic philosophy-of-language stuff (not my object-level special interest). Why was it so unrealistic to imagine that the actually-smart people could [enforce standards](https://srconstantin.github.io/2018/12/24/contrite-strategies-and-the-need-for-standards/) in our own tiny little bubble of the world?
+
+My frustration bubbled out into follow-up emails:
+
+> To: Anna Salamon <[redacted]>
+> Date: 7 May 2019 12:53 _p.m._
+> Subject: Re: works cited
+>
+> I'm also still pretty _angry_ about how your response to my "I believed our own propaganda" complaint is (my possibly-unfair paraphrase) "what you call 'propaganda' was all in your head; we were never _actually_ going to do the unrestricted truthseeking thing when it was politically inconvenient." But ... no! **I _didn't_ just make up the propaganda! The hyperlinks still work! I didn't imagine them! They were real! You can still click on them:** ["A Sense That More Is Possible"](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible), ["Raising the Sanity Waterline"](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline)
+>
+> Can you please _acknowledge that I didn't just make this up?_ Happy to pay you $200 for a reply to this email within the next 72 hours
+
+<p></p>
+
+> To: Anna Salamon <[redacted]>
+> Date: 7 May 2019 3:35 _p.m._
+> Subject: Re: works cited
+>
+> Or see ["A Fable of Science and Politics"](https://www.lesswrong.com/posts/6hfGNLf4Hg5DXqJCF/a-fable-of-science-and-politics), where the editorial tone is pretty clear that we're supposed to be like Daria or Ferris, not Charles.
+
+(This being a parable about an underground Society polarized into factions with different beliefs about the color of the unseen sky, and how different types of people react to the discovery of a passageway to the overworld which reveals that the sky is blue. Daria (formerly of the Green faction) steels herself to accept the unpleasant truth. Ferris reacts with delighted curiosity. Charles, thinking only of preserving the existing social order and unconcerned with what the naïve would call "facts", _blocks off the passageway_.)
+
+> To: Anna Salamon <[redacted]>
+> Date: 7 May 2019 8:26 _p.m._
+> Subject: Re: works cited
+>
+> But, it's kind of bad that I'm thirty-one years old and haven't figured out how to be less emotionally needy/demanding; feeling a little bit less frame-locked now; let's talk in a few months (but offer in email-before-last is still open because rescinding it would be dishonorable)
+
+Anna said she didn't want to receive monetary offers from me anymore; previously, she had regarded my custom of throwing money at people to get what I wanted as good-faith libertarianism between consenting adults, but now she was afraid that if she accepted, it would be portrayed in some future Ben Hoffman essay as an instance of her _using_ me. She agreed that someone could have gotten the ideals I had gotten out of "A Sense That More Is Possible", "Raising the Sanity Waterline", _&c._, but there was also evidence from that time pointing the other way (_e.g._, ["Politics Is the Mind-Killer"](https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer)), that it shouldn't be surprising if people steered clear of controversy.
+
+I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that _whether or not I should cut my dick off_ would _become_ a political issue. That was _new evidence_ about whether the original vision was wise! I wasn't trying to do politics with my idiosyncratic special interest; I was trying to _think seriously_ about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit a trivial epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, but couldn't correct the mistake, then the 2019-era "rationalists" were _worse than useless_ to me personally. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I was part of a reference class that includes AI researchers).
+
+Fundamentally, I was skeptical that you _could_ do consistently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky described in ["Entangled Truths, Contagious Lies"](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) and ["Dark Side Epistemology"](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology): the need to lie about lying and cover up cover-ups propagates recursively. Anna in particular was unusually skillful at thinking things without saying them; I thought most people facing similar speech restrictions just get worse at thinking (plausibly[^plausibly] including Yudkowsky), and the problem gets worse as the group effort scales. (It's easier to recommend ["What You Can't Say"](http://www.paulgraham.com/say.html) to your housemates than to put it on a canonical reading list, for obvious reasons.) You _can't_ optimize your group's culture for not-talking-about-atheism without also optimizing against understanding [Occam's razor](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor); you _can't_ optimize for not questioning gender self-identity without also optimizing against understanding the [37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).
+
+[^plausibly]: Today I would say _obviously_, but at this point, I was still deep enough in my hero-worship that I wrote "plausibly".
+
+Despite Math and Wellness Month and my "intent" to take a break from the religious civil war, I kept reading _Less Wrong_ during May 2019, and ended up scoring a couple of victories in the civil war (at some cost to Wellness).
+
+MIRI researcher Scott Garrabrant wrote a post about how ["Yes Requires the Possibility of No"](https://www.lesswrong.com/posts/G5TwJ9BGxcgh5DsmQ/yes-requires-the-possibility-of-no). Information-theoretically, a signal sent with probability one transmits no information: you can only learn something from hearing a "Yes" if there was some chance that the answer could have been "No". I saw an analogy to my philosophy-of-language thesis, and commented about it: if you want to believe that _x_ belongs to category _C_, you might try redefining _C_ in order to make the question "Is _x_ a _C_?" come out "Yes", but you can only do so at the expense of making _C_ less useful. Meaningful category-membership (Yes) requires the possibility of non-membership (No).
+
+[TODO: explain scuffle on "Yes Requires the Possibility"—
+
+ * Vanessa comment on hobbyhorses and feeling attacked
+ * my reply about philosophy got politicized, and MDL/atheism analogy
+ * Ben vs. Said on political speech and meta-attacks; Goldenberg on feelings
+ * 139-comment trainwreck got so bad, the mods manually moved the comments into their own thread https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019
+ * based on the karma scores and what was said, this went pretty well for me and I count it as a victory
+
+]
+
+On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to "... Not Man for the Categories" as one of Scott Alexander's best essays. I argued that it would be better to cite _almost literally_ any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: _either_ Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, _or_ one had no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way. The mod [was persuaded on reflection](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for?commentId=oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory."
+
+[TODO:
+"victories" weren't comforting when I resented this becoming a political slapfight at all—a lot of the objections in the Vanessa thread were utterly insane
+I wrote to Anna and Steven Kaas (who I was trying to "recruit" onto our side of the civil war) ]
+
+In "What You Can't Say", Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what _is_ one's real work. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except for this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason, should probably be doing _more_ fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improving our collective ability to reason", in a way that they don't hurt the mission of "make a lot of money writing software."
+
+[TODO: I don't know if you caught the shitshow on Less Wrong, but isn't it terrifying that the person who objected was a goddamned _MIRI research associate_ ... not to demonize Vanessa because I was just as bad (if not worse) in 2008 (/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard#hair-trigger-antisexism), but in 2008 we had a culture that could _beat it out of me_]
+
+[TODO: Steven's objection:
+> the Earth's gravitational field directly hurts NASA's mission and doesn't hurt Paul Graham's mission, but NASA shouldn't spend any more effort on reducing the Earth's gravitational field than Paul Graham.
+
+I agreed that tractability needs to be addressed, but ...
+]
+
+I felt like—we were in a coal-mine, and my favorite one of our canaries just died, and I was freaking out about this, and representatives of the Caliphate (Yudkowsky, Alexander, Anna, Steven) were like, Sorry, I know you were really attached to that canary, but it's just a bird; you'll get over it; it's not really that important to the coal-mining mission.
+
+And I was like, I agree that I was unreasonably emotionally attached to that particular bird, which is the direct cause of why I-in-particular am freaking out, but that's not why I expect _you_ to care. The problem is not the dead bird; the problem is what the bird is _evidence_ of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claim to have spotted their own dead canaries. I feel like the old-timer Rationality Elders should be able to get on the same page about the canary-count issue?
+
+Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/), and [some probability/information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [actually turned out to be super-relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break.
+
+[TODO:
+ * I had posted a linkpost to "No, it's not The Incentives—it's You", which generated a lot of discussion, and Jessica (17 June) identified Ray's comments as the last straw.
+
+> LessWrong.com is a place where, if the value of truth conflicts with the value of protecting elites' feelings and covering their asses, the second value will win.
+>
+> Trying to get LessWrong.com to adopt high-integrity norms is going to fail, hard, without a _lot_ of conflict. (Enforcing high-integrity norms is like violence; if it doesn't work, you're not doing enough of it).
+
+ * posting on Less Wrong was harm-reduction; the only way to get people to stick up for truth would be to convert them to _a whole new worldview_; Jessica proposed the idea of a new discussion forum
+ * Ben thought that trying to discuss with the other mods would be a good intermediate step, after we clarified to ourselves what was going on; talking to other mods might be "good practice in the same way that the Eliezer initiative was good practice"; Ben is less optimistic about harm reduction; "Drowning Children Are Rare" was barely net-upvoted, and participating was endorsing the karma and curation systems
+ * David Xu's comment on "The Incentives" seems important?
+ * secret posse member: Ray's attitude on "Is being good costly?"
+ * Jessica: scorched-earth campaign should mostly be in meatspace social reality
+ * my comment on emotive conjugation (https://www.lesswrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare#GaoyhEbzPJvv6sfZX)
+
+> I'm also not sure if I'm sufficiently clued in to what Ben and Jessica are modeling as Blight, a coherent problem, as opposed to two or six individual incidents that seem really egregious in a vaguely similar way that seems like it would have been less likely in 2009??
+
+ * Vassar: "Literally nothing Ben is doing is as aggressive as the basic 101 pitch for EA."
+ * Ben: we should be creating clarity about "position X is not a strawman within the group", rather than trying to scapegoat individuals
+ * my scuffle with Ruby on "Causal vs. Social Reality" (my previous interaction with Ruby had been on the LW FAQ; maybe he couldn't let me "win" again so quickly?)
+ * it gets worse: https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality#NbrPdyBFPi4hj5zQW
+ * Ben's comment: "Wow, he's really overtly arguing that people should lie to him to protect his feelings."
+ * Jessica: "tone arguments are always about privileged people protecting their feelings, and are thus in bad faith. Therefore, engaging with a tone argument as if it's in good faith is a fool's game, like playing chess with a pigeon. Either don't engage, or seek to embarrass them intentionally."
+ * there's no point at being mad at MOPs
+ * me (1 Jul): I'm a _little bit_ mad, because I specialize in cognitive and discourse strategies that are _extremely susceptible_ to being trolled like this
+ * me to "Wilhelm" 1 Jul: "I'd rather not get into fights on LW, but at least I'm 2-0-1"
+ * "collaborative truth seeking" but (as Michael pointed out) politeness looks nothing like Aumann agreement
+ * 2 Jul: Jessica is surprised by how well "Self-consciousness wants to make everything about itself" worked; theory about people not wanting to be held to standards that others aren't being held to
+ * Michael: Jessica's example made it clear she was on the side of social justice
+ * secret posse member: level of social-justice talk makes me not want to interact with this post in any way
+]
+
+[TODO: https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/]
+
+[TODO: "AI Timelines Scam"
+ * I still sympathize with the "mainstream" pushback against the scam/fraud/&c. language being used to include Elephant-in-the-Brain-like distortions
+ * Ben: "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you? Investigations of financial fraud don't inquire as to the conscious motives of the perp."
+ * 11 Jul: I think the law does count _mens rea_ as a thing: we do discriminate between vehicular manslaughter and first-degree murder, because traffic accidents are less disincentivizable than offing one's enemies
+ * call with Michael about GiveWell vs. the Pope
+]
+
+[TODO: secret thread with Ruby; "uh, guys??" to Steven and Anna; people say "Yes, of course criticism and truthseeking is important; I just think that tact is important, too," only to go on and dismiss any _particular_ criticism as insufficiently tactful.]
+
+[TODO: "progress towards discussing the real thing"
+ * Jessica acks Ray's point of "why are you using court language if you don't intend to blame/punish"
+ * Michael 20 Jul: court language is our way of saying non-engagement isn't an option
+ * Michael: we need to get better at using SJW blamey language
+ * secret posse member: that's a you-have-become-the-abyss terrifying suggestion
+ * Ben thinks SJW blame is obviously good
+]
+
+[TODO: epistemic defense meeting;
+ * I ended up crying at one point and left the room for a while
+ * Jessica's summary: "Zack was a helpful emotionally expressive and articulate victim. It seemed like there was consensus that "yeah, it would be better if people like Zack could be warned somehow that LW isn't doing the general sanity-maximization thing anymore"."
+ * Vaniver admitting LW is more of a recruiting funnel for MIRI
+ * I needed to exhaust all possible avenues of appeal before it became real to me; the first morning where "rationalists ... them" felt more natural than "rationalists ... us"
+]
+
+[TODO: Michael Vassar and the theory of optimal gossip; make sure to include the part about Michael threatening to sue]
+
+[TODO: State of Steven]
+
+I still wanted to finish the memoir-post mourning the "rationalists", but I still felt psychologically constrained; I was still bound by internal silencing-chains. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise the "rationalists" collectively, and—more philosophy-of-language blogging!
+
+In August 2019's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).)
+
+In September 2019's "Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists" [TODO: ... I was surprised by how well this did (high karma, later included in the best-of-2019 collection); Ben and Jessica had discouraged me from bothering]
+
+In October 2019's "Algorithms of Deception!", I explained [TODO: ...]
+
+Also in October 2019, in "Maybe Lying Doesn't Exist" [TODO: ... I was _furious_ at "Against Lie Inflation"—oh, so _now_ you agree that making language less useful is a problem?! But then I realized Scott actually was being consistent in his own frame: he's counting "everyone is angrier" (because of more frequent lying-accusations) as a cost; but, if everyone _is_ lying, maybe they should be angry!]
+
+------
+
+I continued to take note of signs of contemporary Yudkowsky visibly not being the same author who wrote the Sequences. In August 2019, [he Tweeted](https://twitter.com/ESYudkowsky/status/1164241431629721600):
+
+> I am actively hostile to neoreaction and the alt-right, routinely block such people from commenting on my Twitter feed, and make it clear that I do not welcome support from those quarters. Anyone insinuating otherwise is uninformed, or deceptive.
+
+[I pointed out that](https://twitter.com/zackmdavis/status/1164259164819845120) the people who smear him as a right-wing Bad Guy do so _in order to_ extract these kinds of statements of political alignment as concessions; his own timeless decision theory would seem to recommend ignoring them rather than paying even this small [Danegeld](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/).
+
+When I emailed the posse about it begging for Likes (Subject: "can't leave well enough alone"), Jessica said she didn't get my point. If people are falsely accusing you of something (in this case, of being a right-wing Bad Guy), isn't it helpful to point out that the accusation is actually false? It seemed like I was advocating for self-censorship on the grounds that speaking up helps the false accusers. But it also helps bystanders (by correcting the misapprehension), and hurts the false accusers (by demonstrating to bystanders that the accusers are making things up). By linking to ["Kolmogorov Complicity"](http://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) in my replies, I seemed to be insinuating that Yudkowsky was under some sort of duress, but this wasn't spelled out: if Yudkowsky would face social punishment for advancing right-wing opinions, did that mean he was under such duress that saying anything at all would be helping the oppressors?
+
+The paragraph from "Kolmogorov Complicity" that I was thinking of was (bolding mine):
+
+> Some other beliefs will be found to correlate heavily with lightning-heresy. Maybe atheists are more often lightning-heretics; maybe believers in global warming are too. The enemies of these groups will have a new cudgel to beat them with, "If you believers in global warming are so smart and scientific, how come so many of you believe in lightning, huh?" **Even the savvy Kolmogorovs within the global warming community will be forced to admit that their theory just seems to attract uniquely crappy people. It won't be very convincing.** Any position correlated with being truth-seeking and intelligent will be always on the retreat, having to forever apologize that so many members of their movement screw up the lightning question so badly.
+
+I perceived a pattern where people who are in trouble with the orthodoxy feel an incentive to buy their own safety by denouncing _other_ heretics: not just disagreeing with the other heretics _because those other heresies are in fact mistaken_, which would be right and proper Discourse, but denouncing them ("actively hostile to") as a way of paying Danegeld.
+
+Suppose there are five true heresies, but anyone who's on the record believing more than one gets burned as a witch. Then it's impossible to have a unified rationalist community, because people who want to talk about one heresy can't let themselves be seen in the company of people who believe another. That's why Scott Alexander couldn't get the philosophy-of-categorization right in full generality (even though he'd [written](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world) [exhaustively](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/) about the correct way, and he and I have a common enemy in the social-justice egregore): _he couldn't afford to_. He'd already [spent his Overton budget on anti-feminism](https://slatestarcodex.com/2015/01/01/untitled/).
+
+Scott (and Yudkowsky and Anna and the rest of the Caliphate) seemed to accept this as an inevitable background fact of existence, like the weather. But I saw a Schelling point off in the distance where us witches stick together for Free Speech, and it was _awfully_ tempting to try to jump there. (Of course, it would be _better_ if there was a way to organize just the good witches, and exclude all the Actually Bad witches, but the [Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) on witch Badness made that hard to organize without falling back to the one-heresy-per-thinker equilibrium.)
+
+Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" _facts_.) I agreed that conflating political positions with facts would be bad (and that it would be bad if I were doing that without "intending" to). I wasn't interested in defending the "alt-right" (whatever that means) broadly. But I had _learned stuff_ from reading far-right authors (most notably Moldbug), and from talking with my very smart neoreactionary (and former _Less Wrong_-er) friend. I was starting to appreciate [what Michael had said about "Less precise is more violent" back in April](#less-precise-is-more-violent) (when I was talking about criticizing "rationalists").
+
+Jessica asked if my opinion would change depending on whether Yudkowsky thought neoreaction was intellectually worth engaging with. (Yudkowsky [had said years ago](https://www.lesswrong.com/posts/6qPextf9KyWLFJ53j/why-is-mencius-moldbug-so-popular-on-less-wrong-answer-he-s?commentId=TcLhiMk8BTp4vN3Zs) that Moldbug was low quality.)
+
+I did believe that Yudkowsky believed that neoreaction was not worth engaging with. I would never fault anyone for saying "I vehemently disagree with what little I've read and/or heard of this-and-such author." I wasn't accusing Yudkowsky of being insincere.
+
+What I _did_ think was that the need to keep up appearances of not-being-a-right-wing-Bad-Guy was a pretty serious distortion on people's beliefs, because there are at least a few questions-of-fact where believing the correct answer can, in today's political environment, be used to paint one as a right-wing Bad Guy. I would have hoped for Yudkowsky to _notice that this is a rationality problem_, and to _not actively make the problem worse_, and I was counting "I do not welcome support from those quarters" as making the problem worse insofar as it would seem to imply that the extent to which I think I've learned valuable things from Moldbug, made me less welcome in Yudkowsky's fiefdom.
+
+Yudkowsky certainly wouldn't endorse "Even learning things from these people makes you unwelcome" _as stated_, but "I do not welcome support from those quarters" still seemed like a _pointlessly_ partisan silencing/shunning attempt, when one could just as easily say, "I'm not a neoreactionary, and if some people who read me are, that's _obviously not my fault_."
+
+Jessica asked if Yudkowsky denouncing neoreaction and the alt-right would still seem harmful, if he were _also_ to acknowledge, _e.g._, racial IQ differences?
+
+I agreed that it would be helpful, but realistically, I didn't see why Yudkowsky should want to poke the race-differences hornet's nest. This was the tragedy of recursive silencing: if you can't afford to engage with heterodox ideas, you either become an [evidence-filtering clever arguer](https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence), or you're not allowed to talk about anything except math. (Not even the relationship between math and human natural language, as we had found out recently.)
+
+It was as if there was a "Say Everything" attractor, and a "Say Nothing" attractor, and _my_ incentives were pushing me towards the "Say Everything" attractor—but that was only because I had [Something to Protect](/2019/Jul/the-source-of-our-power/) in the forbidden zone and I was a good programmer (who could therefore expect to be employable somewhere, just as [James Damore eventually found another job](https://twitter.com/JamesADamore/status/1034623633174478849)). Anyone in less extreme circumstances would find themselves being pushed to the "Say Nothing" attractor.
+
+It was instructive to compare this new disavowal of neoreaction with one from 2013 (quoted by [Moldbug](https://www.unqualified-reservations.org/2013/11/mr-jones-is-rather-concerned/) and [others](https://medium.com/@2045singularity/white-supremacist-futurism-81be3fa7020d)[^linkrot]), in response to a _TechCrunch_ article citing former MIRI employee Michael Anissimov's neoreactionary blog _More Right_:
+
+[^linkrot]: The original _TechCrunch_ comment would seem to have succumbed to [linkrot](https://www.gwern.net/Archiving-URLs#link-rot).
+
+> "More Right" is not any kind of acknowledged offspring of Less Wrong nor is it so much as linked to by the Less Wrong site. We are not part of a neoreactionary conspiracy. We are and have been explicitly pro-Enlightenment, as such, under that name. Should it be the case that any neoreactionary is citing me as a supporter of their ideas, I was never asked and never gave my consent. [...]
+>
+> Also to be clear: I try not to dismiss ideas out of hand due to fear of public unpopularity. However I found Scott Alexander's takedown of neoreaction convincing and thus I shrugged and didn't bother to investigate further.
+
+My "negotiating with terrorists" criticism did _not_ apply to the 2013 statement. "More Right" _was_ brand encroachment on Anissimov's part that Yudkowsky had a legitimate interest in policing, _and_ the "I try not to dismiss ideas out of hand" disclaimer importantly avoided legitimizing [the McCarthyist persecution](https://www.unqualified-reservations.org/2013/09/technology-communism-and-brown-scare/).
+
+The question was, what had specifically happened in the last six years to shift Eliezer's opinion on neoreaction from (paraphrased) "Scott says it's wrong, so I stopped reading" to (verbatim) "actively hostile"? Note especially the inversion from (both paraphrased) "I don't support neoreaction" (fine, of course) to "I don't even want _them_ supporting _me_" [(_?!?!_)](https://twitter.com/zackmdavis/status/1164329446314135552).[^them-supporting-me]
+
+[^them-supporting-me]: Humans with very different views on politics nevertheless have a common interest in not being transformed into paperclips!
+
+Did Yudkowsky get _new information_ about neoreaction's hidden Badness parameter sometime between 2013 and 2019, or did moral coercion on him from the left intensify (because Trump and [because Berkeley](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/))? My bet was on the latter.
+
+However it happened, it didn't seem like the brain damage was limited to "political" topics, either. In November, we saw another example of Yudkowsky destroying language for the sake of politeness, this time in the non-Culture-War context of him [_trying to wirehead his fiction subreddit by suppressing criticism-in-general_](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/).
+
+That's _my_ characterization, of course: the post itself talks about "reducing negativity". [In a followup comment, Yudkowsky wrote](https://www.reddit.com/r/rational/comments/dvkv41/meta_reducing_negativity_on_rrational/f7fs88l/) (bolding mine):
+
+> On discussion threads for a work's particular chapter, people may debate the well-executedness of some particular feature of that work's particular chapter. Comments saying that nobody should enjoy this whole work are still verboten. **Replies here should still follow the etiquette of saying "Mileage varied: I thought character X seemed stupid to me" rather than saying "No, character X was actually quite stupid."**
+
+But ... "I thought X seemed Y to me"[^pleonasm] and "X is Y" _do not mean the same thing_. [The map is not the territory](https://www.lesswrong.com/posts/KJ9MFBPwXGwNpadf2/skill-the-map-is-not-the-territory). [The quotation is not the referent](https://www.lesswrong.com/posts/np3tP49caG4uFLRbS/the-quotation-is-not-the-referent). [The planning algorithm that maximizes the probability of doing a thing is different from the algorithm that maximizes the probability of having "tried" to do the thing](https://www.lesswrong.com/posts/WLJwTJ7uGPA5Qphbp/trying-to-try). [If my character is actually quite stupid, I want to believe that my character is actually quite stupid.](https://www.lesswrong.com/tag/litany-of-tarski)
+
+[^pleonasm]: The pleonasm here ("to me" being redundant with "I thought") is especially galling coming from someone who's usually a good writer!
+
+It might seem like a little thing of no significance—requiring "I" statements is commonplace in therapy groups and corporate sensitivity training—but this little thing _coming from Eliezer Yudkowsky setting guidelines for an explicitly "rationalist" space_ made a pattern click. If everyone is forced to only make narcissistic claims about their map ("_I_ think", "_I_ feel"), and not make claims about the territory (which could be construed to call other people's maps into question and thereby threaten them, because [disagreement is disrespect](http://www.overcomingbias.com/2008/09/disagreement-is.html)), that's great for reducing social conflict, but it's not great for the kind of collective information processing that actually accomplishes cognitive work, like good literary criticism. A rationalist space _needs to be able to talk about the territory_.
+
+I understand that Yudkowsky wouldn't agree with that characterization, and to be fair, the same comment I quoted also lists "Being able to consider and optimize literary qualities" as one of the major considerations to be balanced. But I think (_I_ think) it's also fair to note that (as we had seen on _Less Wrong_ earlier that year), lip service is cheap. It's easy to _say_, "Of course I don't think politeness is more important than truth," while systematically behaving as if you did.
+
+"Broadcast criticism is adversely selected for critic errors," Yudkowsky wrote in the post on reducing negativity, correctly pointing out that if a work's true level of mistakenness is _M_, the _i_-th commenter's estimate of mistakenness has an error term of _E<sub>i</sub>_, and commenters leave a negative comment when their estimate _M_ + _E<sub>i</sub>_ is greater than their threshold for commenting _T<sub>i</sub>_, then the comments that get posted will have been selected for erroneous criticism (high _E<sub>i</sub>_) and commenter chattiness (low _T<sub>i</sub>_).
+
+I can imagine some young person who really liked _Harry Potter and the Methods_ being intimidated by the math notation, and uncritically accepting this wisdom from the great Eliezer Yudkowsky as a reason to be less critical, specifically. But a somewhat less young person who isn't intimidated by math should notice that the math here is just [regression to the mean](https://en.wikipedia.org/wiki/Regression_toward_the_mean). The same argument applies to praise!
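The selection effect is easy to check with a toy simulation—my own sketch, not anything from the "Reducing Negativity" post; the specific distributions and parameters are arbitrary choices for illustration. Draw a population of would-be commenters with independent error terms and thresholds, and compare the average error among everyone to the average error among those who actually post:

```python
import random

random.seed(0)
M = 0.0  # the work's true level of mistakenness (zero, for simplicity)

# Commenter i estimates the mistakenness as M + E_i, and posts a
# negative comment only when that estimate exceeds her threshold T_i.
errors = [random.gauss(0, 1) for _ in range(100_000)]
thresholds = [random.gauss(1, 1) for _ in range(100_000)]

# The errors that make it into the comments section are the ones
# attached to estimates that cleared a threshold.
posted = [e for e, t in zip(errors, thresholds) if M + e > t]

print(sum(errors) / len(errors))  # average error over everyone: about zero
print(sum(posted) / len(posted))  # average error among posters: clearly positive
```

The posted criticisms systematically overshoot _M_, just as the post says—and the mirror-image simulation, with commenters who post praise when their estimate falls _below_ a threshold, overshoots in the other direction, which is the part the post doesn't dwell on.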
+
+What I would hope for from a rationality teacher and a rationality community, would be efforts to instill the _general_ skill of modeling things like regression to the mean and selection effects, as part of the general project of having a discourse that does collective information-processing.
+
+And from the way Yudkowsky writes these days, it looks like he's ... not interested in collective information-processing? Or that he doesn't actually believe that's a real thing? "Credibly helpful unsolicited criticism should be delivered in private," he writes! I agree that the positive purpose of public criticism isn't solely to help the author. (If it were, there would be no reason for anyone but the author to read it.) But readers _do_ benefit from insightful critical commentary. (If they didn't, why would they read the comments section?) When I read a story, and am interested in reading the comments _about_ a story, it's because _I want to know what other readers were actually thinking about the work_. I don't _want_ other people to self-censor comments on any plot holes or [Fridge Logic](https://tvtropes.org/pmwiki/pmwiki.php/Main/FridgeLogic) they noticed for fear of dampening someone else's enjoyment or hurting the author's feelings.
+
+Yudkowsky claims that criticism should be given in private because then the target "may find it much more credible that you meant only to help them, and weren't trying to gain status by pushing them down in public." I'll buy this as a reason why credibly _altruistic_ unsolicited criticism should be delivered in private. Indeed, meaning _only_ to help the target just doesn't seem like a plausible critic motivation in most cases. But the fact that critics typically have non-altruistic motives, doesn't mean criticism isn't helpful. In order to incentivize good criticism, you _want_ people to be rewarded with status for making good criticisms! You'd have to be some sort of communist to disagree with this.
+
+There's a striking contrast between the Yudkowsky of 2019 who wrote the "Reducing Negativity" post, and an earlier Yudkowsky (from even before the Sequences) who maintained [a page on Crocker's rules](http://sl4.org/crocker.html): if you declare that you operate under Crocker's rules, you're consenting to other people optimizing their speech for conveying information rather than being nice to you. If someone calls you an idiot, that's not an "insult"; they're just informing you about the fact that you're an idiot, and you should plausibly thank them for the tip. (If you _were_ an idiot, wouldn't you be better off knowing rather than not-knowing?)
+
+It's of course important to stress that Crocker's rules are _opt in_ on the part of the _receiver_; it's not a license to unilaterally be rude to other people. Adopting Crocker's rules as a community-level norm on an open web forum does not seem like it would end well.
+
+Still, there's something precious about a culture where people appreciate the _obvious normative ideal_ underlying Crocker's rules, even if social animals can't reliably live up to the normative ideal. Speech is for conveying information. People can say things—even things about me or my work—not as a command, or as a reward or punishment, but just to establish a correspondence between words and the world: a map that reflects a territory.
+
+Appreciation of this obvious normative ideal seems almost entirely absent from Yudkowsky's modern work—as if he's given up on the idea that using Speech in public in order to reason is useful or possible.
+
+The "Reducing Negativity" post also warns against the failure mode of attempted "author telepathy": _attributing_ bad motives to authors and treating those attributions as fact without accounting for uncertainty or distinguishing observations from inferences. I should be explicit, then: when I say negative things about Yudkowsky's state of mind, like it's "as if he's given up on the idea that reasoning in public is useful or possible", that's definitely an inference, not an observation. I definitely don't think Yudkowsky _thinks of himself_ as having given up on Speech _in those words_.
+
+Why attribute motives to people that they don't attribute to themselves, then? Because I need to, in order to make sense of the world. Words aren't imbued with intrinsic "meaning"; just to _interpret_ text entails building some model of the mind on the other side.
+
+The text that Yudkowsky emitted in 2007–2009 made me who I am. The text that Yudkowsky has emitted since at least March 2016 _looks like_ it's being generated by a different and _much less trustworthy_ process. According to the methods I was taught in 2007–2009, I have a _duty_ to notice the difference, and try to make sense of the change—even if I'm not a superhuman neuroscience AI and have no hope of getting it right in detail. And I have a right to try to describe the change I'm seeing to you.
+
+_Good_ criticism is hard. _Accurately_ inferring authorial ["intent"](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie) is much harder. There is certainly no shortage of bullies in the world eager to make _bad_ criticism or _inaccurately_ infer authorial intent in order to achieve their social goals. But I don't think that's a good reason to give up on _trying_ to do good criticism and accurate intent-attribution. If there's any hope for humans to think together and work together, it has to go through distinguishing good criticism from bad criticism, and treating them differently. Suppressing criticism-in-general is intellectual suicide.
+
+-----
+
+On 3 November 2019, I received an interesting reply on my philosophy-of-categorization thesis from MIRI researcher Abram Demski. Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences? Agents categorize in order to make decisions. The best classifier for an application depends on the costs and benefits. As a classic example, it's very important for evolved prey animals to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator.
+
+I had thought of the "false-positives are better than false-negatives when detecting predators" example as being about the limitations of evolution as an AI designer: messy evolved animal brains don't bother to track probability and utility separately the way a cleanly-designed AI could. As I had explained in "... Boundaries?", it made sense for _what_ variables you paid attention to, to be motivated by consequences. But _given_ the subspace that's relevant to your interests, you want to run an epistemically legitimate clustering algorithm on the data you see there, which depends on the data, not your values. The only reason value-dependent gerrymandered category boundaries seem like a good idea if you're not careful about philosophy is because it's _wireheading_. Ideal probabilistic beliefs shouldn't depend on consequences.
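The textbook expected-cost decision rule that the predator example instantiates can be spelled out in a few lines—my own illustrative sketch, not Abram's notation, with made-up numbers. The point it makes is the one I was arguing: the _belief_ stays a calibrated probability, and the asymmetric costs only move the _action_ threshold.

```python
def should_jump(p_predator: float, cost_false_alarm: float, cost_missed_predator: float) -> bool:
    """Jump iff the expected cost of staying exceeds the expected cost of jumping.

    The probability estimate itself is untouched; asymmetric costs move
    the action threshold, not the belief.
    """
    expected_cost_of_staying = p_predator * cost_missed_predator
    expected_cost_of_jumping = (1 - p_predator) * cost_false_alarm
    return expected_cost_of_staying > expected_cost_of_jumping

# A rustling bush is probably not a predator (say, 2%), but being eaten
# costs vastly more than a wasted jump, so the optimal policy jumps anyway.
print(should_jump(0.02, cost_false_alarm=1, cost_missed_predator=100))  # True
```

A cleanly-designed agent gets the "jumpy" behavior without corrupting its probability estimates—which is what I meant by tracking probability and utility separately.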
+
+Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is _because_ probabilistic reasoning is broadly useful: epistemology can be _derived_ from instrumental concerns. He agreed that severe wireheading issues _potentially_ arise if you allow consequentialist concerns to affect your epistemics.
+
+But the alternative view had its own problems. If your AI consists of a consequentialist module that optimizes for utility in the world, and an epistemic module that optimizes for the accuracy of its beliefs, that's _two_ agents, not one: how could that be reflectively coherent? You could, perhaps, bite the bullet here, for fear that consequentialism doesn't tile and that wireheading was inevitable. On this view, Abram explained, "Agency is an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises." Whether this view was ultimately tenable or not, this did show that trying to forbid appeals-to-consequences entirely led to strange places. I didn't immediately have an answer for Abram, but I was grateful for the engagement. (Abram was clearly addressing the real philosophical issues, and not just trying to mess with me the way almost everyone else in Berkeley was trying to mess with me.)
+
+Also in November 2019, I wrote to Ben about how I was still stuck on writing the grief-memoir. My _plan_ had been that it should have been possible to tell the story of the Category War while glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly call them intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water) and not be a mentally-dominated cultist.
+
+The reason it _should_ have been safe to write was because Explaining Things is Good. It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to carefully _tell the true story_ about why, as a matter of cause-and-effect, I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, _&c_."
+
+So why couldn't I write? Was it that I didn't know how to make "This is not a social attack" credible? Maybe because ... it wasn't true?? I was afraid that telling a story about our leader being intellectually dishonest was "the nuclear option" in a way that I couldn't credibly cancel with "But I'm just telling a true story about a thing that was important to me that actually happened" disclaimers. If you're slowly-but-surely gaining territory in a conventional war, _suddenly_ escalating to nukes seems pointlessly destructive. This metaphor is horribly non-normative ([arguing is not a punishment!](https://srconstantin.github.io/2018/12/15/argue-politics-with-your-best-friends.html) carefully telling a true story _about_ an argument is not a nuke!), but I didn't know how to make it stably go away.
+
+A more motivationally-stable compromise would be to try to split off whatever _generalizable insights_ would have been part of the story into their own posts that don't make it personal. ["Heads I Win, Tails?—Never Heard of Her"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) had been a huge success as far as I was concerned, and I could do more of that kind of thing, analyzing the social stuff I was worried about, without making it personal, even if, secretly, it actually was personal.
+
+Ben replied that it didn't seem like it was clear to me that I was a victim of systemic abuse, and that I was trying to figure out whether I was being fair to my abuser. He thought if I could internalize that, I would be able to forgive myself a lot of messiness, which would reduce the perceived complexity of the problem.
+
+I said I would bite that bullet: yes! Yes, I was trying to figure out whether I was being fair to my abusers, and it was an important question to get right! "Other people's lack of standards harmed me, therefore I don't need to hold myself to standards in my response because I have [extenuating circumstances](https://www.lesswrong.com/posts/XYrcTJFJoYKX2DxNL/extenuating-circumstances)" would be a _lame excuse_.
+
+(This seemed correlated with the recurring stalemated disagreement within our coordination group, where Michael/Ben/Jessica would say, "Fraud, if that word _ever_ meant anything", and while I agreed that they were pointing to an important way in which things were messed up, I was still sympathetic to the Caliphate-defender's reply that the Vassarite usage of "fraud" was motte-and-baileying between vastly different senses of _fraud_; I wanted to do _more work_ to formulate a _more precise theory_ of the psychology of deception to describe exactly how things are messed up in a way that wouldn't be susceptible to the motte-and-bailey charge.)
+
+[TODO: Ziz's protest; Somni? ("peek behind the fog of war" 6 Feb)]
+
+[TODO: rude maps]
+
+[TODO: a culture that has gone off the rails; my warning points to Vaniver]
+
+[TODO: complicity and friendship]
+
+[TODO: affordance widths]
+
+[TODO: I had a productive winter blogging vacation in December 2019
+pull the trigger on "On the Argumentative Form"; I was worried about leaking info from private conversations, but I'm in the clear; "That's your hobbyhorse" is an observation anyone could make from content alone]
+
+[TODO: "Firming Up ..." Dec 2019: combatting Yudkowsky's not-technically-lying shenanigans]
+
+[TODO: plan to reach out to Rick 14 December
+Anna's reply 21 December
+22 December: I ask to postpone this
+Michael asks for me to acknowledge that my sense of opportunity is driven by politics
+discussion of what is "political"
+mention to Anna that I was postponing in order to make it non-salesy