+One of the better explanations of this that I know of was (again, as usual) authored by Yudkowsky in 2007, in a post titled ["Doublethink (Choosing to be Biased)"](https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased).
+
+The Yudkowsky of 2007 starts by quoting a passage from George Orwell's _1984_, in which O'Brien (a loyal member of the ruling Party in the totalitarian state depicted in the novel) burns a photograph of Jones, Aaronson, and Rutherford (former Party leaders whose existence has been censored from the historical record). Immediately after burning the photograph, O'Brien denies that it ever existed.
+
+The Yudkowsky of 2007 continues—it's again worth quoting at length—
+
+> What if self-deception helps us be happy? What if just running out and overcoming bias will make us—gasp!—_unhappy?_ Surely, _true_ wisdom would be _second-order_ rationality, choosing when to be rational. That way you can decide which cognitive biases should govern you, to maximize your happiness.
+>
+> Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.
+>
+> [...]
+>
+> For second-order rationality to be genuinely _rational_, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality. If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting. I don't mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.
+>
+> You can't know the consequences of being biased, until you have already debiased yourself. And then it is too late for self-deception.
+>
+> The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is not second-order rationality. It is willful stupidity.
+>
+> [...]
+>
+> One of chief pieces of advice I give to aspiring rationalists is "Don't try to be clever." And, "Listen to those quiet, nagging doubts." If you don't know, you don't know _what_ you don't know, you don't know how _much_ you don't know, and you don't know how much you _needed_ to know.
+>
+> There is no second-order rationality. There is only a blind leap into what may or may not be a flaming lava pit. Once you _know_, it will be too late for blindness.
+
+Looking back on this from 2022, the only criticism I have is that Yudkowsky was being too optimistic when he wrote, "I doubt such a lunatic dislocation in the mind could really happen." In some ways, people's actual behavior is _worse_ than what Orwell depicted. The Party of Orwell's _1984_ covers its tracks: O'Brien takes care to burn the photograph _before_ denying memory of it, because it would be _too_ absurd for him to act like the photo had never existed while it was still right there in front of him.
+
+In contrast, Yudkowsky's Caliphate of the current year _doesn't even bother covering its tracks_. Turns out, it doesn't need to! People just don't remember things!
+
+The [flexibility of natural language is a _huge_ help here](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). If the caliph were to _directly_ contradict himself in simple, unambiguous language—to go from "Oceania is not at war with Eastasia" to "Oceania is at war with Eastasia" without any acknowledgement that anything had changed—_then_ too many people might notice that those two sentences are the same except that one has the word _not_ in it. What's a caliph to do, if he wants to declare war on Eastasia without acknowledging or taking responsibility for the decision to do so?
+
+The solution is simple: just—use more words! Then if someone tries to argue that you've _effectively_ contradicted yourself, accuse them of being uncharitable and failing to model the Other. You can't lose! Anything can be consistent with anything if you apply a sufficiently charitable reading; whether Oceania is at war with Eastasia depends on how you choose to draw the category boundaries of "at war."
+
+Thus, O'Brien should envy Yudkowsky: burning the photograph turns out to be unnecessary! ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) is _still up_ and not retracted, but that didn't stop the Yudkowsky of 2016 from pivoting to ["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) when that became a politically favorable thing to say. I claim that these posts _effectively_ contradict each other. The former explains that men who fantasize about being women are _not only_ out of luck given foreseeable technology, but _also_ that their desires may not even be coherent (!), whereas the latter claims that men who wish they were women may, in fact, _already_ be women in some unspecified psychological sense.
+
+_Technically_, these don't _strictly_ contradict each other: I can't point to a sentence from each that are the same except one includes the word _not_. (And even if there were such sentences, I wouldn't be able to prove that the other words were being used in the same sense in both sentences.) One _could_ try to argue that "Changing Emotions" is addressing cis men with a weird sex-change fantasy, whereas the "ones with penises are actually women" claim was about trans women, which are a different thing.
+
+_Realistically_ ... no. These two posts _can't_ both be right. In itself, this isn't a problem: people change their minds sometimes, which is great! But when people _actually_ change their minds (as opposed to merely changing what they say in public for political reasons), you expect them to be able to _acknowledge_ the change, and hopefully explain what new evidence or reasoning brought them around. If they can't even _acknowledge the change_, that's pretty Orwellian, like O'Brien trying to claim that the photograph is of different men who just coincidentally happen to look like Jones, Aaronson, and Rutherford.
+
+And if a little bit of Orwellianism on specific, narrow, highly-charged topics might be forgiven—because everyone else in your Society is doing it, and you would be punished for not playing along, an [inadequate equilibrium](https://equilibriabook.com/) that no one actor has the power to defy—might we not expect the father of the "rationalists" to stand his ground on the core theses of his ideology, like whether telling the truth is good?
+
+I guess not! ["Doublethink (Choosing to be Biased)"](https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased) is _still up_ and not retracted, but that didn't stop Yudkowsky from [endorsing everything Xu said](https://twitter.com/ESYudkowsky/status/1436025983522381827) about "whether some categories facilitate inferences that _do_, on the whole, cause more harm than benefit, and if so, whether it is 'rational' to rule that such inferences should be avoided when possible" being different cruxes than "whether 'rational' thinking is 'worth it'".
+
+I don't doubt Yudkowsky could come up with some clever casuistry why, _technically_, the text he wrote in 2007 and the text he endorsed in 2021 don't contradict each other. But _realistically_ ... again, no.
+
+I don't, actually, expect people to spontaneously blurt out everything they believe to be true, that Stalin would find offensive. "No comment" would be fine. Even selective argumentation that's _clearly labeled as such_ would be fine. (There's no shame in being an honest specialist who says, "I've mostly thought about these issues through the lens of ideology _X_, and therefore can't claim to be comprehensive; if you want other perspectives, you'll have to read other authors and think it through for yourself.")
+
+What's _not_ fine is selective argumentation while claiming "confidence in [your] own ability to independently invent everything important that would be on the other side of the filter and check it [yourself] before speaking" when you _very obviously have done no such thing_.
+
+------
+
+In October 2021, Jessica Taylor [published a post about her experiences at MIRI](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe), making analogies between sketchy social pressures she had experienced in the core rationalist community (around short AI timelines, secrecy, deference to community leaders, _&c._) and those reported in [Zoe Curzi's recent account of her time at Leverage Research](https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b).
+
+Scott Alexander posted [a comment claiming to add important context](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=4j2GS4yWu6stGvZWs), essentially blaming Jessica's problems on her association with Michael Vassar, to the point of describing her psychotic episode as a "Vassar-related phenomenon" (!). Alexander accused Vassar of trying to "jailbreak" people from normal social reality, which "involve[d] making them paranoid about MIRI/CFAR and convincing them to take lots of drugs". Yudkowsky posted [a comment that uncritically validated Scott's reliability as a narrator](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=x5ajGhggHky9Moyr8).
+
+To me, this looked like raw factional conflict: Jessica had some negative-valence things to say about the Caliphate, so Caliphate leaders moved in to discredit her by association. Quite effectively, as it turned out: the karma score on Jessica's post dropped by more than half, while Alexander's comment got voted up to more than 380 karma. (The fact that Scott said ["it's fair for the community to try to defend itself"](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=qsEMmdo6DKscvBvDr) in the ensuing back-and-forth suggests that he also saw the conversation as an adversarial one, even if he thought Jessica shot first.)
+
+I explained [why I thought Scott was being unfair](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=GzqsWxEp8uLcZinTy) (and [offered textual evidence](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=yKo2uuCcwJxbwwyBw) against the silly claim that Michael was _trying_ to drive Jessica crazy).
+
+Scott [disagreed](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=XpEpzvHPLkCH7W7jS) that joining the "Vassarites"[^vassarite-scare-quotes] wasn't harmful to me. He revealed that during my March 2019 problems, he had emailed my posse:
+
+> accusing them of making your situation worse and asking them to maybe lay off you until you were maybe feeling slightly better, and obviously they just responded with their "it's correct to be freaking about learning your entire society is corrupt and gaslighting" shtick.
+
+[^vassarite-scare-quotes]: Scare quotes because "Vassarite" seems likely to be Alexander's coinage; we didn't call ourselves that.
+
+But I will _absolutely_ bite the bullet on it being correct to freak out about learning your entire Society is corrupt and gaslighting (as I explained to Scott in an asynchronous 22–27 October 2021 conversation on Discord).
+
+Imagine living in the Society of Alexander's ["Kolmogorov Complicity and the Parable of Lightning"](https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) (which I keep linking) in the brief period when the lightning taboo is being established, trying to make sense of everyone you know suddenly deciding, seemingly in lockstep, that thunder comes before lightning. (When you try to point out that this isn't true and no one believed it five years ago, they point out that it depends on what you mean by the word 'before'.)
+
+Eventually, you would get used to it, but at first, I think this would be legitimately pretty upsetting! If you were already an emotionally fragile person, it might even escalate to a psychiatric emergency through the specific mechanism "everyone I trust is inexplicably lying about lightning → stress → sleep deprivation → temporary psychosis". That is, it's not that Society being corrupt directly causes mental illness—that would be silly—but confronting a corrupt Society is very stressful, and that can [snowball into](https://lorienpsych.com/2020/11/11/ontology-of-psychiatric-conditions-dynamic-systems/) things like lost sleep, and sleep is [really](https://www.jneurosci.org/content/34/27/9134.short) [biologically important](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6048360/).
+
+This is a pretty bad situation to be in—to be faced with the question, "Am _I_ crazy, or is _everyone else_ crazy?" But one thing that would make it slightly less bad is if you had a few allies, or even just _an_ ally—someone to confirm that the obvious answer, "It's not you," is, in fact, obvious.
+
+But in a world where [everyone who's anyone](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) agrees that thunder comes before lightning—including all the savvy consequentialists who realize that being someone who's anyone is an instrumentally convergent strategy for acquiring influence—anyone who would be so imprudent as to take your everyone-is-lying-about-lightning concerns seriously would have to be someone with ... a nonstandard relationship to social reality. Someone meta-savvy to the process of people wanting to be someone who's anyone. Someone who, honestly, is probably some kind of _major asshole_. Someone like—Michael Vassar!
+
+From the perspective of an outside observer playing a Kolmogorov-complicity strategy, your plight might look like "innocent person suffering from mental illness in need of treatment/management", and your ally like "bad influence who is egging the innocent person on for their own unknown but probably nefarious reasons". If that outside observer chooses to draw the category boundaries of "mental illness" appropriately, that story might even be true. So why not quit making such a fuss, and accept treatment? Why fight, if fighting comes at a personal cost? Why not submit?
+
+I had my answer. But I wasn't sure that Scott would understand.
+
+To assess whether joining the "Vassarites" had been harmful to me, one would need to answer: as compared to what? In the counterfactual where Michael vanished from the world in 2016, I think I would have been just as upset about the same things for the same reasons, but with fewer allies and fewer ideas to make sense of what was going on in my social environment.
+
+Additionally, it was really obnoxious when people tried to use my association with Michael to discredit the content of what I was saying—interpreting me as Michael's pawn. Gwen, one of the "Zizians", in a blog post about her grievances against CfAR, has [a section on "Attempting to erase the agency of everyone who agrees with our position"](https://everythingtosaveit.how/case-study-cfar/#attempting-to-erase-the-agency-of-everyone-who-agrees-with-our-position), complaining about how people try to cast her and Somni and Emma as Ziz's minions, rather than acknowledging that they're separate people with their own ideas who had good reasons to work together. I empathized a lot with this. My thing, and separately Ben Hoffman's [thing about Effective Altruism](http://benjaminrosshoffman.com/drowning-children-rare/), and separately Jessica's thing in the OP, didn't really have a whole lot to do with each other, except as symptoms of "the so-called 'rationalist' community is not doing what it says on the tin" (which itself wasn't a very specific diagnosis). But insofar as our separate problems did have a hypothesized common root cause, it made sense for us to talk to each other and to Michael about them.
+
+Was Michael using me, at various times? I mean, probably. But just as much, _I was using him_. Particularly with the November 2018–April 2019 thing (where I and the "Vassarite" posse kept pestering Scott and Eliezer to clarify that categories aren't arbitrary): that was the "Vassarites" doing an _enormous_ favor for _me_ and _my_ agenda. (If Michael and crew hadn't had my back, I wouldn't have been anti-social enough to keep escalating.) And here Scott was trying to get away with claiming that _they_ were making my situation worse? That's _absurd_. Had he no shame?
+
+I _did_, I admitted, have some specific, nuanced concerns—especially since the December 2020 psychiatric disaster, with some nagging doubts beforehand—about ways in which being an inner-circle "Vassarite" might be bad for someone, but at the moment, I was focused on rebutting Scott's story, which was _silly_. A defense lawyer has an easier job than a rationalist—if the prosecution makes a terrible case, you can just destroy it, without it being your job to worry about whether your client is separately guilty of vaguely similar crimes that the incompetent prosecution can't prove.
+
+When Scott expressed concern about the group-yelling behavior that [Ziz had described in a blog comment](https://sinceriously.fyi/punching-evil/#comment-2345) ("They spent 8 hours shouting at me, gaslighting me") and [Yudkowsky had described on Twitter](https://twitter.com/ESYudkowsky/status/1356494768960798720) ("When MichaelV and co. try to run a 'multiple people yelling at you' operation on me, I experience that as 'lol, look at all that pressure' instead _feeling pressured_"), I clarified that that thing was very different from what it was like to actually be friends with them. The everyone-yelling operation seemed like an innovation (that I didn't like) that they wielded as a psychological weapon only against people they thought were operating in bad faith? In the present conversation with Scott, I had been focusing on rebutting the claim that my February–April 2017 (major) and March 2019 (minor) psych problems were caused by the "Vassarites", because with regard to those _specific_ incidents, the charge was absurd and false. But, well ... my January 2021 (minor) psych problems actually _were_ the result of being on the receiving end of the everyone-yelling thing. I briefly described the December 2020 "Lenore" disaster, and in particular the part where Michael/Jessica/Jack yelled at me.
+
+Scott said that based on my and others' testimony, he was updating away from Vassar being as involved in psychotic breaks as he had thought, but towards thinking Vassar was worse in other ways than he had thought. He felt sorry for my bad December 2020/January 2021 experience—so much so that he could feel it through the triumphant vindication of getting confirmation that the Vassarites were behaving badly in ways he couldn't previously prove.
+
+Great, I said, I was happy to provide information to help hold people (including Michael as a particular instance of "people") accountable for the specific bad things that they were actually guilty of, rather than scapegoating Michael as a Bad Man with mysterious witch powers.
+
+Scott supposed that he should also be investigating "Lenore", who he sarcastically remarked was liable to be yet another case of someone having a psychotic break just as she was getting close to the Vassarites, but that somehow there was no plausible connection between those two things.
+
+I pointed out that that was exactly what one would expect if the Vassar/breakdown correlation was mostly a selection effect rather than causal—that is, if the causal graph was the fork "prone-to-psychosis ← underlying-bipolar-ish-condition → gets-along-with-Michael".
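+
+(As an illustration of the statistical point only—a toy simulation with entirely made-up numbers, not a model of any actual people—here's how a common cause can make "psychotic break" and "gets along with Michael" correlate even when neither causes the other.)
+
+```python
+import random
+
+random.seed(0)
+
+# Hypothetical fork: a latent condition raises the probability of both
+# outcomes; there is no causal arrow between the outcomes themselves.
+def person():
+    latent = random.random() < 0.05
+    psychosis = random.random() < (0.40 if latent else 0.01)
+    gets_along = random.random() < (0.50 if latent else 0.02)
+    return psychosis, gets_along
+
+population = [person() for _ in range(100_000)]
+affinity_group = [p for p, g in population if g]
+print(sum(affinity_group) / len(affinity_group))        # ≈ 0.23: elevated rate among Michael's friends
+print(sum(p for p, _ in population) / len(population))  # ≈ 0.03: base rate in the whole population
+```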
+
+I had also had a sleep-deprivation-induced-psychotic-break-with-hospitalization in February 2013, and shortly thereafter, I remember Anna remarking that I was sounding a lot like Michael. But I hadn't been talking to Michael at all beforehand! (My previous email conversation with him had been in 2010.) So what could Anna's brain have been picking up on, when she said that? My guess: there was some underlying dimension of psychological variation (psychoticism? bipolar?—you tell me; this is supposed to be Scott's professional specialty) where Michael and I were already weird/crazy in similar ways, and sufficiently bad stressors could push me further along that dimension (enough for Anna to notice). Was Scott also going to blame Yudkowsky for making people [autistic](https://twitter.com/ESYudkowsky/status/1633396201427984384)?
+
+Concerning the lightning parable, Scott said that from his perspective, the point of "Kolmogorov Complicity" was that, yes, people can be crazy, but that we have to live in Society without spending all our time freaking out about it. If, back in the days of my ideological anti-sexism, the first ten Yudkowsky posts I had read had said that men and women are psychologically different for biological reasons and that anyone who denies this is a mind-killed idiot—which Scott assumed Yudkowsky did think—he could imagine me being turned off. It was probably good for me and the world that that wasn't my first ten experiences of the rationalist community.
+
+I agreed that this was a real concern. (I had been so enamored with Yudkowsky's philosophy-of-science writing that there was no chance of _me_ bouncing on account of the sexism that I perceived, but I wasn't the marginal case.) There are definitely good reasons to tread carefully when trying to add sensitive-in-our-culture content to Society's shared map. But I didn't think treading carefully should take precedence over _getting the goddamned right answer_.
+
+As an example of what I thought treading carefully but getting the goddamned right answer looked like, I was really proud of [my April 2020 review of Charles Murray's _Human Diversity_](/2020/Apr/book-review-human-diversity/). I definitely wasn't saying, Emil Kirkegaard-style, "the black/white IQ gap is genetic, anyone who denies this is a mind-killed idiot." Rather, _first_ I reviewed the Science in the book, and _then_ I talked about the politics surrounding Murray's reputation and the technical reasons for believing that the gap is real and partly genetic, and _then_ I went meta on the problem and explained why it makes sense that political forces make this hard to talk about. I thought this was how one goes about mapping the territory without being a moral monster with respect to one's pre-Dark Enlightenment morality. (And [Emil was satisfied, too](https://twitter.com/KirkegaardEmil/status/1425334398484983813).)
+
+------
+
+At the end of the September 2021 Twitter altercation, I [said that I was upgrading my "mute" of @ESYudkowsky to a "block"](https://twitter.com/zackmdavis/status/1435468183268331525). Better to just leave, rather than continue to hang around in his mentions trying (consciously [or otherwise](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie)) to pick fights, like a crazy ex-girlfriend. (["I have no underlying issues to address; I'm certifiably cute, and adorably obsessed"](https://www.youtube.com/watch?v=UMHz6FiRzS8) ...)
+
+I did end up impulsively writing one more comment on one of his Facebook posts (with an aside at the top about whether that was OK), and Yudkowsky [said that Twitter looked worse for me than Facebook](/images/yudkowsky-twitter_is_worse_for_you.png)—the implication being that I _did_ still have commenting privileges as far as he was concerned. Good. I'm proud to be a crazy ex-girlfriend who knows she's crazy and _voluntarily_ deletes your number from her phone, rather than the crazy ex-girlfriend you need to block.
+
+I still had more things to say—a reply to the February 2021 post on pronoun reform, and the present memoir telling this Whole Dumb Story—but those could be written and published unilaterally. Given that we clearly weren't going to get to clarity and resolution, I didn't want to bid for any more of my ex-hero's attention and waste more of his time (valuable time, _limited_ time); I still owed him for creating me.
+
+Leaving a personality cult is hard. As I struggled to write, I noticed that I was wasting a lot of cycles worrying about what he'd think of me, rather than saying the things I needed to say. I knew it was pathetic that my religion was so bottlenecked on _one guy_—particularly since the holy texts themselves (written by that one guy) [explicitly said not to do that](https://www.lesswrong.com/posts/t6Fe2PsEwb3HhcBEr/the-litany-against-gurus)—but unwinding those psychological patterns was still a challenge.
+
+An illustration of the psychological dynamics at play: on an August 2021 EA Forum post about demandingness objections to longtermism, Yudkowsky [commented that](https://forum.effectivealtruism.org/posts/fStCX6RXmgxkTBe73/towards-a-weaker-longtermism?commentId=Kga3KGx6WAhkNM3qY) he was "broadly fine with people devoting 50%, 25% or 75% of themselves to longtermism [...] as opposed to tearing themselves apart with guilt and ending up doing nothing much, which seem[ed] to be the main alternative."
+
+I found the comment reassuring regarding the extent (or lack thereof) of my own contributions to the great common task—and that's the problem: I found the _comment_ reassuring, not the _argument_. It would make sense to be reassured by the claim (if true) that human psychology is such that I don't realistically have the option of devoting more than 25% of myself to the great common task. It does _not_ make sense to be reassured that _Eliezer Yudkowsky said he's broadly fine with it_. That's just being a personality-cultist.
+
+In January 2022, in an attempt to deal with my personality-cultist writing block, I sent him one last email asking if he particularly _cared_ if I published a couple blog posts that said some negative things about him. If he actually _cared_ about potential reputational damage to him from my writing things that I thought I had a legitimate interest in writing about, I would be _willing_ to let him pre-read the drafts before publishing and give him the chance to object to anything he thought was unfair ... but I'd rather agree that that wasn't necessary. I explained the privacy norms that I intended to follow—that I could explain _my_ actions, but had to Glomarize about the content of any private conversations that may or may not have occurred.
+
+It had taken me a while (with apologies for my atrocious [sample efficiency](https://ai.stackexchange.com/a/5247)), but I was finally ready to give up on him; I thought the efficient outcome was that I should just tell my Whole Dumb Story on my blog and never bother him again. Since he probably _didn't_ particularly care (because it's not AGI alignment and therefore unimportant) and it would be psychologically easier on me if I knew he didn't hold it against me, could I please have his advance blessing to just write and publish what I was thinking so I could get it all out of my system and move on with my life?
+
+If it helped—as far as _I_ could tell, I was only doing what _he_ taught me to do in 2007–2009: [carve reality at the joints](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries), [speak the truth even if your voice trembles](https://www.lesswrong.com/posts/pZSpbxPrftSndTdSf/honesty-beyond-internal-truth), and [make an extraordinary effort](https://www.lesswrong.com/posts/GuEsfTpSDSbXFiseH/make-an-extraordinary-effort) when you've got [Something to Protect](https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect) (Subject: "blessing to speak freely, and privacy norms?").
+
+I can't say whether he replied (because if he did, that would be covered by the privacy norm), but I think sending the email helped me. Although maybe I was wrong to ask if he wouldn't hold it against me. If you read the text of this memoir, I'm clearly holding things against _him_. If he's not my caliph anymore (with the asymmetrical duties between ruler and subject, the higher to protect and the lower to serve), and I'm entitled to my feelings, isn't he entitled to his?
+
+In February 2022, I finally managed to finish a draft of ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/). (A year after the post it replies to! I did other things that year, probably.) It's long (12,000 words), because I wanted to be thorough and cover all the angles. (To paraphrase Ralph Waldo Emerson, when you strike at Eliezer Yudkowsky, _you must kill him._)
+
+If I had to compress it by a factor of 200 (down to 60 words), I'd say my main point was that, given a conflict over pronoun conventions, there's no "right answer", but we can at least be objective in _describing what the conflict is about_, and Yudkowsky wasn't doing that; his "simplest and best proposal" favored the interests of some parties to the dispute (as was seemingly inevitable), _without admitting he was doing so_ (which was not inevitable).[^describing-the-conflict]
+
+[^describing-the-conflict]: I had been making this point for four years. [As I wrote in February 2018's "The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/#describing-the-conflict), "If different political factions are engaged in conflict over how to define the extension of some common word [...] rationalists may not be able to say that one side is simply right and the other is simply wrong, but we can at least strive for objectivity in _describing the conflict_."
+
+In addition to prosecuting the object level (about pronouns) and the meta level (about acknowledging the conflict) for 12,000 words, I had also written _another_ several thousand words at the meta-meta level, about the political context of the argument and Yudkowsky's comments about what is "sometimes personally prudent and not community-harmful", but I wasn't sure whether to include it in the post itself, or post it as a separate comment on the _Less Wrong_ linkpost mirror, or save it for the memoir. I was worried about it being too "aggressive", attacking Yudkowsky too much, disregarding our usual norms about only attacking arguments and not people. I wasn't sure how to be aggressive and explain _why_ I wanted to disregard the usual norms in this case (why it was _right_ to disregard the usual norms in this case) without the Whole Dumb Story of the previous six years leaking in (which would take even longer to write).
+
+I asked "Riley" for political advice. I thought my argumens were very strong, but that the object-level argument about pronoun conventions just wasn't very interesting; what I _actually_ wanted people to see was the thing where the Big Yud of the current year _just can't stop lying for political convenience_. How could I possibly pull that off in a way that the median _Less Wrong_-er would hear? Was it a good idea to "go for the throat" with the "I'm better off because I don't trust Eliezer Yudkowsky to tell the truth in this domain" line?
+
+"Riley" said the post was super long and boring. ("Yes. I'm bored, too," I replied.) They said that I was optimizing for my having said the thing, rather than for the reader being able to hear it. In the post, I had complained that you can't have it both ways: either pronouns convey sex-category information (in which case, people who want to use natal-sex categories have an interest in defending their right to misgender), or they don't (in which case, there would be no reason for trans people to care about what pronouns people use for them). But by burying the thing I actually wanted people to see in thousands of words of boring argumentation, I was evading the fact that _I_ couldn't have it both ways: either I was calling out Yudkowsky as betraying his principles and being dishonest, or I wasn't.
+
+"[I]f you want to say the thing, say it," concluded "Riley". "I don't know what you're afraid of."
+
+I was afraid of taking irrevocable war actions against the person who taught me everything I know. (And his apparent conviction that the world was ending _soon_ made it worse. Wouldn't it feel petty, if the last thing you ever said to your grandfather was calling him a liar in front of the whole family, even if he had in fact lied?)
+
+I wanted to believe that if I wrote all the words dotting every possible _i_ and crossing every possible _t_ at all three levels of meta, then that would make it [a description and not an attack](http://benjaminrosshoffman.com/can-crimes-be-discussed-literally/)—that I could have it both ways if I explained the lower level of organization beneath the high-level abstractions of "betraying his principles and being dishonest." If that didn't work because [I only had five words](https://www.lesswrong.com/posts/4ZvJab25tDebB8FGE/you-have-about-five-words), then—I didn't know what I'd do. I'd think about it.
+
+After a month of dawdling, I eventually decided to pull the trigger on publishing "Challenges", without the extended political coda.[^coda] The post was a little bit mean to Yudkowsky, but not so mean that I was scared of the social consequences of pulling the trigger. (Yudkowsky had been mean to Christiano and Richard Ngo and Rohin Shah in [the recent MIRI dialogues](https://www.lesswrong.com/s/n945eovrA3oDueqtq); I didn't think this was worse than that.)
+
+[^coda]: The text from the draft coda would later be incorporated into the present memoir.
+
+I cut the words "in this domain" from the go-for-the-throat concluding sentence that I had been worried about. "I'm better off because I don't trust Eliezer Yudkowsky to tell the truth," full stop.
+
+The post was a _critical success_ by my accounting, due to eliciting [a highly-upvoted (110 karma at press time) comment by _Less Wrong_ administrator Oliver Habryka](https://www.lesswrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal?commentId=he8dztSuBBuxNRMSY) on the _Less Wrong_ mirror. Habryka wrote:
+
+> [...] basically everything in this post strikes me as "obviously true" and I had a very similar reaction to what the OP says now, when I first encountered the Eliezer Facebook post that this post is responding to.
+>
+> And I do think that response mattered for my relationship to the rationality community. I did really feel like at the time that Eliezer was trying to make my map of the world worse, and it shifted my epistemic risk assessment of being part of the community from "I feel pretty confident in trusting my community leadership to maintain epistemic coherence in the presence of adversarial epistemic forces" to "well, I sure have to at least do a lot of straussian reading if I want to understand what people actually believe, and should expect that depending on the circumstances community leaders might make up sophisticated stories for why pretty obviously true things are false in order to not have to deal with complicated political issues".
+>
+> I do think that was the right update to make, and was overdetermined for many different reasons, though it still deeply saddens me.
+
+Brutal! Recall that Yudkowsky's justification for his behavior had been that "it is sometimes personally prudent and _not community-harmful_ to post your agreement with Stalin" (emphasis mine), and here we had the administrator of Yudkowsky's _own website_ saying that he's deeply saddened that he now expects Yudkowsky to _make up sophisticated stories for why pretty obviously true things are false_ (!!).
+
+Is that ... _not_ evidence of harm to the community? If that's not community-harmful in Yudkowsky's view, then what would be an example of something that _would_ be? _Reply, motherfucker!_
+
+... or rather, "Reply, motherfucker", is what I fantasized about being able to say, if I hadn't already expressed an intention not to bother him anymore.
+
+------
+
+On 1 April 2022, Yudkowsky published ["MIRI Announces New 'Death With Dignity' Strategy"](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy), a cry of despair in the guise of an April Fool's Day post. MIRI didn't know how to align a superintelligence, no one else did either, but AI capabilities work was continuing apace. With no credible plan to avert almost-certain doom, the most we could do now was to strive to give the human race a more dignified death, as measured in log-odds of survival: an alignment effort that doubled the probability of a valuable future from 0.0001 to 0.0002 was worth one information-theoretic bit of dignity.
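+
+(A quick check of the arithmetic, as I understand it: "dignity" is measured in bits of log-odds of survival, so gaining one bit means doubling the odds—which, for probabilities this tiny, is essentially the same as doubling the probability. A minimal sketch, using the post's illustrative numbers:)
+
+```python
+import math
+
+def log_odds_bits(p):
+    """Log-odds of probability p, in bits."""
+    return math.log2(p / (1 - p))
+
+# Doubling P(valuable future) from 0.0001 to 0.0002 gains about one bit.
+print(log_odds_bits(0.0002) - log_odds_bits(0.0001))  # ≈ 1.0001
+```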
+
+In a way, "Death With Dignity" isn't really an update. Yudkowsky had always refused to name a "win" probability, while maintaining that Friendly AI was ["impossible"](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible). Now, he says the probability is approximately zero.
+
+Paul Christiano, who has a much more optimistic picture of humanity's chances, nevertheless said that he liked the "dignity" heuristic. I like it, too. It—takes some of the pressure off. I [made an analogy](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy?commentId=R59aLxyj3rvjBLbHg): your plane crashed in the ocean. To survive, you must swim to shore. You know that the shore is west, but you don't know how far. The optimist thinks the shore is just over the horizon; we only need to swim a few miles and we'll probably make it. The pessimist thinks the shore is a thousand miles away and we will surely die. But the optimist and pessimist can both agree on how far we've swum up to this point, and that the most dignified course of action is "Swim west as far as you can."
+
+-----
+
+
+[TODO: bridge—link to pulled-out standalone post, "On the Public Anti-Epistemology of dath ilan"]