 * But he does, in fact, seem to actively encourage this conflation (contrast the way the Sequences had a [Litany Against Gurus](https://www.lesswrong.com/posts/t6Fe2PsEwb3HhcBEr/the-litany-against-gurus) with the way he sneers at Earthlings and post-rats these days)

 * "I may as well do it on Earth"

 * a specific example that made me very angry in September 2021—

https://twitter.com/ESYudkowsky/status/1434906470248636419
> Anyways, Scott, this is just the usual division of labor in our caliphate: we're both always right, but you cater to the crowd that wants to hear it from somebody too modest to admit that, and I cater to the crowd that wants somebody out of that closet.

Okay, I get that it was meant as humorous exaggeration. But I think it still has the effect of discouraging people from criticizing Scott or Eliezer because they're the leaders of the Caliphate. I spent three and a half years of my life explaining in exhaustive detail, with math, how Scott was wrong about something; no one serious actually disagrees; and Eliezer is still using his social power to boost Scott's right-about-everything (!!) reputation. That seems really unfair, in a way that isn't dulled by "it was just a joke."

Or [as Yudkowsky put it](https://www.facebook.com/yudkowsky/posts/10154981483669228)—

> I know that it's a bad sign to worry about which jokes other people find funny. But you can laugh at jokes about Jews arguing with each other, and laugh at jokes about Jews secretly being in charge of the world, and not laugh at jokes about Jews cheating their customers. Jokes do reveal conceptual links and some conceptual links are more problematic than others.

It's totally understandable to not want to get involved in a political scuffle because xrisk reduction is astronomically more important! But I don't see any plausible case that metaphorically sucking Scott's dick in public reduces xrisk. It would be so easy to just not engage in this kind of cartel behavior!

An analogy: racist jokes are also just jokes. Alice says, "What's the difference between a black dad and a boomerang? A boomerang comes back." Bob says, "That's super racist! Tons of African-American fathers are devoted parents!!" Alice says, "Chill out, it was just a joke." In a way, Alice is right. It was just a joke; no sane person could think that Alice was literally claiming that all black men are deadbeat dads. But, the joke only makes sense in the first place in context of a culture where the black-father-abandonment stereotype is operative. If you thought the stereotype was false, or if you were worried about it being a self-fulfilling prophecy, you would find it tempting to be a humorless scold and get angry at the joke-teller.

Similarly, the "Caliphate" humor _only makes sense in the first place_ in the context of a celebrity culture where deferring to Yudkowsky and Alexander is expected behavior. (In a way that deferring to Julia Galef or John S. Wentworth is not expected behavior, even if Galef and Wentworth also have a track record as good thinkers.) I think this culture is bad. _Nullius in verba_.

 * the fact that David Xu interpreted criticism of the robot cult as me going "full post-rat" suggests that Yudkowsky's framing had spilled onto others. (The framing is optimized to delegitimize dissent. Motte: someone who's critical of central rationalists; bailey: someone who's moved beyond reason.)

sneering at post-rats; David Xu interprets criticism of Eliezer as me going "full post-rat"?! 6 September 2021

> Also: speaking as someone who's read and enjoyed your LW content, I do hope this isn't a sign that you're going full post-rat. It was bad enough when QC did it (though to his credit QC still has pretty decent Twitter takes, unlike most post-rats).

https://twitter.com/davidxu90/status/1435106339550740482

https://twitter.com/zackmdavis/status/1435856644076830721
> The error in "Not Man for the Categories" is not subtle! After the issue had been brought to your attention, I think you should have been able to condemn it: "Scott's wrong; you can't redefine concepts in order to make people happy; that's retarded." It really is that simple! 4/6

I once wrote [a post whimsically suggesting that trans women should owe cis women royalties](/2019/Dec/comp/) for copying the female form (as "intellectual property"). In response to a reader who got offended, I [ended up adding](/source?p=Ultimately_Untrue_Thought.git;a=commitdiff;h=03468d274f5) an "epistemic status" line to clarify that it was not a serious proposal.

But if knowing it was a joke partially mollifies the offended reader who thought I might have been serious, I don't think they should be _completely_ mollified, because the joke (while a joke) reflects something about my thinking when I'm being serious: I don't think sex-based collective rights are inherently a suspect idea; I think _something of value has been lost_ when women who want female-only spaces can't have them, and the joke reflects the conceptual link between the idea that something of value has been lost, and the idea that people who have lost something of value are entitled to compensation.

At Valinor's 2022 [Smallpox Eradication Day](https://twitter.com/KelseyTuoc/status/1391248651167494146) party, I remember overhearing[^overhearing] Yudkowsky saying that OpenAI should have used GPT-3 to mass-promote the Moderna COVID-19 vaccine to Republicans and the Pfizer vaccine to Democrats (or vice versa), thereby harnessing the forces of tribalism in the service of public health.

[^overhearing]: I claim that conversations at a party with lots of people are not protected by privacy norms; if I heard it, several other people heard it; no one had a reasonable expectation that I shouldn't blog about it.

I assume this was not a serious proposal. Knowing it was a joke partially mollifies what offense I would have taken if I thought he might have been serious. But I don't think I should be completely mollified, because I think the joke (while a joke) reflects something about Yudkowsky's thinking when he's being serious: that he apparently doesn't think corrupting Society's shared maps for utilitarian ends is inherently a suspect idea; he doesn't think truthseeking public discourse is a thing in our world, and the joke reflects the conceptual link between the idea that public discourse isn't a thing, and the idea that a public that can't reason needs to be manipulated by elites into doing good things rather than bad things.

My favorite Ben Hoffman post is ["The Humility Argument for Honesty"](http://benjaminrosshoffman.com/humility-argument-honesty/). It's sometimes argued that the main reason to be honest is in order to be trusted by others. (As it is written, ["[o]nce someone is known to be a liar, you might as well listen to the whistling of the wind."](https://www.lesswrong.com/posts/K2c3dkKErsqFd28Dh/prices-or-bindings).) Hoffman points out another reason: we should be honest because others will make better decisions if we give them the best information available, rather than worse information that we chose to present in order to manipulate their behavior. If you want your doctor to prescribe you a particular medication, you might be able to arrange that by looking up the symptoms of an appropriate ailment on WebMD, and reporting those to the doctor. But if you report your _actual_ symptoms, the doctor can combine that information with their own expertise to recommend a better treatment.

If you _just_ want the public to get vaccinated, I can believe that the Pfizer/Democrats _vs._ Moderna/Republicans propaganda gambit would work. You could even do it without telling any explicit lies, by selectively citing either the protection or the side-effect statistics for each vaccine depending on whom you were talking to. One might ask: if you're not _lying_, what's the problem?

The _problem_ is that manipulating people into doing what you want, subject to the genre constraint of not telling any explicit lies, isn't the same thing as informing people so that they can make sensible decisions. In reality, both mRNA vaccines are very similar! It would be surprising if the one associated with my political faction happened to be good, whereas the one associated with the other faction happened to be bad. Someone who tried to convince me that Pfizer was good and Moderna was bad would be misinforming me—trying to trap me in a false reality, a world that doesn't quite make sense—with [unforeseeable consequences](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) for the rest of my decisionmaking. As someone with an interest in living in a world that makes sense, I have reason to regard this as _hostile action_, even if the false reality and the true reality both recommend the isolated point decision of getting vaccinated.

(The authors of the [HEXACO personality model](https://en.wikipedia.org/wiki/HEXACO_model_of_personality_structure) may have gotten something importantly right in [grouping "honesty" and "humility" as a single factor](https://en.wikipedia.org/wiki/Honesty-humility_factor_of_the_HEXACO_model_of_personality).)

I'm not, overall, satisfied with the political impact of my writing on this blog. One could imagine someone who shared Yudkowsky's apparent disbelief in public reason advising me that my practice of carefully explaining at length what I believe and why has been an ineffective strategy—that I should instead clarify to myself what policy goal I'm trying to achieve, and try to figure out some clever gambit to play trans activists and gender-critical feminists against each other in a way that advances my agenda.

From my perspective, such advice would be missing the point. [I'm not trying to force through some particular policy.](/2021/Sep/i-dont-do-policy/) Rather, I think I _know some things_ about the world, things I wish someone had told me earlier. So I'm trying to tell others, to help them live in _a world that makes sense_.

]


[David Xu writes](https://twitter.com/davidxu90/status/1436007025545125896) (with Yudkowsky ["endors[ing] everything [Xu] just said"](https://twitter.com/ESYudkowsky/status/1436025983522381827)):

> I'm curious what might count for you as a crux about this; candidate cruxes I could imagine include: whether some categories facilitate inferences that _do_, on the whole, cause more harm than benefit, and if so, whether it is "rational" to rule that such inferences should be avoided when possible, and if so, whether the best way to disallow a large set of potential inferences is [to] proscribe the use of the categories that facilitate them—and if _not_, whether proscribing the use of a category in _public communication_ constitutes "proscribing" it more generally, in a way that interferes with one's ability to perform "rational" thinking in the privacy of one's own mind.
>
> That's four possible (serial) cruxes I listed, one corresponding to each "whether".

I reply: on the first and second cruxes, concerning whether some categories facilitate inferences that cause more harm than benefit on the whole and whether they should be avoided when possible, I ask: harm _to whom?_ Not all agents have the same utility function! If some people are harmed by other people making certain probabilistic inferences, then it would seem that there's a _conflict_ between the people harmed (who prefer that such inferences be avoided if possible), and people who want to make and share probabilistic inferences about reality (who think that that which can be destroyed by the truth, should be).

On the third crux, whether the best way to disallow a large set of potential inferences is to proscribe the use of the categories that facilitate them: well, it's hard to be sure whether it's the _best_ way: no doubt a more powerful intelligence could search over a larger space of possible strategies than me. But yeah, if your goal is to _prevent people from noticing facts about reality_, then preventing them from using words that refer to those facts seems like a pretty effective way to do it!

On the fourth crux, whether proscribing the use of a category in public communication constitutes "proscribing" in a way that interferes with one's ability to think in the privacy of one's own mind: I think this is mostly true for humans. We're social animals. To the extent that we can do higher-grade cognition at all, we do it using our language faculties that are designed for communicating with others. How are you supposed to think about things that you don't have words for?

Xu continues:

> I could have included a fifth and final crux about whether, even _if_ The Thing In Question interfered with rational thinking, that might be worth it; but this I suspect you would not concede, and (being a rationalist) it's not something I'm willing to concede myself, so it's not a crux in a meaningful sense between us (or any two self-proclaimed "rationalists").
>
> My sense is that you have (thus far, in the parts of the public discussion I've had the opportunity to witness) been behaving as though the _one and only crux in play_—that is, the True Source of Disagreement—has been the fifth crux, the thing I refused to include with the others of its kind. Your accusations against the caliphate _only make sense_ if you believe the dividing line between your behavior and theirs is caused by a disagreement as to whether "rational" thinking is "worth it"; as opposed to, say, what kind of prescriptions "rational" thinking entails, and which (if any) of those prescriptions are violated by using a notion of gender (in public, where you do not know in advance who will receive your communications) that does not cause massive psychological damage to some subset of people.
>
> Perhaps it is your argument that all four of the initial cruxes I listed are false; but even if you believe that, it should be within your set of ponderable hypotheses that people might disagree with you about that, and that they might perceive the disagreement to be _about_ that, rather than (say) about whether subscribing to the Blue Tribe view of gender makes them a Bad Rationalist, but That's Okay because it's Politically Convenient.
>
> This is the sense in which I suspect you are coming across as failing to properly Other-model.

After everything I've been through over the past six years, I'm inclined to think it's not a "disagreement" at all.

It's a _conflict_. I think what's actually at issue is that, at least in this domain, I want people to tell the truth, and the Caliphate wants people to not tell the truth. This isn't a disagreement about rationality, because telling the truth _isn't_ rational _if you don't want people to know things_.

At this point, I imagine defenders of the Caliphate are shaking their heads in disappointment at how I'm doubling down on refusing to Other-model. But—_am_ I? Isn't this just a re-statement of Xu's first proposed crux, except reframed as a "values difference" rather than a "disagreement"?

Is the problem that my use of the phrase "tell the truth" (which has positive valence in our culture) functions to sneak in normative connotations favoring "my side"?

Fine. Objection sustained. I'm happy to use Xu's language: I think what's actually at issue is that, at least in this domain, I want to facilitate people making inferences (full stop), and the Caliphate wants to _not_ facilitate people making inferences that, on the whole, cause more harm than benefit. This isn't a disagreement about rationality, because facilitating inferences _isn't_ rational _if you don't want people to make inferences_ (for example, because they cause more harm than benefit).

Better? Perhaps, to some 2022-era rats and EAs, this formulation makes my position look obviously in the wrong: I'm saying that I'm fine with my inferences _causing more harm than benefit_ (!). Isn't that monstrous of me? Why would someone do that?

One of the better explanations of this that I know of was (again, as usual) authored by Yudkowsky in 2007, in a post titled ["Doublethink (Choosing to be Biased)"](https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased).

The Yudkowsky of 2007 starts by quoting a passage from George Orwell's _1984_, in which O'Brien (a loyal member of the ruling Party in the totalitarian state depicted in the novel) burns a photograph of Jones, Aaronson, and Rutherford (former Party leaders whose existence has been censored from the historical record). Immediately after burning the photograph, O'Brien denies that it ever existed.

The Yudkowsky of 2007 continues—it's again worth quoting at length—

> What if self-deception helps us be happy? What if just running out and overcoming bias will make us—gasp!—_unhappy?_ Surely, _true_ wisdom would be _second-order_ rationality, choosing when to be rational. That way you can decide which cognitive biases should govern you, to maximize your happiness.
>
> Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.
>
> [...]
>
> For second-order rationality to be genuinely _rational_, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality. If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting. I don't mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.
>
> You can't know the consequences of being biased, until you have already debiased yourself. And then it is too late for self-deception.
>
> The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is not second-order rationality. It is willful stupidity.
>
> [...]
>
> One of the chief pieces of advice I give to aspiring rationalists is "Don't try to be clever." And, "Listen to those quiet, nagging doubts." If you don't know, you don't know _what_ you don't know, you don't know how _much_ you don't know, and you don't know how much you _needed_ to know.
>
> There is no second-order rationality. There is only a blind leap into what may or may not be a flaming lava pit. Once you _know_, it will be too late for blindness.

Looking back on this from 2022, the only criticism I have is that the Yudkowsky of 2007 was too optimistic in doubting that "such a lunatic dislocation in the mind could really happen." In some ways, people's actual behavior is _worse_ than what Orwell depicted. The Party of Orwell's _1984_ covers its tracks: O'Brien takes care to burn the photograph _before_ denying memory of it, because it would be _too_ absurd for him to act like the photo had never existed while it was still right there in front of him.

In contrast, Yudkowsky's Caliphate of the current year _doesn't even bother covering its tracks_. Turns out, it doesn't need to! People just don't remember things!

The [flexibility of natural language is a _huge_ help here](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). If the caliph were to _directly_ contradict himself in simple, unambiguous language—to go from "Oceania is not at war with Eastasia" to "Oceania is at war with Eastasia" without any acknowledgement that anything had changed—_then_ too many people might notice that those two sentences are the same except that one has the word _not_ in it. What's a caliph to do, if he wants to declare war on Eastasia without acknowledging or taking responsibility for the decision to do so?

The solution is simple: just—use more words! Then if someone tries to argue that you've _effectively_ contradicted yourself, accuse them of being uncharitable and failing to model the Other. You can't lose! Anything can be consistent with anything if you apply a sufficiently charitable reading; whether Oceania is at war with Eastasia depends on how you choose to draw the category boundaries of "at war."

Thus, O'Brien should envy Yudkowsky: burning the photograph turns out to be unnecessary! ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) is _still up_ and not retracted, but that didn't stop the Yudkowsky of 2016 from pivoting to ["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) when that became a politically favorable thing to say. I claim that these posts _effectively_ contradict each other. The former explains that men who fantasize about being women are _not only_ out of luck given foreseeable technology, but _also_ that their desires may not even be coherent (!), whereas the latter claims that men who wish they were women may, in fact, _already_ be women in some unspecified psychological sense.

_Technically_, these don't _strictly_ contradict each other: I can't point to a pair of sentences, one from each, that are the same except that one includes the word _not_. (And even if there were such sentences, I wouldn't be able to prove that the other words were being used in the same sense in both sentences.) One _could_ try to argue that "Changing Emotions" is addressing cis men with a weird sex-change fantasy, whereas the "ones with penises are actually women" claim was about trans women, which are a different thing.

_Realistically_ ... no. These two posts _can't_ both be right. In itself, this isn't a problem: people change their minds sometimes, which is great! But when people _actually_ change their minds (as opposed to merely changing what they say in public for political reasons), you expect them to be able to _acknowledge_ the change, and hopefully explain what new evidence or reasoning brought them around. If they can't even _acknowledge the change_, that's pretty Orwellian, like O'Brien trying to claim that the photograph is of different men who just coincidentally happen to look like Jones, Aaronson, and Rutherford.

And if a little bit of Orwellianism on specific, narrow, highly-charged topics might be forgiven—because everyone else in your Society is doing it, and you would be punished for not playing along, an [inadequate equilibrium](https://equilibriabook.com/) that no one actor has the power to defy—might we not expect the father of the "rationalists" to stand his ground on the core theses of his ideology, like whether telling the truth is good?

I guess not! ["Doublethink (Choosing to be Biased)"](https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased) is _still up_ and not retracted, but that didn't stop Yudkowsky from [endorsing everything Xu said](https://twitter.com/ESYudkowsky/status/1436025983522381827) about "whether some categories facilitate inferences that _do_, on the whole, cause more harm than benefit, and if so, whether it is 'rational' to rule that such inferences should be avoided when possible" being different cruxes than "whether 'rational' thinking is 'worth it'".

I don't doubt Yudkowsky could come up with some clever casuistry why, _technically_, the text he wrote in 2007 and the text he endorsed in 2021 don't contradict each other. But _realistically_ ... again, no.

[TODO: elaborate on how 2007!Yudkowsky and 2021!Xu are saying the opposite things if you just take a plain-language reading and consider, not whether individual sentences can be interpreted as "true", but what kind of _optimization_ the text is doing to the behavior of receptive readers]


I don't, actually, expect people to spontaneously blurt out everything they believe to be true that Stalin would find offensive. "No comment" would be fine. Even selective argumentation that's _clearly labeled as such_ would be fine. (There's no shame in being an honest specialist who says, "I've mostly thought about these issues through the lens of ideology _X_, and therefore can't claim to be comprehensive; if you want other perspectives, you'll have to read other authors and think it through for yourself.")

What's _not_ fine is selective argumentation while claiming "confidence in [your] own ability to independently invent everything important that would be on the other side of the filter and check it [yourself] before speaking" when you _very obviously have done no such thing_.

------

In October 2021, Jessica Taylor [published a memoir about her experiences at MIRI](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe), making analogies between sketchy social pressures she had experienced in the core rationalist community (around short AI timelines, secrecy, deference to community leaders, _&c._) and those reported in [Zoe Curzi's recent account of Leverage Research](https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b).

Scott Alexander posted a comment "add[ing] some context [he thought was] important to this", essentially blaming Jessica's problems on her association with Michael Vassar, describing her psychotic episode as a "Vassar-related phenomenon" (!).

I thought this was unfair, and [said so](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=GzqsWxEp8uLcZinTy) (and [offered textual evidence](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=yKo2uuCcwJxbwwyBw) against the claim that Michael was _trying_ to drive Jessica crazy).

To me, Scott's behavior looked like raw factional conflict: Jessica had some negative-valence things to say about the Caliphate, so Caliphate leaders moved in to discredit her by association.

It was effective, though. After Alexander's comment (and [a comment from Yudkowsky](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=x5ajGhggHky9Moyr8) uncritically accepting Alexander's charge of Vassar "causing psychotic breaks in people"), the karma score on Jessica's post dropped by more than half, while Alexander's comment got voted up to more than 380 karma.

[TODO my conversation with Scott—

> when you had some more minor issues in 2019 I was more in the loop and I ended out emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were maybe feeling slightly better, and obviously they just responded with their "it's correct to be freaking about learning your entire society is corrupt and gaslighting" shtick.

 * Scott interviewed me
 * I said

]

In December, Jessica published [a followup post explaining the circumstances of her psychotic episode in more detail](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards).

[TODO: Scott concedes: https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=RGKkmyvyoeWe2LB7d ]

------

[TODO:
Is this the hill _he_ wants to die on? If the world is ending either way, wouldn't it be more dignified for him to die _without_ Stalin's dick in his mouth?

 * Maybe not? If "dignity" is a term of art for log-odds of survival, maybe self-censoring to maintain influence over what big state-backed corporations are doing is "dignified" in that sense
]

At the end of the September 2021 Twitter altercation, I [said that I was upgrading my "mute" of @ESYudkowsky to a "block"](https://twitter.com/zackmdavis/status/1435468183268331525). Better to just leave, rather than continue to hang around in his mentions trying (consciously or otherwise) to pick fights, like a crazy ex-girlfriend. (["I have no underlying issues to address; I'm certifiably cute, and adorably obsessed"](https://www.youtube.com/watch?v=UMHz6FiRzS8) ...)

I still had more things to say—a reply to the February 2021 post on pronoun reform, and the present memoir telling this Whole Dumb Story—but those could be written and published unilaterally. Given that we clearly weren't going to get to clarity and resolution, I didn't need to bid for any more of my ex-hero's attention and waste more of his time (valuable time, _limited_ time); I owed him that much.

Leaving a personality cult is hard. As I struggled to write, I noticed that I was wasting a lot of cycles worrying about what he'd think of me, rather than saying the things I needed to say. I knew it was pathetic that my religion was so bottlenecked on _one guy_—particularly since the holy texts themselves (written by that one guy) [explicitly said not to do that](https://www.lesswrong.com/posts/t6Fe2PsEwb3HhcBEr/the-litany-against-gurus)—but unwinding those psychological patterns was still a challenge.

An illustration of the psychological dynamics at play: on an EA forum post about demandingness objections to longtermism, Yudkowsky [commented that](https://forum.effectivealtruism.org/posts/fStCX6RXmgxkTBe73/towards-a-weaker-longtermism?commentId=Kga3KGx6WAhkNM3qY) he was "broadly fine with people devoting 50%, 25% or 75% of themselves to longtermism, in that case, as opposed to tearing themselves apart with guilt and ending up doing nothing much, which seems to be the main alternative."

I found the comment reassuring regarding the extent, or lack thereof, of my own contributions to the great common task—and that's the problem: I found the _comment_ reassuring, not the _argument_. It would make sense to be reassured by the claim (if true) that human psychology is such that I don't realistically have the option of devoting more than 25% of myself to the great common task. It does _not_ make sense to be reassured that _Eliezer Yudkowsky said he's broadly fine with it_. That's just being a personality-cultist.

[TODO last email and not bothering him—
 * Although, as I struggled to write, I noticed I was wasting cycles worrying about what he'd think of me
 * January 2022, I wrote to him asking if he cared if I said negative things about him, that it would be easier if he wouldn't hold it against me, and explained my understanding of the privacy norm (Subject: "blessing to speak freely, and privacy norms?")
 * in retrospect, I was wrong to ask that. I _do_ hold it against him. And if I'm entitled to my feelings, isn't he entitled to his?
 * what is the exact scope of not bothering him? I actually had left a Facebook comment shortly after blocking him on Twitter, and his reply seemed to imply that I did have commenting privileges (yudkowsky-twitter_is_worse_for_you.png)
]

In February 2022, I finally managed to finish a draft of ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/) (A year after the post it replies to! I did other things that year, probably.) It's long (12,000 words), because I wanted to be thorough and cover all the angles. (To paraphrase Ralph Waldo Emerson, when you strike at Eliezer Yudkowsky, _you must kill him._)

If I had to compress it by a factor of 200 (down to 60 words), I'd say my main point was that, given a conflict over pronoun conventions, there's no "right answer", but we can at least be objective in _describing what the conflict is about_, and Yudkowsky wasn't doing that; his "simplest and best proposal" favored the interests of some parties to the dispute (as was seemingly inevitable), _without admitting he was doing so_ (which was not inevitable).[^describing-the-conflict]

[^describing-the-conflict]: I had been making this point for four years. [As I wrote in February 2018's "The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/#describing-the-conflict), "If different political factions are engaged in conflict over how to define the extension of some common word [...] rationalists may not be able to say that one side is simply right and the other is simply wrong, but we can at least strive for objectivity in _describing the conflict_."

In addition to prosecuting the object level (about pronouns) and the meta level (about acknowledging the conflict) for 12,000 words, I had also written _another_ several thousand words at the meta-meta level, about the political context of the argument and Yudkowsky's comments about what is "sometimes personally prudent and not community-harmful", but I wasn't sure whether to include it in the post itself, or save it for the memoir, or post it as a separate comment on the _Less Wrong_ linkpost mirror. I was worried about it being too "aggressive", attacking Yudkowsky too much, disregarding our usual norms about only attacking arguments and not people. I wasn't sure how to be aggressive and explain _why_ I wanted to disregard the usual norms in this case (why it was _right_ to disregard the usual norms in this case) without the Whole Dumb Story of the previous six years leaking in (which would take even longer to write).

I asked secret posse member for political advice. I thought my arguments were very strong, but that the object-level argument about pronoun conventions just wasn't very interesting; what I _actually_ wanted people to see was the thing where the Big Yud of the current year _just can't stop lying for political convenience_. How could I possibly pull that off in a way that the median _Less Wrong_-er would hear? Was it a good idea to "go for the throat" with the "I'm better off because I don't trust Eliezer Yudkowsky to tell the truth in this domain" line?

Secret posse member said the post was super long and boring. ("Yes. I'm bored, too," I replied.) They said that I was optimizing for my having said the thing, rather than for the reader being able to hear it. In the post, I had complained that you can't have it both ways: either pronouns convey sex-category information (in which case, people who want to use natal-sex categories have an interest in defending their right to misgender), or they don't (in which case, there would be no reason for trans people to care about what pronouns people use for them). But by burying the thing I actually wanted people to see in thousands of words of boring argumentation, I was evading the fact that _I_ couldn't have it both ways: either I was calling out Yudkowsky as betraying his principles and being dishonest, or I wasn't.

"[I]f you want to say the thing, say it," concluded secret posse member. "I don't know what you're afraid of."

I was afraid of taking irrevocable war actions against the person who taught me everything I know. (And his apparent conviction that the world was ending _soon_, made it worse. Wouldn't it feel petty, if the last thing you ever said to your grandfather was calling him a liar in front of the whole family, even if he had in fact lied?)

I wanted to believe that if I wrote all the words dotting every possible _i_ and crossing every possible _t_ at all three levels of meta, then that would make it [a description and not an attack](http://benjaminrosshoffman.com/can-crimes-be-discussed-literally/)—that I could have it both ways if I explained the lower level of organization beneath the high-level abstractions of "betraying his principles and being dishonest." If that didn't work because [I only had five words](https://www.lesswrong.com/posts/4ZvJab25tDebB8FGE/you-have-about-five-words), then—I didn't know what I'd do. I'd think about it.

After a month of dawdling, I eventually decided to pull the trigger on publishing "Challenges", without the extended political coda.[^coda] The post was a little bit mean to Yudkowsky, but not so mean that I was scared of the social consequences of pulling the trigger. (Yudkowsky had been mean to Christiano and Richard Ngo and Rohin Shah in [the recent MIRI dialogues](https://www.lesswrong.com/s/n945eovrA3oDueqtq); I didn't think this was worse than that.)