
 * I think it would have been _more_ creepy, if I tried to convince her that I was "actually" a woman in some unspecified metaphysical sense

 * my comment about how I wished I could have a photograph, but that it would be rude to ask; she said "No", and I wanted to clarify that I didn't ask, I said I wished I _could_ ask—but, you see, her culture didn't support that level of indirection; the claim that I wasn't asking, would seem dishonest

> 6. Do not ask for additional pictures, selfies or services they have not already agreed upon.

]

[ TODO— New York
 * I made $60 babysitting Zvi Mowshowitz's kids.
 * met my NRx Twitter mutual, wore my Quillette shirt
 * he had been banned from Slate Star Codex "for no reason"
 * he offered to buy me a drink, I said I didn't drink, but he insisted that being drunk was the ritual for how men establish trust, so I had a glass and a half of wine
 * it was so refreshing—not being constrained
 * I explained the AI risk case; he mentioned black people having larger wingspan

 * met Ben and his new girlfriend; Jessica wasn't around; he said the psych disaster was a betrayal, but a finite one; Ben's suggestion that if CfAR were serious, they'd hire me

]

------

In October 2021, Jessica Taylor [published a post about her experiences at MIRI](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe), making analogies between sketchy social pressures she had experienced in the core rationalist community (around short AI timelines, secrecy, deference to community leaders, _&c._) and those reported in [Zoe Curzi's recent account of her time at Leverage Research](https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b).

Scott Alexander posted a comment claiming to add important context, essentially blaming Jessica's problems on her association with Michael Vassar, to the point of describing her psychotic episode as a "Vassar-related phenomenon" (!). Alexander accused Vassar of trying to "jailbreak" people from normal social reality, which "involve[d] making them paranoid about MIRI/CFAR and convincing them to take lots of drugs". Yudkowsky posted [a comment that uncritically validated Scott's reliability as a narrator](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=x5ajGhggHky9Moyr8).

To me, this looked like raw factional conflict: Jessica had some negative-valence things to say about the Caliphate, so Caliphate leaders moved in to discredit her by association. Quite effectively, as it turned out: the karma score on Jessica's post dropped by more than half, while Alexander's comment got voted up to more than 380 karma. (The fact that Scott said ["it's fair for the community to try to defend itself"](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=qsEMmdo6DKscvBvDr) in the ensuing back-and-forth suggests that he also saw the conversation as an adversarial one, even if he thought Jessica shot first.)

I explained [why I thought Scott was being unfair](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=GzqsWxEp8uLcZinTy) (and [offered textual evidence](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=yKo2uuCcwJxbwwyBw) against the silly claim that Michael was _trying_ to drive Jessica crazy).

Scott [disagreed](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=XpEpzvHPLkCH7W7jS) that joining the "Vassarites"[^vassarite-scare-quotes] wasn't harmful to me. He revealed that during my March 2019 problems, he had emailed my posse:

-> accusing them of making your situation worse and asking them to maybe lay off you until you were maybe feeling slightly better, and obviously they just responded with their "it's correct to be freaking about learning your entire society is corrupt and gaslighting" shtick.

[^vassarite-scare-quotes]: Scare quotes because "Vassarite" seems to be Alexander's coinage; we didn't call ourselves that.

But I will _absolutely_ bite the bullet on it being correct to freak out about learning your entire Society is corrupt and gaslighting (as I explained to Scott in an asynchronous 22–27 October conversation on Discord).

Imagine living in the Society of Alexander's ["Kolmogorov Complicity and the Parable of Lightning"](https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) (which I keep linking) in the brief period when the lightning taboo is being established, trying to make sense of everyone you know suddenly deciding, seemingly in lockstep, that thunder comes before lightning. (When you try to point out that this isn't true and no one believed it five years ago, they point out that it depends on what you mean by the word 'before'.)

Eventually, you would get used to it, but at first, I think this would be legitimately pretty upsetting! If you were already an emotionally fragile person, it might even escalate to a psychiatric emergency through the specific mechanism "everyone I trust is inexplicably lying about lightning → stress → sleep deprivation → temporary psychosis". (That is, it's not that Society being corrupt directly causes mental illness—that would be silly—but confronting a corrupt Society is very stressful, and that can [snowball into](https://lorienpsych.com/2020/11/11/ontology-of-psychiatric-conditions-dynamic-systems/) things like lost sleep, and sleep is [really](https://www.jneurosci.org/content/34/27/9134.short) [biologically important](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6048360/).)

This is a pretty bad situation to be in—to be faced with the question, "Am _I_ crazy, or is _everyone else_ crazy?" But one thing that would make it slightly less bad is if you had a few allies, or even just _an_ ally—someone to confirm that the obvious answer, "It's not you," is, in fact, obvious.

But in a world where [everyone who's anyone](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) agrees that thunder comes before lightning—including all the savvy consequentialists who realize that being someone who's anyone is an instrumentally convergent strategy for acquiring influence—anyone who would be so imprudent as to take your everyone-is-lying-about-lightning concerns seriously, would have to be someone with ... a nonstandard relationship to social reality. Someone meta-savvy to the process of people wanting to be someone who's anyone. Someone who, honestly, is probably some kind of _major asshole_. Someone like—Michael Vassar!

From the perspective of an outside observer playing a Kolmogorov-complicity strategy, your plight might look like "innocent person suffering from mental illness in need of treatment/management", and your ally like "bad influence who is egging the innocent person on for their own unknown but probably nefarious reasons". If that outside observer chooses to draw the category boundaries of "mental illness" appropriately, that story might even be true. So why not quit making such a fuss, and accept treatment? Why fight, if fighting comes at a personal cost? Why not submit?

I had my answer. But I wasn't sure that Scott would understand.

To assess whether joining the "Vassarites" had been harmful to me, one would need to answer: as compared to what? In the counterfactual where Michael vanished from the world in 2016, I think I would have been just as upset about the same things for the same reasons, but with fewer allies and fewer ideas to make sense of what was going on in my social environment.

Additionally, it was really obnoxious when people tried to use my association with Michael to discredit the content of what I was saying—interpreting me as Michael's pawn. Gwen, one of the "Zizians", in a blog post about her grievances against CfAR, has [a section on "Attempting to erase the agency of everyone who agrees with our position"](https://everythingtosaveit.how/case-study-cfar/#attempting-to-erase-the-agency-of-everyone-who-agrees-with-our-position), complaining about how people try to cast her and Somni and Emma as Ziz's minions, rather than acknowledging that they're separate people with their own ideas who had good reasons to work together. I empathized a lot with this. My thing, and separately Ben Hoffman's [thing about Effective Altruism](http://benjaminrosshoffman.com/drowning-children-rare/), and separately Jessica's thing in the OP, didn't really have a whole lot to do with each other, except as symptoms of "the so-called 'rationalist' community is not doing what it says on the tin" (which itself isn't a very specific diagnosis). But insofar as our separate problems did have a hypothesized common root cause, it made sense for us to talk to each other and to Michael about them.

Was Michael using me, at various times? I mean, probably. But just as much, _I was using him_. Particularly with the November 2018–April 2019 thing (where I and the "Vassarite" posse kept pestering Scott and Eliezer to clarify that categories aren't arbitrary): that was the "Vassarites" doing an _enormous_ favor for _me_ and _my_ agenda. (If Michael and crew hadn't had my back, I wouldn't have been anti-social enough to keep escalating.) And here Scott was trying to get away with claiming that _they_ were making my situation worse? That's _absurd_. Had he no shame?

I _did_, I admitted, have some specific, nuanced concerns—especially since the December 2020 psychiatric disaster, with some nagging doubts beforehand—about ways in which being an inner-circle "Vassarite" might be bad for someone, but at the moment, I was focused on rebutting Scott's story, which was _silly_. A defense lawyer has an easier job than a rationalist—if the prosecution makes a terrible case, you can just destroy it, without it being your job to worry about whether your client is separately guilty of vaguely similar crimes that the incompetent prosecution can't prove.

When Scott expressed concern about the group-yelling behavior that [Ziz had described in a blog comment](https://sinceriously.fyi/punching-evil/#comment-2345) and [Yudkowsky had described on Twitter](https://twitter.com/ESYudkowsky/status/1356494768960798720), I clarified that that thing was very different from what it was like to actually be friends with them. The everyone-yelling operation seemed like a recent innovation (one that I didn't like) that they wielded as a psychological weapon only against people they thought were operating in bad faith? In the present conversation with Scott, I had been focusing on rebutting the claim that my February–April 2017 (major) and March 2019 (minor) psych problems were caused by the "Vassarites", because with regard to those _specific_ incidents, the charge was absurd and false. But, well ... my January 2021 (minor) psych problems actually _were_ the result of being on the receiving end of the everyone-yelling thing. I briefly described the December 2020 "Lenore" disaster, and in particular the part where Michael/Jessica/Jack yelled at me.

[TODO: Scott post-Jessica Discord parley, cont'd]

------

[TODO:
Is this the hill _he_ wants to die on? If the world is ending either way, wouldn't it be more dignified for him to die _without_ Stalin's dick in his mouth?

> The Kiritsugu shrugged. "When I have no reason left to do anything, I am someone who tells the truth."
https://www.lesswrong.com/posts/4pov2tL6SEC23wrkq/epilogue-atonement-8-8

 * Maybe not? If "dignity" is a term of art for log-odds of survival, maybe self-censoring to maintain influence over what big state-backed corporations are doing is "dignified" in that sense
]

At the end of the September 2021 Twitter altercation, I [said that I was upgrading my "mute" of @ESYudkowsky to a "block"](https://twitter.com/zackmdavis/status/1435468183268331525). Better to just leave, rather than continue to hang around in his mentions trying (consciously [or otherwise](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie)) to pick fights, like a crazy ex-girlfriend. (["I have no underlying issues to address; I'm certifiably cute, and adorably obsessed"](https://www.youtube.com/watch?v=UMHz6FiRzS8) ...)

I still had more things to say—a reply to the February 2021 post on pronoun reform, and the present memoir telling this Whole Dumb Story—but those could be written and published unilaterally. Given that we clearly weren't going to get to clarity and resolution, I didn't need to bid for any more of my ex-hero's attention and waste more of his time (valuable time, _limited_ time); I owed him that much.

Leaving a personality cult is hard. As I struggled to write, I noticed that I was wasting a lot of cycles worrying about what he'd think of me, rather than saying the things I needed to say. I knew it was pathetic that my religion was so bottlenecked on _one guy_—particularly since the holy texts themselves (written by that one guy) [explicitly said not to do that](https://www.lesswrong.com/posts/t6Fe2PsEwb3HhcBEr/the-litany-against-gurus)—but unwinding those psychological patterns was still a challenge.

An illustration of the psychological dynamics at play: on an EA Forum post about demandingness objections to longtermism, Yudkowsky [commented that](https://forum.effectivealtruism.org/posts/fStCX6RXmgxkTBe73/towards-a-weaker-longtermism?commentId=Kga3KGx6WAhkNM3qY) he was "broadly fine with people devoting 50%, 25% or 75% of themselves to longtermism, in that case, as opposed to tearing themselves apart with guilt and ending up doing nothing much, which seems to be the main alternative."

I found the comment reassuring regarding the extent or lack thereof of my own contributions to the great common task—and that's the problem: I found the _comment_ reassuring, not the _argument_. It would make sense to be reassured by the claim (if true) that human psychology is such that I don't realistically have the option of devoting more than 25% of myself to the great common task. It does _not_ make sense to be reassured that _Eliezer Yudkowsky said he's broadly fine with it_. That's just being a personality-cultist.

[TODO last email and not bothering him—
 * Although, as I struggled to write, I noticed I was wasting cycles worrying about what he'd think of me
 * January 2022, I wrote to him asking if he cared if I said negative things about him, that it would be easier if he wouldn't hold it against me, and explained my understanding of the privacy norm (Subject: "blessing to speak freely, and privacy norms?")
 * in retrospect, I was wrong to ask that. I _do_ hold it against him. And if I'm entitled to my feelings, isn't he entitled to his?
 * what is the exact scope of not bothering him? I actually had left a Facebook comment shortly after blocking him on Twitter, and his reply seemed to imply that I did have commenting privileges (yudkowsky-twitter_is_worse_for_you.png)
]

In February 2022, I finally managed to finish a draft of ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/) (A year after the post it replies to! I did other things that year, probably.) It's long (12,000 words), because I wanted to be thorough and cover all the angles. (To paraphrase Ralph Waldo Emerson, when you strike at Eliezer Yudkowsky, _you must kill him._)

If I had to compress it by a factor of 200 (down to 60 words), I'd say my main point was that, given a conflict over pronoun conventions, there's no "right answer", but we can at least be objective in _describing what the conflict is about_, and Yudkowsky wasn't doing that; his "simplest and best proposal" favored the interests of some parties to the dispute (as was seemingly inevitable), _without admitting he was doing so_ (which was not inevitable).[^describing-the-conflict]

[^describing-the-conflict]: I had been making this point for four years. [As I wrote in February 2018's "The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/#describing-the-conflict), "If different political factions are engaged in conflict over how to define the extension of some common word [...] rationalists may not be able to say that one side is simply right and the other is simply wrong, but we can at least strive for objectivity in _describing the conflict_."

In addition to prosecuting the object level (about pronouns) and the meta level (about acknowledging the conflict) for 12,000 words, I had also written _another_ several thousand words at the meta-meta level, about the political context of the argument and Yudkowsky's comments about what is "sometimes personally prudent and not community-harmful", but I wasn't sure whether to include it in the post itself, or save it for the memoir, or post it as a separate comment on the _Less Wrong_ linkpost mirror. I was worried about it being too "aggressive", attacking Yudkowsky too much, disregarding our usual norms about only attacking arguments and not people. I wasn't sure how to be aggressive and explain _why_ I wanted to disregard the usual norms in this case (why it was _right_ to disregard the usual norms in this case) without the Whole Dumb Story of the previous six years leaking in (which would take even longer to write).

I asked secret posse member for political advice. I thought my arguments were very strong, but that the object-level argument about pronoun conventions just wasn't very interesting; what I _actually_ wanted people to see was the thing where the Big Yud of the current year _just can't stop lying for political convenience_. How could I possibly pull that off in a way that the median _Less Wrong_-er would hear? Was it a good idea to "go for the throat" with the "I'm better off because I don't trust Eliezer Yudkowsky to tell the truth in this domain" line?

Secret posse member said the post was super long and boring. ("Yes. I'm bored, too," I replied.) They said that I was optimizing for my having said the thing, rather than for the reader being able to hear it. In the post, I had complained that you can't have it both ways: either pronouns convey sex-category information (in which case, people who want to use natal-sex categories have an interest in defending their right to misgender), or they don't (in which case, there would be no reason for trans people to care about what pronouns people use for them). But by burying the thing I actually wanted people to see in thousands of words of boring argumentation, I was evading the fact that _I_ couldn't have it both ways: either I was calling out Yudkowsky as betraying his principles and being dishonest, or I wasn't.

"[I]f you want to say the thing, say it," concluded secret posse member. "I don't know what you're afraid of."

I was afraid of taking irrevocable war actions against the person who taught me everything I know. (And his apparent conviction that the world was ending _soon_ made it worse. Wouldn't it feel petty, if the last thing you ever said to your grandfather was calling him a liar in front of the whole family, even if he had in fact lied?)

I wanted to believe that if I wrote all the words dotting every possible _i_ and crossing every possible _t_ at all three levels of meta, then that would make it [a description and not an attack](http://benjaminrosshoffman.com/can-crimes-be-discussed-literally/)—that I could have it both ways if I explained the lower level of organization beneath the high-level abstractions of "betraying his principles and being dishonest." If that didn't work because [I only had five words](https://www.lesswrong.com/posts/4ZvJab25tDebB8FGE/you-have-about-five-words), then—I didn't know what I'd do. I'd think about it.
After a month of dawdling, I eventually decided to pull the trigger on publishing "Challenges", without the extended political coda.[^coda] The post was a little bit mean to Yudkowsky, but not so mean that I was scared of the social consequences of pulling the trigger. (Yudkowsky had been mean to Christiano and Richard Ngo and Rohin Shah in [the recent MIRI dialogues](https://www.lesswrong.com/s/n945eovrA3oDueqtq); I didn't think this was worse than that.)

[^coda]: The text from the draft coda would later be incorporated into the present memoir.

I cut the words "in this domain" from the go-for-the-throat concluding sentence that I had been worried about. "I'm better off because I don't trust Eliezer Yudkowsky to tell the truth," full stop.

The post was a _critical success_ by my accounting, due to eliciting [a highly-upvoted (110 karma at press time) comment by _Less Wrong_ administrator Oliver Habryka](https://www.lesswrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal?commentId=he8dztSuBBuxNRMSY) on the _Less Wrong_ mirror. Habryka wrote:

-> [...] basically everything in this post strikes me as "obviously true" and I had a very similar reaction to what the OP says now, when I first encountered the Eliezer Facebook post that this post is responding to.
>
-> And I do think that response mattered for my relationship to the rationality community. I did really feel like at the time that Eliezer was trying to make my map of the world worse, and it shifted my epistemic risk assessment of being part of the community from "I feel pretty confident in trusting my community leadership to maintain epistemic coherence in the presence of adversarial epistemic forces" to "well, I sure have to at least do a lot of straussian reading if I want to understand what people actually believe, and should expect that depending on the circumstances community leaders might make up sophisticated stories for why pretty obviously true things are false in order to not have to deal with complicated political issues".
>
-> I do think that was the right update to make, and was overdetermined for many different reasons, though it still deeply saddens me.

Brutal! Recall that Yudkowsky's justification for his behavior had been that "it is sometimes personally prudent and _not community-harmful_ to post your agreement with Stalin" (emphasis mine), and here we had the administrator of Yudkowsky's _own website_ saying that he's deeply saddened that he now expects Yudkowsky to _make up sophisticated stories for why pretty obviously true things are false_ (!!).

Is that ... _not_ evidence of harm to the community? If that's not community-harmful in Yudkowsky's view, then what would be an example of something that _would_ be? _Reply, motherfucker!_

... or rather, "Reply, motherfucker", is what I fantasized about being able to say to Yudkowsky, if I hadn't already expressed an intention not to bother him anymore.

[TODO: the Death With Dignity era, April 2022

"Death With Dignity" isn't really an update; he used to refuse to give a probability while insisting that FAI was "impossible", and now he says the probability is ~0

https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible

 * swimming to shore analogy https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy?commentId=R59aLxyj3rvjBLbHg

> your plane crashed in the ocean. To survive, you must swim to shore. You know that the shore is west, but you don't know how far. The optimist thinks the shore is just over the horizon; we only need to swim a few miles and we'll almost certainly make it. The pessimist thinks the shore is a thousand miles away and we will surely die. But the optimist and pessimist can both agree on how far we've swum up to this point, and that the most dignified course of action is "Swim west as far as you can."

 * I've believed since Kurzweil that technology will remake the world sometime in the 21st century; it's just that "the machines won't replace us, because we'll be them" doesn't seem credible

 * I agree that it would be nice if Earth had a plan; it would be nice if people figured out the stuff Yudkowsky did earlier; Asimov wrote about robots and psychohistory, but he still portrayed a future galaxy populated by humans, which seems so silly now

/2017/Jan/from-what-ive-tasted-of-desire/
]

Meanwhile, Yudkowsky started writing fiction again, largely in the form of Glowfic (a genre of collaborative storytelling pioneered by Alicorn) featuring the world of dath ilan (capitalization _sic_). Dath ilan had originally been introduced in a [2014 April Fool's Day post](https://yudkowsky.tumblr.com/post/81447230971/my-april-fools-day-confession), in which Yudkowsky "confessed" that the explanation for his seemingly implausible genius is that he's "actually" an ordinary person from a smarter, saner alternate version of Earth where the ideas he presented to this world as his own were common knowledge.

The bulk of the dath ilan Glowfic canon was an epic titled [_Planecrash_](https://www.glowfic.com/boards/215)[^planecrash-title] coauthored with Lintamande, in which Keltham, an unusually selfish teenage boy from dath ilan, apparently dies in a freak aviation accident, and [wakes up in the world of](https://en.wikipedia.org/wiki/Isekai) Golarion, setting of the _Dungeons-&-Dragons_–alike _Pathfinder_ role-playing game. A [couple](https://www.glowfic.com/posts/4508) of [other](https://glowfic.com/posts/6263) Glowfic stories with different coauthors further flesh out the setting of dath ilan, which inspired a new worldbuilding trope, the [_medianworld_](https://www.glowfic.com/replies/1619639#reply-1619639), a setting where the average person is like the author along important dimensions.[^medianworlds]

[^planecrash-title]: The title is a pun, referring to both the airplane crash leading to Keltham's death in dath ilan, and how his resurrection in Golarion collides dath ilan with [the "planes" of existence of the _Pathfinder_ universe](https://pathfinderwiki.com/wiki/Great_Beyond).

[^medianworlds]: You might think that the thought experiment of imagining what someone's medianworld is like would only be interesting for people who are "weird" in our own world, thinking that our world is a medianworld for people who are normal in our world. But [in high-dimensional spaces, _most_ of the probability-mass is concentrated in a "shell" some distance around the mode](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point), because even though the per-unit-hypervolume probability _density_ is greatest at the mode, there's vastly _more_ hypervolume in the hyperspace around it. The upshot is that typical people are atypical along _some_ dimensions, so normies can play the medianworld game, too.

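(The footnote's shell claim is easy to check numerically. As a sketch, using only the standard library: draw points from a standard Gaussian in _d_ dimensions and look at their distances from the mode at the origin; the distances cluster tightly around √_d_, and for large _d_, no sample lands anywhere near the mode, even though that's where the density peaks.)

```python
import math
import random

# Sketch of the "typical set" phenomenon: in a d-dimensional standard
# Gaussian, density peaks at the mode (the origin), but sampled points
# concentrate in a thin shell at distance ~sqrt(d) from it, because the
# hypervolume at radius r grows like r^(d-1).
def sample_distances(d, n=300, seed=0):
    """Distances from the origin of n draws from N(0, I_d)."""
    rng = random.Random(seed)
    return [
        math.sqrt(sum(rng.gauss(0, 1) ** 2 for _ in range(d)))
        for _ in range(n)
    ]

for d in (2, 100, 2500):
    dists = sample_distances(d)
    mean = sum(dists) / len(dists)
    # Mean distance hugs sqrt(d); for large d, not one of the 300 samples
    # comes anywhere close to the mode.
    print(f"d={d}: mean distance {mean:.1f} (sqrt(d)={math.sqrt(d):.1f}), "
          f"closest sample {min(dists):.1f}")
```

(The distance from the origin follows a chi distribution with _d_ degrees of freedom, whose mean is ≈ √_d_ with standard deviation bounded by a constant, which is why the shell stays thin as _d_ grows.)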
(I asked Anna how Yudkowsky could stand the Glowfic people. She said she thought Eliezer could barely stand anyone. That makes sense, I said.)

Everyone in dath ilan receives rationality training from childhood, but knowledge and training deemed psychologically hazardous to the general population is safeguarded by an order of [Keepers of Highly Unpleasant Things it is Sometimes Necessary to Know](https://www.glowfic.com/replies/1612937#reply-1612937). AGI research takes place in a secret underground city; some unspecified form of social engineering steers the _hoi polloi_ away from thinking about the possibility of AI.

Something that annoyed me about the portrayal of dath ilan was their incredibly casual attitude towards hiding information for some alleged greater good, seemingly without considering that [there are benefits and not just costs to people knowing things](http://benjaminrosshoffman.com/humility-argument-honesty/).

You can, of course, make up a sensible [Watsonian](https://tvtropes.org/pmwiki/pmwiki.php/Main/WatsonianVersusDoylist) rationale for this. A world with much smarter people is more "volatile"; with more ways for criminals and terrorists to convert knowledge into danger, maybe you _need_ more censorship just to prevent Society from blowing up.

I'm more preoccupied by a [Doylist](https://tvtropes.org/pmwiki/pmwiki.php/Main/WatsonianVersusDoylist) interpretation—that dath ilan's obsessive secret-Keeping reflects something deep about how the Yudkowsky of the current year relates to speech and information, in contrast to the Yudkowsky who wrote the Sequences. The Sequences had encouraged you—yes, _you_, the reader—to be as rational as possible. In contrast, the dath ilan mythos seems to portray advanced rationality as dangerous knowledge that people need to be protected from. ["The universe is not so dark a place that everyone needs to become a Keeper to ensure the species's survival,"](https://glowfic.com/replies/1861879#reply-1861879) we're told. "Just dark enough that some people ought to."

Someone at the 2021 Event Horizon Independence Day party had told me that I had been misinterpreting the "Speak the truth, even if your voice trembles" slogan from the Sequences. I had interpreted the slogan as suggesting the importance of speaking the truth _to other people_ (which I think is what "speaking" is usually about), but my interlocutor said it was about, for example, being able to speak the truth aloud in your own bedroom, to yourself. I think some textual evidence for my interpretation can be found in Daria's ending to ["A Fable of Science and Politics"](https://www.lesswrong.com/posts/6hfGNLf4Hg5DXqJCF/a-fable-of-science-and-politics):

-> Daria, once Green, tried to breathe amid the ashes of her world. _I will not flinch_, Daria told herself, _I will not look away_. She had been Green all her life, and now she must be Blue. Her friends, her family, would turn from her. _Speak the truth, even if your voice trembles_, her father had told her; but her father was dead now, and her mother would never understand. Daria stared down the calm blue gaze of the sky, trying to accept it, and finally her breathing quietened. _I was wrong_, she said to herself mournfully; _it's not so complicated, after all_. She would find new friends, and perhaps her family would forgive her ... or, she wondered with a tinge of hope, rise to this same test, standing underneath this same sky? "The sky is blue," Daria said experimentally, and nothing dire happened to her; but she couldn't bring herself to smile. Daria the Blue exhaled sadly, and went back into the world, wondering what she would say.

Daria takes it as a given that she needs to be open about her new blue-sky belief, even though it's socially costly to herself and to her loved ones; the rationalist wisdom from her late father did _not_ say to go consult a priest or a Keeper to check whether telling everyone about the blue sky is a good idea.[^other-endings] I think this reflects the culture of _Overcoming Bias_ in 2006 valuing the existence of a shared social reality that reflects actual reality: the conviction that it's both possible and desirable for people to rise to the same test, standing underneath the same sky.

[^other-endings]: Even Eddin's ending, which portrays Eddin as more concerned with consequences than honesty, has him "trying to think of a way to prevent this information from blowing up the world", rather than trying to think of a way to suppress the information, in contrast to how Charles, in his ending, _immediately_ comes up with the idea to block off the passageway leading to the aboveground. Daria and Eddin are clearly written as "rationalists"; the deceptive strategy only comes naturally to the non-rationalist Charles. (Although you could Watsonianly argue that Eddin is just thinking longer-term than Charles: blocking off _this_ passageway and never speaking a word of it to another soul, won't prevent someone from finding some other passage to the aboveground, eventually.)

In contrast, the culture of dath ilan does not seem to particularly value people _standing under the same sky_.

For example, we are told of an Ordinary Merrin Conspiracy centered around a famous medical technician with a psychological need to feel unimportant, of whom ["everybody in Civilization is coordinating to pretend around her"](https://www.glowfic.com/replies/1764946#reply-1764946) that her achievements are nothing special, which is deemed to be kindness to her. It's like a reverse [Emperor Norton](https://en.wikipedia.org/wiki/Emperor_Norton) situation. (Norton was ordinary, but everyone around him colluded to make him think he was special; Merrin is special, but everyone around her colludes to make her think she's ordinary.)

But _as_ a rationalist, I condemn the Ordinary Merrin Conspiracy as _morally wrong_, for the same [reasons I condemn the Emperor Norton Conspiracy](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/#emperor-norton). As [it was taught to me on _Overcoming Bias_ back in the 'aughts](https://www.lesswrong.com/posts/HYWhKXRsMAyvRKRYz/you-can-face-reality): what's true is already so. Denying it won't make it better. Acknowledging it won't make it worse. And _because_ it is true, it is what is there to be interacted with. Anything untrue isn't there to be lived. People can stand what is true, _because they are already doing so_.

In [the story about how Merrin came to the attention of dath ilan's bureau of Exception Handling](https://glowfic.com/posts/6263), we see the thoughts of a Keeper, Rittaen, who talks to Merrin. We're told that the discipline of modeling people mechanistically rather than [through empathy](https://www.lesswrong.com/posts/NLMo5FZWFFq652MNe/sympathetic-minds) is restricted to Keepers to prevent the risk of ["turning into an exceptionally dangerous psychopath"](https://glowfic.com/replies/1862201#reply-1862201). Rittaen [uses his person-as-machine Sight](https://glowfic.com/replies/1862204#reply-1862204) to infer that Merrin was biologically predisposed to learn to be afraid of having too much status.
-
-Notwithstanding that Rittaen can be Watsonianly assumed to have detailed neuroscience skills that the author Doylistically doesn't know how to write, I am entirely unimpressed by the assertion that this idea is somehow _dangerous_, a secret that only Keepers can bear, rather than something _Merrin herself should be clued into_. "It's not [Rittaen's] place to meddle just because he knows Merrin better than Merrin does," we're told.
-
-In the same story, an agent from Exception Handling [tells Merrin that the bureau's Fake Conspiracy section is running an operation to plant evidence that Sparashki (the fictional alien Merrin happens to be dressed up as) are real](https://glowfic.com/replies/1860952#reply-1860952), and asks Merrin not to contradict this, and Merrin just ... goes along with it. It's in-character for Merrin to go along with it, because she's a pushover. My question is, why is it okay that Exception Handling has a Fake Conspiracies section, any more than it would have been if FTX or Enron explicitly had a Fake Accounting department? (Because dath ilan are the designated good guys? Well, so was FTX.)
-
-As another notable example of dath ilan hiding information for the alleged greater good, in Golarion, Keltham discovers that he's a sexual sadist, and deduces that Civilization has deliberately prevented him from realizing this, because there aren't enough corresponding masochists to go around in dath ilan. Having concepts for "sadism" and "masochism" as variations in human psychology would make sadists like Keltham sad about the desirable sexual experiences they'll never get to have, so Civilization arranges for them to _not be exposed to knowledge that would make them sad, because it would make them sad_ (!!).
-
-It did not escape my notice that when "rationalist" authorities _in real life_ considered public knowledge of some paraphilia to be an infohazard (ostensibly for the benefit of people with that paraphilia), I _didn't take it lying down_.
-
-I had only intended to comment in passing on this parallel between dath ilan's sadism/masochism coverup and the autogynephilia coverup I had fought in real life, rather than devote any more detailed analysis to it in the present memoir; but as I was having trouble focusing on my own writing in September 2022, I ended up posting some critical messages about dath ilan's censorship regime in the "Eliezerfic" Discord server for reader discussion of _Planecrash_, using the masochism coverup as my central example.
-
-What happens, I asked, to the occasional dath ilani free speech activists, with their eloquent manifestos arguing that Civilization would be better off coordinating on maps that reflect the territory, rather than coordinating to be a Keeper-managed zoo? (They _had_ to exist: in a medianworld centered on Yudkowsky, there are going to be a few weirdos who are +2.5 standard deviations on "speak the truth, even if your voice trembles" and −2.5 standard deviations on love of clever plots; this seems less weird than negative utilitarians, who were [established to exist](https://www.glowfic.com/replies/1789623#reply-1789623).) I _assumed_ they get dealt with in the end, but there had to be an interesting story about someone who starts out whistleblowing small lies (which Exception Handling allows; they think it's cute, and it's "priced in" to the game they're playing), and then just keeps _escalating and escalating and escalating_ until Governance decides to unperson him.
-
-Although Yudkowsky participated in the server, I had reasoned that my participation didn't violate my previous intent not to bother him anymore, because it was a publicly-linked Discord server with hundreds of members. Me criticizing the story for the benefit of the _other_ 499 people in the chat room wouldn't generate a notification _for him_, the way it would if I sent him an email or replied to him on Twitter.
-
-In the #dath-ilan channel of the server, Yudkowsky elaborated on the reasoning for the masochism coverup:
-
-> altruistic sadists would if-counterfactually-fully-informed prefer not to know, because Civilization is capped on the number of happy sadists. even if you can afford a masochist, which requires being very rich, you're buying them away from the next sadist to whom masochists were previously just barely affordable
-
-In response to a question about how frequent sadism is among Keepers, Yudkowsky wrote:
-
-> I think they're unusually likely to be aware, nonpracticing potential sexual sadists. Noticing that sort of thing about yourself, and then not bidding against the next sadist over for the limited masochist supply, and instead just operating your brain so that it doesn't hurt much to know what you can't have, is exactly the kind of cost you're volunteering to take on when you say you wanna be a Keeper.
-> that's archetypally exactly The Sort Of Thing Keepers Do And Are
-
-> They choose not to, not just out of consideration for the next person in line, but because not harming the next person in line is part of the explicit bargain of becoming a Keeper.
-> Like, this sort of thing is exactly what you're signing up for when you throw yourself on the bounded rationality grenade.
-> Let the truth destroy what it can—but in you, not in other people.
-
-I objected (to the room, I told myself, not technically violating my prior intent to not bother Yudkowsky himself anymore) that "Let the truth destroy what it can—in yourself, not in other people" is such an _incredibly_ infantilizing philosophy. It's a meme that optimizes for shaping people (I know, _other_ people) into becoming weak, stupid, and unreflective, like Thellim's impression of Jane Austen characters. I expect people on Earth—not even "rationalists", just ordinary adults—to be able to cope with ... learning facts about psychology that imply that there are desirable sexual experiences they won't get to have.
-
-A user called Numendil insightfully pointed out that dath ilani might be skeptical of an Earthling saying that an unpleasant aspect of our existence is actually fine, for the same reason we would be skeptical of a resident of Golarion saying that; it makes sense for people from richer civilizations to look "spoiled" to people from poorer ones.
-
-Other replies were more disturbing. One participant wrote:
-
-> I think of "not in other people" not as "infantilizing", but as recognizing independent agency. You don't get to do harm to other people without their consent, whether that is physical or pychological.
-
-I pointed out that this obviously applies to, say, religion. Was it wrong to advocate for atheism in a religious Society, where robbing someone of their belief in God might be harming them?
-
-"Every society strikes a balance between protectionism and liberty," someone said. "This isn't news."
-
-It's not news about _humans_, I conceded. It was just—I thought people who were fans of Yudkowsky's writing in 2008 had a reasonable expectation that the dominant messaging in the local subculture would continue in 2022 to be _in favor_ of telling the truth and _against_ benevolently intended Noble Lies. It ... would be interesting to know why that changed.
-
-Someone else said:
-
-> dath ilan is essentially a paradise world. In a paradise world, people have the slack to make microoptimisations like that, to allow themselves Noble Lies and not fear for what could be hiding in the gaps. Telling the truth is a heuristic for this world where Noble Lies are often less Noble than expected and trust is harder to come by.
-
-I said that I thought people were missing this idea that the reason "truth is better than lies; knowledge is better than ignorance" is such a well-performing injunction in the real world (despite the fact that there's no law of physics preventing lies and ignorance from having beneficial consequences), is because it protects against unknown unknowns. Of course an author who wants to portray an ignorance-maintaining conspiracy as being for the greater good, can assert by authorial fiat whatever details are needed to make it all turn out for the greater good, but _that's not how anything works in real life_.
-
-I started a new thread to complain about the attitude I was seeing (Subject: "Noble Secrets; Or, Conflict Theory of Optimization on Shared Maps"). When fiction in this world, _where I live_, glorifies Noble Lies, that's a cultural force optimizing for making shared maps less accurate, I explained. As someone trying to make shared maps _more_ accurate, this force was hostile to me and mine. I understood that secrets and lies are different, but if you're a consequentialist thinking in terms of what kinds of optimization pressures are being applied to shared maps, it's the same issue: I'm trying to steer _towards_ states of the world where people know things, and the Keepers of Noble Secrets are trying to steer _away_ from states of the world where people know things. That's a conflict. I was happy to accept Pareto-improving deals to make the conflict less destructive, but I wasn't going to pretend the pro-ignorance forces were my friends just because they self-identify as "rationalists" or "EA"s. I was willing to accept secrets around nuclear or biological weapons, or AGI, on "better ignorant than dead" grounds, but the "protect sadists from being sad" thing was _just_ coddling people who can't handle the truth, which made _my_ life worse.
-
-I wasn't buying the excuse that secret-Keeping practices that wouldn't be OK on Earth were somehow OK on dath ilan, which was asserted by authorial fiat to be sane and smart and benevolent enough to make it work. Or if I couldn't argue with authorial fiat: the reasons why it would be bad on Earth (even if it wouldn't be bad on dath ilan) are reasons why _fiction about dath ilan is bad for Earth_.
-
-And just—back in the 'aughts, Robin Hanson had this really great blog called _Overcoming Bias_. (You probably haven't heard of it, I said.) I wanted that _vibe_ back, of Robin Hanson's blog in 2008—the will to _just get the right answer_, without all this galaxy-brained hand-wringing about who the right answer might hurt.
-
-I would have expected a subculture descended from the memetic legacy of Robin Hanson's blog in 2008 to respond to that tripe about protecting people from being destroyed by the truth as a form of "recognizing independent agency" with something like—
-
-"Hi! You must be new here! Regarding your concern about truth doing harm to people, a standard reply is articulated in the post ["Doublethink (Choosing to be Biased)"](https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased). Regarding your concern about recognizing independent agency, a standard reply is articulated in the post ["Your Rationality Is My Business"](https://www.lesswrong.com/posts/anCubLdggTWjnEvBS/your-rationality-is-my-business)."
-
-—or _something like that_. Not that the reply needed to use those particular Sequences links, or _any_ Sequences links; what's important is that someone needed to counter this very obvious [anti-epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology).
-
-And what we actually saw in response to the "You don't get to do harm to other people" message was ... it got 5 "+1" emoji-reactions.
-
-Yudkowsky [chimed in to point out that](/images/yudkowsky-it_doesnt_say_tell_other_people.png) "Doublethink" was about _oneself_ not reasonably being in the epistemic position of knowing that one should lie to oneself. It wasn't about telling the truth to _other_ people.
-
-On the one hand, fair enough. My generalization from "you shouldn't want to have false beliefs for your own benefit" to "you shouldn't want other people to have false beliefs for their own benefit" (and the further generalization to it being OK to intervene) was not in the text of the post itself. It made sense for Yudkowsky to refute my misinterpretation of the text he wrote.
-
-On the other hand—given that Yudkowsky was paying attention to this #overflow thread anyway, I might have naïvely hoped that he would appreciate what I was trying to do?—that, after the issue had been pointed out, he would decide that he _wanted_ his chatroom to be a place where we don't want other people to have false beliefs for their own benefit?—a place that approves of "meddling" in the form of _telling people things_.
-
-The other participants mostly weren't buying what I was selling.
-
-A user called April wrote that "the standard dath ilani has internalized almost everything in the sequences": "it's not that the standards are being dropped[;] it's that there's an even higher standard far beyond what anyone on earth has accomplished". (This received a checkmark emoji-react from Yudkowsky, an indication of his agreement.)
-
-Someone else said he was "pretty leery of 'ignore whether models are painful' as a principle, for Earth humans to try to adopt," and went on to offer some thoughts for Earth. I continued to think it was ridiculous that we were talking of "Earth humans" as if there were any other kind—as if rationality in the Yudkowskian tradition wasn't something to aspire to in real life.
-
-Dath ilan [is _fiction_](https://www.lesswrong.com/posts/rHBdcHGLJ7KvLJQPk/the-logical-fallacy-of-generalization-from-fictional), I pointed out. Dath ilan _does not exist_. It was a horrible distraction to try to see our world through Thellim's eyes and feel contempt over how much better things must be on dath ilan (which, to be clear, again, _does not exist_), when one could be looking through the eyes of an ordinary reader of Robin Hanson's blog in 2008 (the _real_ 2008, which _actually happened_), and seeing everything we've lost.
-
-[As it was taught to me then](https://www.lesswrong.com/posts/iiWiHgtQekWNnmE6Q/if-you-demand-magic-magic-won-t-help): if you demand Keepers, _Keepers won't help_. If I'm going to be happy anywhere, or achieve greatness anywhere, or learn true secrets anywhere, or save the world anywhere, or feel strongly anywhere, or help people anywhere—I may as well do it _on Earth_.
-
-The thread died out soon enough. I had some more thoughts about dath ilan's predilection for deception, of which I typed up some notes for maybe adapting into a blog post later, but there was no point in wasting any more time on Discord.
-
-On 29 November 2022 (four years and a day after the "hill of meaning in defense of validity" Twitter performance that had ignited my rationalist civil war), Yudkowsky remarked about the sadism coverup again:
-
-> Keltham is a romantically obligate sadist. This is information that could've made him much happier if masochists had existed in sufficient supply; Civilization has no other obvious-to-me-or-Keltham reason to conceal it from him.
-
-Despite the fact that there was no point in wasting any more time on Discord, I decided not to resist the temptation to open up the thread again and dump some paragraphs from my notes on the conspiracies of dath ilan.
-
-[TODO: explain my sneakiness theory, shove the anti-semitism into a footnote]
-
-A user called ajvermillion asked why I was being so aggressively negative about dath ilan. He compared it to Keltham's speech about how [people who grew up under a Lawful Evil government were disposed to take a more negative view of paternalism](https://www.glowfic.com/replies/1874754#reply-1874754) than they do in dath ilan, where paternalism works fine because dath ilan is basically benevolent.
-
-[TODO: regrets and wasted time
 * Do I have regrets about this Whole Dumb Story? A lot, surely—it's been a lot of wasted time. But it's also hard to say what I should have done differently; I could have listened to Ben more and lost faith in Yudkowsky earlier, but he had earned a lot of benefit of the doubt?