+> I think of "not in other people" not as "infantilizing", but as recognizing independent agency. You don't get to do harm to other people without their consent, whether that is physical or psychological.
+
+I pointed out that this obviously applies to, say, religion. Was it wrong to advocate for atheism in a religious society, where robbing someone of their belief in God might be harming them?
+
+"Every society strikes a balance between protectionism and liberty," someone said. "This isn't news."
+
+It's not news about _humans_, I conceded. It was just—I thought people who were fans of Yudkowsky's writing in 2008 had a reasonable expectation that the dominant messaging in the local subculture would continue in 2022 to be _in favor_ of telling the truth and _against_ benevolently intended noble lies. It ... would be interesting to know why that changed.
+
+I started a new thread for my topic (Subject: "Noble Secrets; Or, Conflict Theory of Optimization on Shared Maps"). It died out after a couple days, and I reopened it later in response to more discussion of the masochism coverup.
+
+Yudkowsky made an appearance. (After he replied to someone else, I remarked parenthetically that his appearance made me think I should stop wasting time snarking in his fiction server and just finish my memoir already.) We had a brief back-and-forth:
+
+> **Eliezer** — 11/29/2022 10:33 PM
+> the main thing I'd observe contrary to Zack's take here, is that Keltham thought that not learning about masochists he can never have, was obviously in retrospect what he'd have wanted Civilization to do, or do unless and until Keltham became rich enough to afford a masochist and then he could be told
+> in other words, Keltham thought he was obviously being treated the way that counterfactual fully-informed Keltham would have paid Governance to treat not-yet-informed Keltham
+> that this obeys the social contract that Keltham thought he had, is part of why Keltham is confident that the logic of this particular explanation holds together
+> **zackmdavis** — 11/29/2022 10:35 PM
+> the level of service that Keltham is expecting is _not the thing I learned from Robin Hanson's blog in 2008_
+> **Eliezer** — 11/29/2022 10:36 PM
+> I am sorry that some of the insane people I attracted got together and made each other more insane and then extensively meta-gaslit you into believing that everyone generally and me personally was engaging in some kind of weird out-in-the-open gaslighting that you could believe in if you attached least-charitable explanations to everything we were doing
+
+It was pretty annoying that Yudkowsky was still attributing my grievances to Michael's malign influence—as if the gender identity revolution was something I would otherwise have just taken lying down. In the counterfactual where Michael had died in 2015, I think something like my February 2017 breakdown would likely have happened anyway. (Between August 2016 and January 2017, I sent Michael 14 emails, met with him once, and watched 60% of South Park season 19 at his suggestion, so he was _an_ influence on my thinking during that period, but not a disproportionately large one compared to everything else I was doing at the time.) How would I have later reacted to the November 2018 "hill of meaning" Tweets (assuming they weren't butterfly-effected away in this counterfactual)? It's hard to say. Maybe, if that world's analogue of my February 2017 breakdown had gone sufficiently badly (with no Michael to visit me in the psych ward or help me make sense of things afterwards), I would have already been a broken man, and not even sent Yudkowsky an email. In any case, I feel very confident that my understanding of the behavior of "everyone generally and [Yudkowsky] personally" would not have been _better_ without Michael _et al._'s influence.
+
+> [cont'd]
+> you may recall that this blog included something called the "Bayesian Conspiracy"
+> they won't tell you about it, because it interferes with the story they were trying to drive you insaner with, but it's so
+> **zackmdavis** — 11/29/2022 10:37 PM
+> it's true that the things I don't like about modern Yudkowsky were still there in Sequences-era Yudkowsky, but I think they've gotten _worse_
+> **Eliezer** — 11/29/2022 10:39 PM
+> well, if your story is that I was always a complicated person, and you selected some of my posts and liked the simpler message you extracted from those, and over time I've shifted in my emphases in a way you don't like, while still having posts like Meta-Honesty and so on... then that's a pretty different story than the one you were telling in this Discord channel, like, just now. today.
+
+Is it, though? The "always a complicated person [who has] shifted in [his] emphases in a way [I] don't like" story was true, of course, but it elided the substantive reasons _why_ I didn't like the new emphases, which could presumably be evaluated on their own merits.
+
+It's interesting that Yudkowsky listed "still having posts like Meta-Honesty" as an exculpatory factor here. The thing is, I [wrote a _critique_ of Meta-Honesty](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). It was well-received (being [cited as a good example in the introductory post for the 2019 Less Wrong Review](https://www.lesswrong.com/posts/QFBEjjAvT6KbaA3dY/the-lesswrong-2019-review), for instance). I don't think I could have written a similarly impassioned critique of anything from the Sequences era, because the stuff from the Sequences era still looked _correct_ to me. To me, "Meta-Honesty" was evidence _for_ Yudkowsky having relinquished his Art and lost his powers, not evidence that his powers were still intact.
+
+I didn't have that response thought through in real time. At the time, I just agreed:
+
+> **zackmdavis** — 11/29/2022 10:39 PM
+> It is!
+> I'm not done working through the hate-warp
+
+(This being a reference to part of _Planecrash_ in which [Keltham tells Carissa to be aware of her un-dath ilani tendency to feel "hatred that warps reality to be more hateable"](https://www.glowfic.com/replies/1882822#reply-1882822).)
+
+> **Eliezer** — 11/29/2022 10:40 PM
+> so one thing hasn't changed: the message that you, yourself, should always be trying to infer the true truth, off the information you already have.
+> if you know you've got a hate-warp I don't know why you're running it and not trying to correct for it
+> are you in fact also explicitly aware that the people who talk to you a lot about "gaslighting" are, like, insane?
+> **zackmdavis** — 11/29/2022 10:42 PM
+> I'm not really part of Vassar's clique anymore, if that's what you mean
+> **Eliezer** — 11/29/2022 10:44 PM
+> it looks from outside here like they stomped really heavy footprints all over your brain that have not healed or been filled in
+> **zackmdavis** — 11/29/2022 10:49 PM
+> it looks from inside here that the thing I'm not healed from is the thing where, as Oliver Habryka put it, I "should expect that depending on the circumstances community leaders might make up sophisticated stories for why pretty obviously true things are false" ([https://www.lesswrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal?commentId=he8dztSuBBuxNRMSY](https://www.lesswrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal?commentId=he8dztSuBBuxNRMSY)), and Michael and Ben and Jessica were _really_ helpful for orienting me to that particular problem, even if I disagree with them about a lot of other things and they seem crazy in other ways
+> (rule thinkers in, not out)
+
+(I was pleased to get the link to Habryka's comment in front of Yudkowsky, if he hadn't already seen it.)
+
+> **Eliezer** — 11/29/2022 10:55 PM
+> the most harm they did you was to teach you to see malice where you should have seen mortality
+> noninnocent error is meaningfully different from innocent error; and noninnocent error is meaningfully different from malice
+> Keltham deduced the lack of masochists in dath ilan by asking the question, "Why would Civilization have kept this information from me?", _ruling out_ or actually not even thinking of such ridiculous hypotheses as "Because it was fun", and settling on the obvious explanation that explained _why Keltham would have wanted Civilization to do that for him_—masochists not existing or being incredibly rare and unaffordable to him. You looked at this and saw malice everywhere; you couldn't even see _the fictional world_ the author was trying to give you _direct description about_. You didn't say that you disbelieved in the world; you could not see what was being _described_.
+
+(When a literary critic proposes a "dark" interpretation of an author's world, I think it's implied that they're expressing disbelief in the "intended" world; the fact that I was impudently refusing to buy the benevolent interpretation wasn't because I didn't understand it.)
+
+> Hate-warp like this is bad for truth-perception; my understanding of the situation is that it's harm done to you by the group you say you left. I would read this as being a noninnocent error of that group; that they couldn't get what they wanted from people who still had friends outside their own small microculture, and noninnocently then decided that this outer culture was bad and people needed to be pried loose from it. They tried telling some people that this outer culture was gaslighting them and maliciously lying to them and had to be understood in wholly adversarial terms to break free of the gaslighting; that worked on somebody, and made a new friend for them; so their brain noninnocently learned that it ought to use arguments like that again, so they must be true.
+> This is a sort of thing I super did not do because I _understood_ it as a failure mode and Laid My Go Stones Against Ever Actually Being A Cult; I armed people with weapons against it, or tried to, but I was optimistic in my hopes about how much could actually be taught.
+> **zackmdavis** — 11/29/2022 11:20 PM
+> Without particularly defending Vassar _et al._ or my bad literary criticism (sorry), _modeling the adversarial component of non-innocent errors_ (as contrasted to "had to be understood in wholly adversarial terms") seems very important. (Maybe lying is "worse" than rationalizing, but if you can't hold people culpable for rationalization, you end up with a world that's bad for broadly the same reasons that a world full of liars is bad: we can't steer the world to good states if everyone's map is full of falsehoods that locally benefitted someone.)
+> **Eliezer** — 11/29/2022 11:22 PM
+> Rationalization sure is a huge thing! That's why I considered important to discourse upon the science of it, as was then known; and to warn people that there were more complicated tangles than that, which no simple experiment had shown yet.
+> **zackmdavis** — 11/29/2022 11:22 PM
+> yeah
+> **Eliezer** — 11/29/2022 11:23 PM
+> It remains something that mortals do, and if you cut off anybody who's ever done that, you'll be left with nobody. And also importantly, people making noninnocent errors, if you accuse them of malice, will look inside themselves and correctly see that this is not how they work, and they'll stop listening to the (motivated) lies you're telling them about themselves.
+> This also holds true if you make up overly simplistic stories about 'ah yes well you're doing that because you're part of $woke-concept-of-society' etc.
+> **zackmdavis** — 11/29/2022 11:24 PM
+> I think there's _also_ a frequent problem where you try to accuse people of non-innocent errors, and they motivatedly interpret _you_ as accusing malice
+> **Eliezer** — 11/29/2022 11:25 PM
+> Then invent new terminology. I do that all the time when existing terminology fails me.
+> Like I literally invented the term 'noninnocent error' right in this conversation.
+> **zackmdavis** — 11/29/2022 11:27 PM
+> I've tried this, but maybe it wasn't good enough, or I haven't been using it consistently enough: [https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie)
+> I should get ready for bed
+> I will endeavor to edit out the hate-warp from my memoir before publishing, and _probably_ not talk in this server
+> **Eliezer** — 11/29/2022 11:31 PM
+> I think you should worry first about editing the hate-warp out of yourself, but editing the memoir might be useful practice for it. Good night.
+
+It turned out that I was lying about probably not talking in the server anymore. (Hedging with the word "probably" didn't make the claim true, and of course I wasn't _consciously_ lying, but that hardly seems exculpatory.)
+
+The next day, I belatedly pointed out that "Keltham thought that not learning about masochists he can never have, was obviously in retrospect what he'd have wanted Civilization to do" seemed to contradict "one thing hasn't changed: the message that you, yourself, should always be trying to infer the true truth". In the first statement, it didn't sound like Keltham thought it was good that Civilization didn't tell him so that he could figure it out for himself (in accordance with the discipline of "you, yourself, always trying to infer the truth"). It sounded like he was better off not knowing—better off having a _less accurate self-model_ (not having the concept of "obligate romantic sadism"), better off having a _less accurate world-model_ (thinking that masochism isn't real).
+
+In response to someone positing that dath ilani were choosing to be happier but less accurate predictors, I said that I read a blog post once about why you actually didn't want to do that, linking to [an Internet Archive copy of "Doublethink (Choosing to Be Biased)"](https://web.archive.org/web/20080216204229/https://www.overcomingbias.com/2007/09/doublethink-cho.html) from 2008[^hanson-conceit]—at least, that was _my_ attempted paraphrase; it was possible that I'd extracted a simpler message from it than the author intended.
+
+[^hanson-conceit]: I was really enjoying the "Robin Hanson's blog in 2008" conceit.
+
+A user called Harmless explained the loophole. "Doublethink" was pointing out that decisions that optimize the world for your preferences can't come from nowhere: if you avoid painful thoughts in your map, you damage your ability to steer away from painful outcomes in the territory. However, there was no rule that all the information-processing going into decisions that optimize the world for your preferences had to take place in _your brain_ ...
+
+I saw where they were going and completed the thought: you could build a Friendly AI or a Civilization to see all the dirty things for you, that would make you unhappy to have to see yourself.
+
+Yudkowsky clarified his position:
+
+> My exact word choices often do matter: I said that you should always be trying to _infer_ the truth. With the info you already have. In dath ilan if not in Earth, you might decline to open a box labeled "this info will make you permanently dissatisfied with sex" if the box was labeled by a prediction market.
+> Trying to avoid inferences seems to me much more internally costly than declining to click on a spoiler box.
+
+I understood the theory, but I was still extremely skeptical of the practice, assuming the eliezera were even remotely human. Yudkowsky described the practice of "keeping BDSM secret and trying to prevent most sadists from discovering what they are—informing them only when and if they become rich enough or famous enough that they'd have a high probability of successfully obtaining a very rare masochist" as a "basically reasonable policy option that [he] might vote for, not to help the poor dear other people, but to help [his] own counterfactual self."
+
+The problem I saw with this is that becoming rich and famous isn't a purely random exogenous event. In order to make an informed decision about whether or not to put in the effort to try to _become_ rich and famous (as contrasted to choosing a lower-risk or more laid-back lifestyle), you need accurate beliefs about the perks of being rich and famous.
+
+The dilemma of whether to make more ambitious economic choices in pursuit of sexual goals was something that _already_ happens to people on Earth, rather than being hypothetical. I once met a trans woman who spent a lot of her twenties and thirties working very hard to get money for various medical procedures. I think she would be worse off under a censorship regime run by self-styled Keepers who thought it was kinder to prevent _poor people_ from learning about the concept of "transsexualism".
+
+Further discussion established that Yudkowsky was (supposedly) already taking into account that class of distortion on individuals' decisions, but that the empirical setting of probabilities and utilities happened to be such that ignorance came out on top.
+
+I wasn't sure what my wordcount and "diplomacy" "budget limits" for the server were, but I couldn't let go; I kept the thread going on subsequent days. There was something I felt I should be able to convey, if I could just find the right words.
+
+When [Word of God](https://tvtropes.org/pmwiki/pmwiki.php/Main/WordOfGod) says, "trying to prevent most [_X_] from discovering what they are [...] continues to strike me as a basically reasonable policy option", then, separately from the particular value of _X_, I expected people to jump out of their chairs and say, "No! This is wrong! Morally wrong! People can stand what is true about themselves, because they are already doing so!"
+
+And to the extent that I was the only person jumping out of my chair, and there was a party-line response of the form, "Ah, but if it's been decreed by authorial fiat that these-and-such probabilities and utilities take such-and-these values, then in this case, self-knowledge is actually bad under the utilitarian calculus," I wasn't disputing the utilitarian calculus. I was wondering—here I used the "🐛" bug emoji customarily used in Glowfic culture to indicate uncertainty about the right words to use—_who destroyed your souls?_
+
+Yudkowsky replied:
+
+> it feels powerfully relevant to me that the people of whom I am saying this _are eliezera_. I get to decide what they'd want because, unlike with Earth humans, I get to put myself in their shoes. it's plausible to me that the prediction markets say that I'd be sadder if I was exposed to the concept of sadism in a world with no masochists. if so, while I wouldn't relinquish my Art and lose my powers by trying to delude myself about that once I'd been told, I'd consider it a friendly act to keep the info from me—_because_ I have less self-delusional defenses than a standard Earthling, really—and a hostile act to tell me; and if you are telling me I don't get to make that decision for myself because it's evil, and if you go around shouting it from the street corners in dath ilan, then yeah I think most cities don't let you in.
+
+I wish I had thought to ask if he'd have felt the same way in 2008.
+
+Ajvermillion was still baffled at my skepticism: if the author specifies that the world of the story is simple in this-and-such direction, on what grounds could I _disagree_?
+
+I admitted, again, that there was a sense in which I couldn't argue with authorial fiat. But I thought that an author's choice of assumptions reveals something about what they think is true in our world, and commenting on that should be fair game for literary critics. Suppose someone wrote a story and said, "in the world portrayed in this story, everyone is super-great at _kung fu_, and they could beat up everyone from our Earth, but they never have to practice at all."
+
+(Yudkowsky retorted, "...you realize you're describing like half the alien planets in comic books? when did Superman ever get depicted as studying kung fu?" I wish I had thought to admit that, yes, I _did_ hold Eliezer Yudkowsky to a higher standard of consilient worldbuilding than DC Comics. Would he rather I _didn't_?)
+
+Something about innate _kung fu_ world seems fake in a way that seems like a literary flaw. It's not just about plausibility. Fiction often incorporates unrealistic elements in order to tell a story that has relevance to real human lives. Innate _kung fu_ skills are scientifically plausible[^instinct] in a way that faster-than-light travel is not, but throwing faster-than-light travel into the universe so that you can do a [space opera](https://tvtropes.org/pmwiki/pmwiki.php/Main/SpaceOpera) doesn't make the _people_ fake in the way that Superman's fighting skills are fake.
+
+[^instinct]: All sorts of other instinctual behaviors exist in animals; I don't see why skills humans have to study for years as a "martial art" couldn't be coded into the genome.
+
+Maybe it was okay for Superman's fighting skills to be fake from a literary perspective (because realism along that dimension is not what Superman is _about_), but if the Yudkowskian ethos exalted intelligence as ["the power that cannot be removed without removing you"](https://www.lesswrong.com/posts/SXK87NgEPszhWkvQm/mundane-magic), readers had grounds to demand that the dath ilani's thinking skills be real, and a world that's claimed by authorial fiat to be super-great at epistemic rationality, but where the people don't have a will-to-truth stronger than their will-to-happiness, felt fake to me. I couldn't _prove_ that it was fake. I agreed with Harmless's case that, _technically_, as far as the Law went, you could build a Civilization or a Friendly AI to see all the ugly things that you preferred not to see.
+
+But if you could—would you? And more importantly, if you would—could you?
+
+It was possible that the attitude I was evincing here was just a difference between the eliezera out of dath ilan and the Zackistani from my medianworld, and that there was nothing more to be said about it. But I didn't think the thing was a _genetic_ trait of the Zackistani! _I_ got it from spending my early twenties obsessively re-reading blog posts that said things like, ["I believe that it is right and proper for me, as a human being, to have an interest in the future [...] One of those interests is the human pursuit of truth [...] I wish to strengthen that pursuit further, in this generation."](https://www.lesswrong.com/posts/anCubLdggTWjnEvBS/your-rationality-is-my-business)
+
+There were definitely communities on Earth where I wasn't allowed in because of my tendency to shout things from street corners, and I respected those people's right to have a safe space for themselves.
+
+But those communities ... didn't call themselves _rationalists_, weren't _pretending_ to be inheritors of the great tradition of E. T. Jaynes and Robyn Dawes and Richard Feynman. And if they _did_, I think I would have a false advertising complaint against them.
+
+"[The eleventh virtue is scholarship. Study many sciences and absorb their power as your own](https://www.yudkowsky.net/rational/virtues) ... unless a prediction market says that would make you less happy," just didn't have the same ring to it. Neither did "The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth. But higher than both of those, is trusting your Society's institutions to tell you which kinds of knowledge will make you happy"—even if you stipulated by authorial fiat that your Society's institutions are super-competent, such that they're probably right about the happiness thing.
+
+Attempting to illustrate [the mood I thought dath ilan was missing](https://www.econlib.org/archives/2016/01/the_invisible_t.html), I quoted (with Discord's click-to-reveal spoiler blocks around the more plot-relevant sentences) the scene from _Atlas Shrugged_ where our heroine Dagny expresses a wish to be kept ignorant for the sake of her own happiness, and gets shut down by John Galt—and Dagny _thanks_ him.[^atlas-shrugged-ref]
+
+> "[...] Oh, if only I didn't have to hear about it! If only I could stay here and never know what they're doing to the railroad, and never learn when it goes!"