It's not news about _humans_, I conceded. It was just—I thought people who were fans of Yudkowsky's writing in 2008 had a reasonable expectation that the dominant messaging in the local subculture would continue in 2022 to be _in favor_ of telling the truth and _against_ benevolently intended Noble Lies. It ... would be interesting to know why that changed.

Someone else said:

> dath ilan is essentially a paradise world. In a paradise world, people have the slack to make microoptimisations like that, to allow themselves Noble Lies and not fear for what could be hiding in the gaps. Telling the truth is a heuristic for this world where Noble Lies are often less Noble than expected and trust is harder to come by.

I said that I thought people were missing this idea that the reason "truth is better than lies; knowledge is better than ignorance" is such a well-performing [injunction](https://www.lesswrong.com/posts/dWTEtgBfFaz6vjwQf/ethical-injunctions) in the real world (despite the fact that there's no law of physics preventing lies and ignorance from having beneficial consequences) is that [it protects against unknown unknowns](https://www.lesswrong.com/posts/E7CKXxtGKPmdM9ZRc/of-lies-and-black-swan-blowups). Of course an author who wants to portray an ignorance-maintaining conspiracy as being for the greater good can assert by authorial fiat whatever details are needed to make it all turn out for the greater good, but _that's not how anything works in real life_.

I started a new thread to complain about the attitude I was seeing (Subject: "Noble Secrets; Or, Conflict Theory of Optimization on Shared Maps"). When fiction in this world, _where I live_, glorifies Noble Lies, that's a cultural force optimizing for making shared maps less accurate, I explained. As someone trying to make shared maps _more_ accurate, this force was hostile to me and mine. I understood that "secrets" and "lies" are not the same thing, but if you're a consequentialist thinking in terms of what kinds of optimization pressures are being applied to shared maps, [it's the same issue](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist): I'm trying to steer _towards_ states of the world where people know things, and the Keepers of Noble Secrets are trying to steer _away_ from states of the world where people know things. That's a conflict. I was happy to accept Pareto-improving deals to make the conflict less destructive, but I wasn't going to pretend the pro-ignorance forces were my friends just because they self-identified as "rationalists" or "EA"s. I was willing to accept secrets around nuclear or biological weapons, or AGI, on "better ignorant than dead" grounds, but the "protect sadists from being sad" thing wasn't a threat to anyone's life; it was _just_ coddling people who can't handle reality, which made _my_ life worse.

I wasn't buying the excuse that secret-Keeping practices that wouldn't be okay on Earth were somehow okay on dath ilan, which was asserted by authorial fiat to be sane and smart and benevolent enough to make it work. Alternatively, if I couldn't argue with authorial fiat: the reasons why it would be bad on Earth (even if it wouldn't be bad in the author-assertion paradise of dath ilan) are reasons why _fiction about dath ilan is bad for Earth_.

And just—back in the 'aughts, I said, Robin Hanson had this really great blog called _Overcoming Bias_. (You probably haven't heard of it.) I wanted that _vibe_ back, of Robin Hanson's blog in 2008—the will to _just get the right answer_, without all this galaxy-brained hand-wringing about who the right answer might hurt.

(_Overcoming Bias_ had actually been a group blog then, but I was enjoying the æsthetic of saying "Robin Hanson's blog" (when what I had actually loved about _Overcoming Bias_ was Yudkowsky's Sequences) as a way of signaling contempt for the Yudkowsky of the current year.)

I would have expected a subculture descended from the memetic legacy of Robin Hanson's blog in 2008 to respond to that tripe about protecting people from the truth being a form of "recognizing independent agency" with something like—

"Hi! You must be new here! Regarding your concern about truth doing harm to people, a standard reply is articulated in the post ["Doublethink (Choosing to be Biased)"](https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased). Regarding your concern about recognizing independent agency, a standard reply is articulated in the post ["Your Rationality Is My Business"](https://www.lesswrong.com/posts/anCubLdggTWjnEvBS/your-rationality-is-my-business)."

—or _something like that_. Not that the reply needed to use those particular Sequences links, or _any_ Sequences links; what's important is that someone needed to counter this very obvious [anti-epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology).

And what we actually saw in response to the "You don't get to do harm to other people" message was ... it got 5 "+1" emoji-reactions.

Yudkowsky [chimed in to point out that](/images/yudkowsky-it_doesnt_say_tell_other_people.png) "Doublethink" was about _oneself_ not reasonably being in the epistemic position of knowing that one should lie to oneself. It wasn't about telling the truth to _other_ people.

On the one hand, fair enough. My generalization from "you shouldn't want to have false beliefs for your own benefit" to "you shouldn't want other people to have false beliefs for their own benefit" (and the further generalization to it being okay to intervene) was not in the text of the post itself. It made sense for Yudkowsky to refute my misinterpretation of the text he wrote.

On the other hand—given that he was paying attention to this #overflow thread anyway, I might have naïvely hoped that he would appreciate what I was trying to do?—that, after the issue had been pointed out, he would decide that he _wanted_ his chatroom to be a place where we don't want other people to have false beliefs for their own benefit?—a place that approves of "meddling" in the form of _telling people things_.

The other chatroom participants mostly weren't buying what I was selling.

A user called April wrote that "the standard dath ilani has internalized almost everything in the sequences": "it's not that the standards are being dropped[;] it's that there's an even higher standard far beyond what anyone on earth has accomplished". (This received a checkmark emoji-react from Yudkowsky, an indication of his agreement/endorsement.)

Someone else said he was "pretty leery of 'ignore whether models are painful' as a principle, for Earth humans to try to adopt," and went on to offer some thoughts for Earth. I continued to maintain that it was ridiculous that we were talking of "Earth humans" as if there were any other kind—as if rationality in the Yudkowskian tradition wasn't something to aspire to in real life.

Dath ilan [is _fiction_](https://www.lesswrong.com/posts/rHBdcHGLJ7KvLJQPk/the-logical-fallacy-of-generalization-from-fictional), I pointed out. Dath ilan _does not exist_. I thought it was a horrible distraction to try to see our world through Thellim's eyes and feel contempt over how much better things must be on dath ilan (which, to be clear, again, _does not exist_), when one could be looking through the eyes of an ordinary reader of Robin Hanson's blog in 2008 (the _real_ 2008, which _actually happened_), and seeing everything we've lost.

[As it was taught to me then](https://www.lesswrong.com/posts/iiWiHgtQekWNnmE6Q/if-you-demand-magic-magic-won-t-help): if you demand Keepers, _Keepers won't help_. If I'm going to be happy anywhere, or achieve greatness anywhere, or learn true secrets anywhere, or save the world anywhere, or feel strongly anywhere, or help people anywhere—I may as well do it _on Earth_.

The thread died out soon enough. I had some more thoughts about dath ilan's predilection for deception, of which I typed up some notes for maybe adapting into a blog post later, but there was no point in wasting any more time on Discord.

On 29 November 2022 (four years and a day after the "hill of meaning in defense of validity" Twitter performance that had ignited my rationalist civil war), Yudkowsky remarked about the sadism coverup again:

> Keltham is a romantically obligate sadist. This is information that could've made him much happier if masochists had existed in sufficient supply; Civilization has no other obvious-to-me-or-Keltham reason to conceal it from him.

Despite the fact that there was no point in wasting any more time on Discord, I decided not to resist the temptation to open up the thread again and dump some paragraphs from my notes on the conspiracies of dath ilan.

If we believe that [IQ research validates the "Jews are clever" stereotype](https://web.mit.edu/fustflum/documents/papers/AshkenaziIQ.jbiosocsci.pdf), I wondered, is there a distinct (albeit probably correlated) "enjoying deception" trait that validates the "Jews are sneaky" stereotype? If dath ilan is very high in this "sneakiness" trait (relative to Earth Jews), that would help explain all the conspiracies![^edgy-anti-semitism]

[^edgy-anti-semitism]: It probably would have been possible to bring up the sneakiness-trait hypothesis in a less edgy way, but I didn't care to.

Not-actually-plausible conspiracies that everyone is in on (like "Sparashki are real") are a [superstimulus](https://www.lesswrong.com/posts/Jq73GozjsuhdwMLEG/superstimuli-and-the-collapse-of-western-civilization) like zero-calorie sweetener: engineered to let everyone enjoy the thrill of lying, without doing any real damage to shared maps.