I said that I thought people were missing this idea that the reason "truth is better than lies; knowledge is better than ignorance" is such a well-performing [injunction](https://www.lesswrong.com/posts/dWTEtgBfFaz6vjwQf/ethical-injunctions) in the real world (despite the fact that there's no law of physics preventing lies and ignorance from having beneficial consequences), is because [it protects against unknown unknowns](https://www.lesswrong.com/posts/E7CKXxtGKPmdM9ZRc/of-lies-and-black-swan-blowups). Of course an author who wants to portray an ignorance-maintaining conspiracy as being for the greater good, can assert by authorial fiat whatever details are needed to make it all turn out for the greater good, but _that's not how anything works in real life_.
I started a new thread to complain about the attitude I was seeing (Subject: "Noble Secrets; Or, Conflict Theory of Optimization on Shared Maps"). When fiction in this world, _where I live_, glorifies Noble Lies, that's a cultural force optimizing for making shared maps less accurate, I explained. As someone trying to make shared maps _more_ accurate, this force was hostile to me and mine. I understood that "secrets" and "lies" are not the same thing, but if you're a consequentialist thinking in terms of what kinds of optimization pressures are being applied to shared maps, [it's the same issue](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist): I'm trying to steer _towards_ states of the world where people know things, and the Keepers of Noble Secrets are trying to steer _away_ from states of the world where people know things. That's a conflict. I was happy to accept Pareto-improving deals to make the conflict less destructive, but I wasn't going to pretend the pro-ignorance forces were my friends just because they self-identified as "rationalists" or "EA"s. I was willing to accept secrets around nuclear or biological weapons, or AGI, on "better ignorant than dead" grounds, but the "protect sadists from being sad" thing wasn't a threat to anyone's life; it was _just_ coddling people who can't handle reality, which made _my_ life worse.
I wasn't buying the excuse that secret-Keeping practices that wouldn't be okay on Earth were somehow okay on dath ilan, which was asserted by authorial fiat to be sane and smart and benevolent enough to make it work. Alternatively, if I couldn't argue with authorial fiat: the reasons why it would be bad on Earth (even if it wouldn't be bad in the author-assertion paradise of dath ilan) are reasons why _fiction about dath ilan is bad for Earth_.
And just—back in the 'aughts, I said, Robin Hanson had this really great blog called _Overcoming Bias_. (You probably haven't heard of it.) I wanted that _vibe_ back, of Robin Hanson's blog in 2008—the will to _just get the right answer_, without all this galaxy-brained hand-wringing about who the right answer might hurt.

(_Overcoming Bias_ had actually been a group blog then, but I was enjoying the æsthetic of saying "Robin Hanson's blog" (when what I had actually loved about _Overcoming Bias_ was Yudkowsky's Sequences) as a way of signaling contempt for the Yudkowsky of the current year.)
I would have expected a subculture descended from the memetic legacy of Robin Hanson's blog in 2008 to respond to that tripe about protecting people from the truth being a form of "recognizing independent agency" with something like—
"[The eleventh virtue is scholarship. Study many sciences and absorb their power as your own](https://www.yudkowsky.net/rational/virtues) ... unless a prediction market says that would make you less happy," just didn't have the same ring to it. Neither did "The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth. But higher than both of those, is trusting your Society's institutions to tell you which kinds of knowledge will make you happy"—even if you stipulated by authorial fiat that your Society's institutions are super-competent, such that they're probably right about the happiness thing.
Attempting to illustrate the mood I thought dath ilan was missing, I quoted (with Discord's click-to-reveal spoiler blocks around the more plot-relevant sentences) the scene from _Atlas Shrugged_ where our heroine Dagny expresses a wish to be kept ignorant for the sake of her own happiness and gets shut down by Galt—and Dagny _thanks_ him.
> "[...] Oh, if only I didn't have to hear about it! If only I could stay here and never know what they're doing to the railroad, and never learn when it goes!"
>
A user called RationalMoron asked if I was appealing to a terminal value. Did I think people should have accurate self-models even if they don't want to?
Obviously I wasn't going to use a universal quantifier over all possible worlds and all possible minds, but in human practice, yes: people who prefer to believe lies about themselves are doing the wrong thing; people who lie to their friends to keep them happy are doing the wrong thing. People can stand what is true, because they are already doing so. I realized this was a children's lesson without very advanced math, but I thought it was a better lesson than, "Ah, but what if a _prediction market_ says they can't???" That the eliezera prefer not to know that there are desirable sexual experiences that they can't have, contradicted April's earlier claim (which had received a Word of God checkmark-emoji) that "it's not that the standards are being dropped it's that there's an even higher standard far beyond what anyone on earth has accomplished".
Apparently I struck a nerve. Yudkowsky started "punching back":

> **Eliezer** — 12/08/2022 12:45 PM
> Do zacki have no concept of movie spoilers, such that all movies are just designed not to rely on uncertainty for dramatic tension? Do children have to be locked in individual test rooms because they can't comprehend the concept of refusing to look at other children's answer sheets because it's evidence and you should observe it? Do adults refuse to isolate the children so they can have practice problems, because you can't stop them from learning the answer to skill-building problems, only the legendary evil alien eliezera would do that? Obviously they don't have surprise parties.
> It's noticeably more extreme than the _Invention of Lying_ aliens, who can still have nudity taboos
> I'd also note that I think in retrospect (only after having typed it) that Zack could not have generated these examples of other places where society refrains from observation, and that I think this means I am tracking the thing Zack fears in a way that Zack cannot because his thinking is distorted and he is arguing rather than seeing; and this, not verbally advocating for "truth", is more what respect for truth really is.

I thought the "you could not have generated the answer I just told you" gambit was a pretty dirty argumentative trick on Yudkowsky's part. (Given that I could, how would I be able to prove it?—this was itself a good use-case for spoilers.)

As it happened, however, I _had_ already considered the case of spoilers as a class of legitimate infohazards, and was prepared to testify that I had already thought of it, and explain why hiding spoilers was relevantly morally different from coverups in my view. The previous night, 7 December 2022, I had had a phone call with Anna Salamon,[^evidence-of-independent-generation] in which (I pretty distinctly remember) I had cited dath ilan's practice of letting children figure out heliocentrism for themselves as not being objectionable in the way the sadism/masochism coverup was.

[^evidence-of-independent-generation]: I was lucky to be able to point to Anna as a potential witness to defend myself against the "could not have generated" trick—as a matter of principle, not because I seriously expected anyone to go ask Anna if she remembered the conversation the same way.

    I also mentioned that when I had used spoiler blocks on the _Atlas Shrugged_ quote I had posted upthread, I had briefly considered including some kind of side-remark noting that the spoiler blocks were also a form of information-hiding, but couldn't think of anything funny or relevant enough (which, if my self-report could be trusted, showed that I had independently generated the idea of spoilers being an example of hiding information—but I didn't expect other people to uncritically believe my self-reports).

It seemed like the rationale for avoiding spoilers of movie plots or homework exercises had to do with the outcome being different if you got spoiled: you have a different æsthetic experience if you experience the plot twist in the 90th minute of the movie rather than the fourth paragraph of the _Wikipedia_ article; you learn more by working out the answer to the homework exercise yourself. Dath ilan's sadism/masochism coverup didn't seem to have the same structure: when I try to prove a theorem myself before looking at how the textbook says to do it, it's not because I would be _sad about the state of the world_ if I looked at the textbook; it's because the temporary ignorance of working it out myself results in a stronger state of final knowledge.

That is, the difference between "spoilers" (sometimes useful) and "coverups" (bad) had to do with whether the ignorant person is expected to eventually uncover the hidden information, and whether the ignorant person knows that there's hidden information that they're expected to uncover. In the case of the sadism/masochism coverup (in contrast to the cases of movie spoilers or homework exercises), it seemed like neither of these conditions pertained. (Keltham knows that the Keepers are keeping secrets, but he seems to actively have beliefs about human psychology that imply masochism is implausible; it seems more like he has a false map, rather than a blank spot on his map for the answer to the homework exercise to be filled in.) I thought that was morally relevant.

(I would have hoped that my two previous mentions in the thread of supporting keeping nuclear, bioweapon, and AI secrets had already made it clear that I wasn't against _all_ cases of Society hiding information, but to further demonstrate my ability to generate counterexamples, I mentioned that I would also admit _threats_ as a class of legitimate infohazard: if I'm not a perfect decision theorist, I'm better off if Tony Soprano just doesn't have my email to begin with, if I don't trust myself to calculate when I "should" ignore his demands.)

As for the claim that my thinking was distorted and I was arguing instead of seeing, it was definitely true that I was _motivated to look for_ criticisms of Yudkowsky and dath ilan, for personal reasons outside the scope of the server, and I thought it was great for people to notice this and take it into account. I hoped to nevertheless be competent to only report real criticisms and not fake criticisms. (Whether I succeeded, of course, was up to the reader to decide.)
[TODO: Yudkowsky tests me]