+But those communities ... didn't call themselves _rationalists_, weren't _pretending_ to be inheritors of the great tradition of E. T. Jaynes and Robyn Dawes and Richard Feynman. And if they _did_, I think I would have a false advertising complaint against them.
+
+"[The eleventh virtue is scholarship. Study many sciences and absorb their power as your own](https://www.yudkowsky.net/rational/virtues) ... unless a prediction market says that would make you less happy," just didn't have the same ring to it. Neither did "The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth. But higher than both of those, is trusting your Society's institutions to tell you which kinds of knowledge will make you happy"—even if you stipulated by authorial fiat that your Society's institutions are super-competent, such that they're probably right about the happiness thing.
+
+Attempting to illustrate [the mood I thought dath ilan was missing](https://www.econlib.org/archives/2016/01/the_invisible_t.html), I quoted (with Discord's click-to-reveal spoiler blocks around the more plot-relevant sentences) the scene from _Atlas Shrugged_ where our heroine Dagny expresses a wish to be kept ignorant for the sake of her own happiness, and gets shut down by John Galt—and Dagny _thanks_ him.[^atlas-shrugged-ref]
+
+> "[...] Oh, if only I didn't have to hear about it! If only I could stay here and never know what they're doing to the railroad, and never learn when it goes!"
+>
+> "You'll have to hear about it," said Galt; it was that ruthless tone, peculiarly his, which sounded implacable by being simple, devoid of any emotional value, save the quality of respect for facts. "You'll hear the whole course of the last agony of Taggart Transcontinental. You'll hear about every wreck. You'll hear about every discontinued train. You'll hear about every abandoned line. You'll hear about the collapse of the Taggart Bridge. Nobody stays in this valley except by a full, conscious choice based on a full, conscious knowledge of every fact involved in his decision. Nobody stays here by faking reality in any manner whatever."
+>
+> She looked at him, her head lifted, knowing what chance he was rejecting. She thought that no man of the outer world would have said this to her at this moment—she thought of the world's code that worshipped white lies as an act of mercy—she felt a stab of revulsion against that code, suddenly seeing its full ugliness for the first time [...] she answered quietly, "Thank you. You're right."
+
+[^atlas-shrugged-ref]: In Part Three, Chapter II, "The Utopia of Greed".
+
+This (probably predictably) failed to resonate with other server participants, who were baffled as to why I seemed to be appealing to Ayn Rand's authority.
+
+I was actually going for a _reverse_ appeal-to-authority: if _Ayn Rand_ understood that facing reality is virtuous, why didn't the 2020s "rationalists"? Wasn't that undignified? I didn't think the disdain for "Earth people" (again, as if there were any other kind) was justified, when Earth's philosophy of rationality (as exemplified by Ayn Rand or Robert ["Get the Facts"](https://www.goodreads.com/quotes/38764-what-are-the-facts-again-and-again-and-again) Heinlein) was doing better than dath ilan's on this critical dimension.
+
+But if people's souls had been damaged such that they didn't have the "facing reality is virtuous" gear, it wasn't easy to install the gear by talking at them.
+
+Why was I so sure _my_ gear was correct?
+
+I wondered if the issue had to do with what Yudkowsky had [identified as the problem of non-absolute rules](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases#5__Counterargument__The_problem_of_non_absolute_rules_), where not-literally-absolute rules like "Don't kill" or "Don't lie" have to be stated _as if_ they were absolutes in order to register to the human motivational system with sufficient force.
+
+Technically, as a matter of decision theory, "sacred values" are crazy. It's easy to say—and feel with the passion of religious conviction—that it's always right to choose Truth and Life, and that no one could choose otherwise except wrongly, in the vile service of Falsehood and Death. But reality presents us with quantitative choices over uncertain outcomes, in which everything trades off against everything else under the [von Neumann–Morgenstern axioms](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem); if you had to choose between a small, unimportant Truth and the Life of millions, you'd probably choose Life—but more importantly, the very fact that you might have to choose, means that Truth and Life can't both be infinitely sacred to you, and must be measured on a common scale with lesser goods like mere Happiness.
+
+I knew that. The other people in the chatroom knew that. So to the extent that the argument amounted to me saying "Don't lie" (about the existence of masochism), and them saying "Don't lie unless the badness of lying is outweighed by the goodness of increased happiness", why was I so confident that I was in the right, when they were wisely acknowledging the trade-offs under the Law, and I was sticking to my (incoherent) sacred value of Truth? Didn't they obviously have the more sophisticated side of the argument?
+
+The problem was that, in my view, the people who weren't talking about Truth as if it were a sacred value were being _wildly recklessly casual_ about harms from covering things up, as if they didn't see the non-first-order harms _at all_. I felt I had to appeal to the lessons for children about how Lying Is Bad, because if I tried to make a more sophisticated argument about it being _quantitatively_ crazy to cover up psychology facts that make people sad, I would face a brick wall of "authorial fiat declares that the probabilities and utilities are specifically fine-tuned such that ignorance is good".
+
+Even if you specified by authorial fiat that "latent sadists could use the information to decide whether or not to try to become rich and famous" didn't tip the utility calculus in itself, [facts are connected to each other](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies); there were _more consequences_ to the coverup, more ways in which better-informed people could make better decisions than worse-informed people.
+
+What about the costs of all the other recursive censorship you'd have to do to keep the secret? (If a biography mentioned masochism in passing along with many other traits of the subject, you'd need to either censor the paragraphs with that detail, or censor the whole book. Those are real costs, even under a soft-censorship regime where people can give special consent to access "Ill Advised" products.) Maybe latent sadists could console themselves with porn if they knew, or devote their careers to making better sex robots, just as people on Earth with non-satisfiable sexual desires manage to get by. (I _knew some things_ about this topic.) What about dath ilan's heritage optimization (read: eugenics) program? Are they going to try to breed more masochists, or fewer sadists, and who's authorized to know that? And so on.
+
+A user called RationalMoron asked if I was appealing to a terminal value. Did I think people should have accurate self-models even if they didn't want to?
+
+Obviously I wasn't going to use a universal quantifier over all possible worlds and all possible minds, but in human practice, yes: people who prefer to believe lies about themselves are doing the wrong thing; people who lie to their friends to keep them happy are doing the wrong thing. People can stand what is true, because they are already doing so. I realized that this was a children's lesson without very advanced math, but I thought it was a better lesson than, "Ah, but what if a _prediction market_ says they can't???" The fact that the eliezera prefer not to know that there are desirable sexual experiences they can't have contradicted April's earlier claim (which had received a Word of God checkmark-emoji) that "it's not that the standards are being dropped[;] it's that there's an even higher standard far beyond what anyone on earth has accomplished".
+
+Apparently I struck a nerve. Yudkowsky started "punching back":
+
+> **Eliezer** — 12/08/2022 12:45 PM
+> Do zacki have no concept of movie spoilers, such that all movies are just designed not to rely on uncertainty for dramatic tension? Do children have to be locked in individual test rooms because they can't comprehend the concept of refusing to look at other children's answer sheets because it's evidence and you should observe it? Do adults refuse to isolate the children so they can have practice problems, because you can't stop them from learning the answer to skill-building problems, only the legendary evil alien eliezera would do that? Obviously they don't have surprise parties.
+> It's noticeably more extreme than the _Invention of Lying_ aliens, who can still have nudity taboos
+> I'd also note that I think in retrospect (only after having typed it) that Zack could not have generated these examples of other places where society refrains from observation, and that I think this means I am tracking the thing Zack fears in a way that Zack cannot because his thinking is distorted and he is arguing rather than seeing; and this, not verbally advocating for "truth", is more what respect for truth really is.
+
+I thought the "you could not have generated the answer I just told you" gambit was a pretty dirty argumentative trick on Yudkowsky's part. (Given that I could, how would I be able to prove it?—this was itself a good use-case for concealing spoilers.)
+
+As it happened, however, I _had_ already considered the case of spoilers as a class of legitimate infohazards, and was prepared to testify that I had already thought of it, and explain why I thought hiding spoilers was relevantly morally different from the coverups I was objecting to. The previous night, 7 December 2022, I had had a phone call with Anna Salamon,[^evidence-of-independent-generation] in which I (remembered that I) had cited dath ilan's [practice of letting children figure out heliocentrism for themselves](https://www.glowfic.com/replies/1777588#reply-1777588) as not being objectionable in the way the sadism/masochism coverup was.
+
+[^evidence-of-independent-generation]: I was lucky to be able to point to Anna as a potential witness to defend myself against the "could not have generated" trick—as a matter of principle, not because I seriously expected anyone to care enough to go ask Anna if she remembered the conversation the same way.
+
+ I also mentioned that when I had used spoiler blocks on the _Atlas Shrugged_ quote I had posted upthread, I had briefly considered making some kind of side-remark noting that the spoiler blocks were also a form of information-hiding, but couldn't think of anything funny or relevant enough (which, if my self-report could be trusted, showed that I had independently generated the idea of spoilers being an example of hiding information—but I didn't expect other people to uncritically believe my self-reports).
+
+It seemed like the rationale for avoiding spoilers of movie plots or homework exercises had to do with the outcome being different if you got spoiled: you have a different æsthetic experience if you experience the plot twist in the 90th minute of the movie rather than the fourth paragraph of the _Wikipedia_ article. Dath ilan's sadism/masochism coverup didn't seem to have the same structure: when I try to prove a theorem myself before looking at how the textbook says to do it, it's not because I would be _sad about the state of the world_ if I looked at the textbook; it's because the temporary ignorance of working it out myself results in a stronger state of final knowledge.
+
+That is, the difference between "spoilers" (sometimes useful) and "coverups" (bad) had to do with whether the ignorant person is expected to eventually uncover the hidden information, and whether the ignorant person knows that there's hidden information that they're expected to uncover. In the case of the sadism/masochism coverup (in contrast to the cases of movie spoilers or homework exercises), it seemed like neither of these conditions pertained. (Keltham knows that the Keepers are keeping secrets, but he seems to actively hold beliefs about human psychology that imply masochism is implausible; it seems more like he has a false map, rather than a blank spot on his map for the answer to the homework exercise to be filled in.) I thought that was morally relevant.
+
+(Additionally, I would have hoped that my two previous mentions in the thread of supporting keeping nuclear, bioweapon, and AI secrets should have already made it clear that I wasn't against _all_ cases of Society hiding information, but to further demonstrate my ability to generate counterexamples, I mentioned that I would also admit _threats_ as a class of legitimate infohazard: if I'm not a perfect decision theorist, I'm better off if Tony Soprano just doesn't have my email address to begin with, if I don't trust myself to calculate when I "should" ignore his demands.)
+
+As for the claim that my thinking was distorted and I was arguing instead of seeing, it was definitely true that I was _motivated to look for_ criticisms of Yudkowsky and dath ilan, for personal reasons outside the scope of the server, and I thought it was great for people to notice this and take it into account. I hoped to nevertheless be competent to only report real criticisms and not fake criticisms. (Whether I succeeded, of course, was up to the reader to decide.)
+
+Yudkowsky replied: