+Ajvermillion was still baffled at my skepticism: if the author specifies that the world of the story is simple in this-and-such direction, on what grounds could I _disagree_?
+
+I admitted, again, that there was a sense in which I couldn't argue with authorial fiat. But I thought that an author's choice of assumptions reveals something about what they think is true in our world, and commenting on that should be fair game for literary critics. Suppose someone wrote a story and said, "in the world portrayed in this story, everyone is super-great at _kung fu_, and they could beat up everyone from our Earth, but they never have to practice at all."
+
+(Yudkowsky retorted, "...you realize you're describing like half the alien planets in comic books? when did Superman ever get depicted as studying kung fu?" I wish I had thought to admit that, yes, I _did_ hold Eliezer Yudkowsky to a higher standard of consilient worldbuilding than DC Comics. Would he rather I _didn't_?)
+
+Something about innate _kung fu_ world seems fake in a way that seems like a literary flaw. It's not just about plausibility. Innate _kung fu_ skills are scientifically plausible[^instinct] in a way that faster-than-light travel is not. Fiction incorporates unrealistic elements in order to tell a story that has relevance to real human lives. Throwing faster-than-light travel into the universe so that you can do a [space opera](https://tvtropes.org/pmwiki/pmwiki.php/Main/SpaceOpera) doesn't make the _people_ fake in the way that Superman's fighting skills are fake.
+
+[^instinct]: All sorts of other instinctual behaviors exist in animals; I don't see why skills humans have to study for years as a "martial art" couldn't be coded into the genome.
+
+Similarly, a world that's claimed by authorial fiat to be super-great at epistemic rationality, but where the people don't have a will-to-truth stronger than their will-to-happiness, felt fake to me. I couldn't _prove_ that it was fake. I agreed with Harmless's case that, _technically_, as far as the Law went, you could build a Civilization or a Friendly AI to see all the ugly things that you preferred not to see.
+
+But if you could—would you? And more importantly, if you would—could you?
+
+It was possible that the attitude I was evincing here was just a difference between the eliezera out of dath ilan and the Zackistani from my medianworld, and that there's nothing more to be said about it. But I didn't think the thing was a _genetic_ trait of the Zackistani! _I_ got it from spending my early twenties obsessively re-reading blog posts that said things like, ["I believe that it is right and proper for me, as a human being, to have an interest in the future [...] One of those interests is the human pursuit of truth [...] I wish to strengthen that pursuit further, in this generation."](https://www.lesswrong.com/posts/anCubLdggTWjnEvBS/your-rationality-is-my-business)
+
+There were definitely communities on Earth where I wasn't allowed in because of my tendency to shout things from street corners, and I respected those people's right to have a safe space for themselves.
+
+But those communities ... didn't call themselves _rationalists_, weren't _pretending_ to be inheritors of the great tradition of E. T. Jaynes and Robyn Dawes and Richard Feynman. And if they _did_, I think I would have a false advertising complaint against them.
+
+"[The eleventh virtue is scholarship. Study many sciences and absorb their power as your own](https://www.yudkowsky.net/rational/virtues) ... unless a prediction market says that would make you less happy," just didn't have the same ring to it. Neither did "The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth. But higher than both of those, is trusting your Society's institutions to tell you which kinds of knowledge will make you happy"—even if you stipulated by authorial fiat that your Society's institutions are super-competent, such that they're probably right about the happiness thing.
+
+Attempting to illustrate the mood I thought dath ilan was missing, I quoted the scene from _Atlas Shrugged_ where our heroine Dagny expresses a wish to be kept ignorant for the sake of her own happiness and gets shut down by Galt—and Dagny _thanks_ him. (I put Discord's click-to-reveal spoiler blocks around plot-relevant sentences—that'll be important in a few moments.)
+
+> "[...] Oh, if only I didn't have to hear about it! If only I could stay here and never know what they're doing to the railroad, and never learn when it goes!"
+>
+> "You'll have to hear about it," said Galt; it was that ruthless tone, peculiarly his, which sounded implacable by being simple, devoid of any emotional value, save the quality of respect for facts. "You'll hear the whole course of the last agony of Taggart Transcontinental. You'll hear about every wreck. You'll hear about every discontinued train. You'll hear about every abandoned line. You'll hear about the collapse of the Taggart Bridge. Nobody stays in this valley except by a full, conscious choice based on a full, conscious knowledge of every fact involved in his decision. Nobody stays here by faking reality in any manner whatever."
+>
+> She looked at him, her head lifted, knowing what chance he was rejecting. She thought that no man of the outer world would have said this to her at this moment—she thought of the world's code that worshipped white lies as an act of mercy—she felt a stab of revulsion against that code, suddenly seeing its full ugliness for the first time [...] she answered quietly, "Thank you. You're right."
+
+This (probably predictably) failed to resonate with other server participants, who were baffled why I seemed to be appealing to Ayn Rand's authority. But I was actually going for a _reverse_ appeal-to-authority: if _Ayn Rand_ understood that facing reality is virtuous, why didn't the 2020's "rationalists"? Wasn't that undignified? I didn't think the disdain for "Earth people" (again, as if there were any other kind) was justified, when Earth's philosophy of rationality (as exemplified by Ayn Rand or Robert ["Get the Facts"](https://www.goodreads.com/quotes/38764-what-are-the-facts-again-and-again-and-again) Heinlein) was doing better than dath ilan's on this critical dimension.
+
+But if people's souls had been damaged such that they didn't have the "facing reality is virtuous" gear, it wasn't easy to install the gear by talking at them.
+
+Why was I so sure _my_ gear was correct?
+
+I wondered if the issue had to do with what Yudkowsky had [identified as the problem of non-absolute rules](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases#5__Counterargument__The_problem_of_non_absolute_rules_), where not-literally-absolute rules like "Don't kill" or "Don't lie" have to be stated _as if_ they were absolutes in order to register to the human motivational system with sufficient force.
+
+Technically, as a matter of decision theory, "sacred values" are crazy. It's easy to say—and feel with the passion of religious conviction—that it's always right to choose Truth and Life, and that no one could choose otherwise except wrongly, in the vile service of Falsehood and Death. But reality presents us with quantitative choices over uncertain outcomes, in which everything trades off against everything else under the [von Neumann–Morgenstern axioms](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem); if you had to choose between a small, unimportant Truth and the Life of millions, you'd probably choose Life—but more importantly, the very fact that you might have to choose, means that Truth and Life can't both be infinitely sacred to you, and must be measured on a common scale with lesser goods like mere Happiness.
+
+I knew that. The other people in the chatroom knew that. So to the extent that the argument amounted to me saying "Don't lie" (about the existence of masochism), and them saying "Don't lie unless the badness of lying is outweighed by the goodness of increased happiness", why was I so confident that I was in the right, when they were wisely acknowledging the trade-offs under the Law, and I was sticking to my (incoherent) sacred value of Truth? Didn't they obviously have the more sophisticated side of the argument?
+
+The problem was that, in my view, the people who weren't talking about Truth as if it were a sacred value were being _wildly recklessly casual_ about harms from covering things up, as if they didn't see the non-first-order harms _at all_. I felt I had to appeal to the lessons for children about how Lying Is Bad, because if I tried to make a more sophisticated argument about it being _quantitatively_ crazy to cover up psychology facts that make people sad, I would face a brick wall of "authorial fiat declares that the probabilities and utilities are specifically fine-tuned such that ignorance is good".
+
+Even if you specified by authorial fiat that "latent sadists could use the information to decide whether or not to try to become rich and famous" didn't tip the utility calculus in itself, [facts are connected to each other](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies); there were _more consequences_ to the coverup, more ways in which better-informed people could make better decisions than worse-informed people.
+
+What about the costs of all the other recursive censorship you'd have to do to keep the secret? (If a biography mentioned masochism in passing along with many other traits of the subject, you'd need to either censor the paragraphs with that detail, or censor the whole book. Those are real costs, even under a soft-censorship regime where people can give special consent to access "Ill Advised" products.) Maybe latent sadists could console themselves with porn if they knew, or devote their careers to making better sex robots, just as people on Earth with non-satisfiable sexual desires manage to get by. (I _knew some things_ about this topic.) What about dath ilan's "heritage optimization" (eugenics) program? Are they going to try to breed more masochists, or fewer sadists, and who's authorized to know that? And so on.
+
+A user called RationalMoron asked if I was appealing to a terminal value. Did I think people should have accurate self-models even if they don't want to?
+
+Obviously I wasn't going to use a universal quantifier over all possible worlds and all possible minds, but in human practice, yes: people who prefer to believe lies about themselves are doing the wrong thing; people who lie to their friends to keep them happy are doing the wrong thing. People can stand what is true, because they are already doing so. I realized this was a children's lesson without very advanced math, but I thought it was a better lesson than, "Ah, but what if a _prediction market_ says they can't???"
+
+Apparently I struck a nerve.
+
+[TODO: Yudkowsky tests me]
+
+[TODO: derail with Lintamande]
+
+[TODO: knives, and showing myself out]
+
+------
+
+Anyway, that—briefly (I mean it)—is the Whole Dumb Story about how I wasted the last seven years of my life. It's probably not that interesting? Life goes on—for now. My dayjob contract expired at the end of 2022. In 2023, I've been finishing up this memoir, and posting some other ideas to _Less Wrong_. (I got into another slapfight about me being un-collaborative, which is not interesting enough to summarize.)
+
+After this, the AI situation is looking worrying enough that I'm thinking I should try to do some more direct x-risk-reduction work, although I haven't definitely selected any particular job or project. (It probably won't matter, but it will be dignified.) Now that the shape of the threat is on the horizon, I think I'm less afraid of being directly involved. Something about having large language models to study in the 'twenties is—grounding, compared to the superstitious fears of the paperclip boogeyman of my nightmares in the 'teens.
+
+Like all intellectuals, as a teenager I imagined that I would write a book. It was always going to be about gender, but I was vaguely imagining a novel, which never got beyond vague imaginings. That was before the Sequences. I'm 35 years old now. I think my intellectual life has succeeded in ways I didn't know how to imagine, before. I think my past self would be proud of this blog—140,000 words of blog posts stapled together is _morally_ a book—once he got over the shock of heresy.