The other chatroom participants mostly weren't buying what I was selling.
A user called April wrote that "the standard dath ilani has internalized almost everything in the sequences": "it's not that the standards are being dropped[;] it's that there's an even higher standard far beyond what anyone on earth has accomplished". (This received a checkmark emoji-react from Yudkowsky, an indication of his agreement/endorsement.)
Someone else said he was "pretty leery of 'ignore whether models are painful' as a principle, for Earth humans to try to adopt," and went on to offer some thoughts for Earth. I continued to maintain that it was ridiculous that we were talking of "Earth humans" as if there were any other kind—as if rationality in the Yudkowskian tradition wasn't something to aspire to in real life.
It turned out that I was lying about probably not talking in the server anymore. (Hedging with the word "probably" didn't make the claim true, and of course I wasn't _consciously_ lying, but that hardly seems exculpatory.)
The next day, I belatedly pointed out that "Keltham thought that not learning about masochists he can never have, was obviously in retrospect what he'd have wanted Civilization to do" seemed to contradict "one thing hasn't changed: the message that you, yourself, should always be trying to infer the true truth". In the first statement, it didn't sound like Keltham thinks it's good that Civilization didn't tell him so that he could figure it out for himself (in accordance with the discipline of "you, yourself, always trying to infer the truth"). It sounded like he was better off not knowing—better off having a _less accurate self-model_ (not having the concept of "obligate romantic sadism"), better off having a _less accurate world-model_ (thinking that masochism isn't real).

In response to someone positing that dath ilani were choosing to be happier but less accurate predictors, I said that I read a blog post once about why you actually didn't want to do that, linking to [an Internet Archive copy of "Doublethink (Choosing to Be Biased)"](https://web.archive.org/web/20080216204229/https://www.overcomingbias.com/2007/09/doublethink-cho.html) from 2008[^hanson-conceit]—at least, that was _my_ attempted paraphrase; it was possible that I'd extracted a simpler message from it than the author intended.

[^hanson-conceit]: I was really enjoying the "Robin Hanson's blog in 2008" conceit.

A user called Harmless explained the loophole. "Doublethink" was pointing out that decisions that optimize the world for your preferences can't come from nowhere: if you avoid painful thoughts in your map, you damage your ability to steer away from painful outcomes in the territory. However, there was no rule that all the information-processing going into decisions that optimize the world for your preferences had to take place in _your brain_ ...

I saw where they were going and completed the thought: you could build a Friendly AI or a Civilization to see all the dirty things for you, that would make you unhappy to have to see yourself.

Yudkowsky clarified his position:

> My exact word choices often do matter: I said that you should always be trying to infer the truth. With the info you already have. In dath ilan if not in Earth, you might decline to open a box labeled "this info will make you permanently dissatisfied with sex" if the box was labeled by a prediction market.
> Trying to avoid inferences seems to me much more internally costly than declining to click on a spoiler box.

I understood the theory, but I was still extremely skeptical of the practice, assuming the eliezera were even remotely human. Yudkowsky described the practice of "keeping BDSM secret and trying to prevent most sadists from discovering what they are—informing them only when and if they become rich enough or famous enough that they'd have a high probability of successfully obtaining a very rare masochist" as a "basically reasonable policy option that [he] might vote for, not to help the poor dear other people, but to help [his] own counterfactual self."

The problem I saw with this was that becoming rich and famous isn't a purely random exogenous event. In order to make an informed decision about whether or not to put in the effort to try to _become_ rich and famous (as contrasted to choosing a lower-risk or more laid-back lifestyle), you need accurate beliefs about the perks of being rich and famous.

The dilemma of whether to make more ambitious economic choices in pursuit of sexual goals was something that _already_ happens to people on Earth, rather than being hypothetical. I once met a trans woman who spent a lot of her twenties and thirties working very hard to get money for various medical procedures. I think she would be worse off under a censorship regime run by self-styled Keepers who thought it was kinder to prevent _poor people_ from learning about the concept of "transsexualism".

Further discussion established that Yudkowsky was (supposedly) already taking into account the distortion on individuals' decisions, but that the empirical setting of probabilities and utilities happened to be such that ignorance came out on top.

I wasn't sure what my wordcount and diplomacy budget limits for the server were, but I couldn't let go; I kept the thread going on subsequent days. There was something I felt I should be able to convey, if I could just find the right words.

When Word of God says, "trying to prevent most [_X_] from discovering what they are [...] continues to strike me as a basically reasonable policy option", then, separately from the particular value of _X_, I expected people to jump out of their chairs and say, "No! This is wrong! Morally wrong! People can stand what is true about themselves, because they are already doing so!"

And to the extent that I was the only person jumping out of my chair, and there was a party-line response of the form, "Ah, but if it's been decreed by authorial fiat that these-and-such probabilities and utilities take such-and-these values, then in this case, self-knowledge is actually bad under the utilitarian calculus," I wasn't disputing the utilitarian calculus. I was wondering—here I used the "bug" emoji customarily used on Glowfic and adjacent servers to indicate uncertainty about the right words to use—_who destroyed your souls?_

Yudkowsky replied:

> it feels powerfully relevant to me that the people of whom I am saying this are eliezera. I get to decide what they'd want because, unlike with Earth humans, I get to put myself in their shoes. it's plausible to me that the prediction markets say that I'd be sadder if I was exposed to the concept of sadism in a world with no masochists. if so, while I wouldn't relinquish my Art and lose my powers by trying to delude myself about that once I'd been told, I'd consider it a friendly act to keep the info from me—because I have less self-delusional defenses than a standard Earthling, really—and a hostile act to tell me; and if you are telling me I don't get to make that decision for myself because it's evil, and if you go around shouting it from the street corners in dath ilan, then yeah I think most cities don't let you in.

I wish I had thought to ask if he'd have felt the same way in 2008.

[TODO: regrets and wasted time
* Do I have regrets about this Whole Dumb Story? A lot, surely—it's been a lot of wasted time. But it's also hard to say what I should have done differently; I could have listened to Ben more and lost faith in Yudkowsky earlier, but he had earned a lot of benefit of the doubt?