From: M. Taylor Saotome-Westlake Date: Tue, 14 Mar 2023 00:16:29 +0000 (-0700) Subject: memoir: children's lessons without very advanced math X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=963aec6c2b6c7ec03982c1793ef105999077d450;p=Ultimately_Untrue_Thought.git memoir: children's lessons without very advanced math --- diff --git a/content/drafts/standing-under-the-same-sky.md b/content/drafts/standing-under-the-same-sky.md index 8b1ef45..8a734f5 100644 --- a/content/drafts/standing-under-the-same-sky.md +++ b/content/drafts/standing-under-the-same-sky.md @@ -748,7 +748,10 @@ I didn't have that response thought through in real time. At the time, I just ag > **zackmdavis** — 11/29/2022 10:39 PM > It is! -> I'm not done working through the hate-warp +> I'm not done working through the hate-warp + +(This being a reference to part of _Planecrash_ in which [Keltham tells Carissa to be aware of her un-dath ilani tendency to feel "hatred that warps reality to be more hateable"](https://www.glowfic.com/replies/1882822#reply-1882822).) + > **Eliezer** — 11/29/2022 10:40 PM > so one thing hasn't changed: the message that you, yourself, should always be trying to infer the true truth, off the information you already have. > if you know you've got a hate-warp I don't know why you're running it and not trying to correct for it @@ -761,13 +764,15 @@ I didn't have that response thought through in real time. At the time, I just ag > it looks from inside here that the thing I'm not healed from is the thing where, as Oliver Habryka put it, I "should expect that depending on the circumstances community leaders might make up sophisticated stories for why pretty obviously true things are false" ([https://www.lesswrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal?commentId=he8dztSuBBuxNRMSY](https://www.lesswrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal?commentId=he8dztSuBBuxNRMSY)), and Michael and Ben and Jessica were _really_ helpful for orienting me to that particular problem, even if I disagree with them about a lot of other things and they seem crazy in other ways > (rule thinkers in, not out) -I was pleased to get the link to Habryka's comment in front of Yudkowsky, if he hadn't already seen it. +(I was pleased to get the link to Habryka's comment in front of Yudkowsky, if he hadn't already seen it.) > **Eliezer** — 11/29/2022 10:55 PM > the most harm they did you was to teach you to see malice where you should have seen mortality > noninnocent error is meaningfully different from innocent error; and noninnocent error is meaningfully different from malice -> Keltham deduced the lack of masochists in dath ilan by asking the question, "Why would Civilization have kept this information from me?", _ruling out_ or actually not even thinking of such ridiculous hypotheses as "Because it was fun", and settling on the obvious explanation that explained _why Keltham would have wanted Civilization to do that for him_—masochists not existing or being incredibly rare and unaffordable to him. You looked at this and saw malice everywhere; you couldn't even see _the fictional world_ the author was trying to give you _direct description about_. You didn't say that you disbelieved in the world; you could not see what was being _described_. 
->
+> Keltham deduced the lack of masochists in dath ilan by asking the question, "Why would Civilization have kept this information from me?", _ruling out_ or actually not even thinking of such ridiculous hypotheses as "Because it was fun", and settling on the obvious explanation that explained _why Keltham would have wanted Civilization to do that for him_—masochists not existing or being incredibly rare and unaffordable to him. You looked at this and saw malice everywhere; you couldn't even see _the fictional world_ the author was trying to give you _direct description about_. You didn't say that you disbelieved in the world; you could not see what was being _described_.
+
+(When a literary critic proposes a "dark" interpretation of an author's world, I think it's implied that they're expressing disbelief in the "intended" world; the fact that I was impudently refusing to buy the benevolent interpretation didn't mean that I didn't understand it.)
+
> Hate-warp like this is bad for truth-perception; my understanding of the situation is that it's harm done to you by the group you say you left. I would read this as being a noninnocent error of that group; that they couldn't get what they wanted from people who still had friends outside their own small microculture, and noninnocently then decided that this outer culture was bad and people needed to be pried loose from it. They tried telling some people that this outer culture was gaslighting them and maliciously lying to them and had to be understood in wholly adversarial terms to break free of the gaslighting; that worked on somebody, and made a new friend for them; so their brain noninnocently learned that it ought to use arguments like that again, so they must be true.
> This is a sort of thing I super did not do because I _understood_ it as a failure mode and Laid My Go Stones Against Ever Actually Being A Cult; I armed people with weapons against it, or tried to, but I was optimistic in my hopes about how much could actually be taught.
> **zackmdavis** — 11/29/2022 11:20 PM
@@ -791,9 +796,9 @@ I was pleased to get the link to Habryka's comment in front of Yudkowsky, if he
> **Eliezer** — 11/29/2022 11:31 PM
> I think you should worry first about editing the hate-warp out of yourself, but editing the memoir might be useful practice for it. Good night.

-It turned out that I was lying about probably not talking in the server anymore. (Hedging the word "probably" didn't make the claim true, and of course I wasn't _consciously_ lying, but that hardly seems exculpatory.)
+It turned out that I was lying about probably not talking in the server anymore. (Hedging with the word "probably" didn't make the claim true, and of course I wasn't _consciously_ lying, but that hardly seems exculpatory.)

-The next day, I belatedly pointed out that "Keltham thought that not learning about masochists he can never have, was obviously in retrospect what he'd have wanted Civilization to do" seemed to contradict "one thing hasn't changed: the message that you, yourself, should always be trying to infer the true truth". In the first statement, it didn't sound like Keltham thinks it's good that Civilization didn't tell him so that he could figure it how for himself (in accordance with the discipline of "you, yourself, always trying to infer the truth"). 
It sounded like he was better off not knowing—better off having a _less accurate self-model_ (not having the concept fo "obligate romantic sadism"), better off having a _less accurate world-model_ (thinking that masochism isn't real).
+The next day, I belatedly pointed out that "Keltham thought that not learning about masochists he can never have, was obviously in retrospect what he'd have wanted Civilization to do" seemed to contradict "one thing hasn't changed: the message that you, yourself, should always be trying to infer the true truth". In the first statement, it didn't sound like Keltham thinks it's good that Civilization didn't tell him so that he could figure it out for himself (in accordance with the discipline of "you, yourself, always trying to infer the truth"). It sounded like he was better off not knowing—better off having a _less accurate self-model_ (not having the concept of "obligate romantic sadism"), better off having a _less accurate world-model_ (thinking that masochism isn't real).

In response to someone positing that dath ilani were choosing to be happier but less accurate predictors, I said that I read a blog post once about why you actually didn't want to do that, linking to [an Internet Archive copy of "Doublethink (Choosing to Be Biased)"](https://web.archive.org/web/20080216204229/https://www.overcomingbias.com/2007/09/doublethink-cho.html) from 2008[^hanson-conceit]—at least, that was _my_ attempted paraphrase; it was possible that I'd extracted a simpler message from it than the author intended.

@@ -850,7 +855,7 @@ But those communities ... didn't call themselves _rationalists_, weren't _preten
"[The eleventh virtue is scholarship. Study many sciences and absorb their power as your own](https://www.yudkowsky.net/rational/virtues) ... unless a prediction market says that would make you less happy," just didn't have the same ring to it. Neither did "The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth. But higher than both of those, is trusting your Society's institutions to tell you which kinds of knowledge will make you happy"—even if you stipulated by authorial fiat that your Society's institutions are super-competent, such that they're probably right about the happiness thing.

-Attempting to illustrate the mood I thought dath ilan was missing, I quoted the scene from _Atlas Shrugged_ where Dagny expresses a wish to be kept ignorant for the sake of her own happiness and get shut down by Galt—and Dagny _thanks_ him. (I put Discord's click-to-reveal spoiler blocks around plot-relevant sentences—that'll be important in a few moments.)
+Attempting to illustrate the mood I thought dath ilan was missing, I quoted the scene from _Atlas Shrugged_ where our heroine Dagny expresses a wish to be kept ignorant for the sake of her own happiness and gets shut down by Galt—and Dagny _thanks_ him. (I put Discord's click-to-reveal spoiler blocks around plot-relevant sentences—that'll be important in a few moments.)

> "[...] Oh, if only I didn't have to hear about it! If only I could stay here and never know what they're doing to the railroad, and never learn when it goes!"
> 
@@ -858,20 +863,29 @@ Attempting to illustrate the mood I thought dath ilan was missing, I quoted the
> 
> She looked at him, her head lifted, knowing what chance he was rejecting. 
She thought that no man of the outer world would have said this to her at this moment—she thought of the world's code that worshipped white lies as an act of mercy—she felt a stab of revulsion against that code, suddenly seeing its full ugliness for the first time [...] she answered quietly, "Thank you. You're right."

-This (probably predictably) failed to resonate with other server participants, who were baffled why I seemed to be appealing to Ayn Rand's authority. But I was actually going for a _reverse_ appeal-to-authority: if _Ayn Rand_ understood that facing reality is virtuous, why didn't the 2020's "rationalists"? I didn't think the disdain for "Earth people" (again, as if there were any other kind) was justified, when Earth's philosophy of rationality was doing better than dath ilan's on this critical dimension.
+This (probably predictably) failed to resonate with other server participants, who were baffled as to why I seemed to be appealing to Ayn Rand's authority. But I was actually going for a _reverse_ appeal-to-authority: if _Ayn Rand_ understood that facing reality is virtuous, why didn't the 2020's "rationalists"? Wasn't that undignified? I didn't think the disdain for "Earth people" (again, as if there were any other kind) was justified, when Earth's philosophy of rationality (as exemplified by Ayn Rand or Robert ["Get the Facts"](https://www.goodreads.com/quotes/38764-what-are-the-facts-again-and-again-and-again) Heinlein) was doing better than dath ilan's on this critical dimension.

But if people's souls had been damaged such that they didn't have the "facing reality is virtuous" gear, it wasn't easy to install the gear by talking at them. Why was I so sure _my_ gear was correct?

-I wondered if the issue had to do with what Yudkowsky had identified as [the problem of non-absolute rules](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases#5__Counterargument__The_problem_of_non_absolute_rules_).
+I wondered if the issue had to do with what Yudkowsky had [identified as the problem of non-absolute rules](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases#5__Counterargument__The_problem_of_non_absolute_rules_), where not-literally-absolute rules like "Don't kill" or "Don't lie" have to be stated _as if_ they were absolutes in order to register with sufficient force in the human motivational system.

-Technically, as a matter of decision theory, "sacred values" are crazy.
+Technically, as a matter of decision theory, "sacred values" are crazy. It's easy to say—and feel with the passion of religious conviction—that it's always right to choose Truth and Life, and that no one could choose otherwise except wrongly, in the vile service of Falsehood and Death. But reality presents us with quantitative choices over uncertain outcomes, in which everything trades off against everything else under the [von Neumann–Morgenstern axioms](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem); if you had to choose between a small, unimportant Truth and the Life of millions, you'd probably choose Life—but more importantly, the very fact that you might have to choose means that Truth and Life can't both be infinitely sacred to you, and must be measured on a common scale with lesser goods like mere Happiness.
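+
+(A minimal sketch of the "common scale" point, with the probabilities, weights, and helper function all invented for the illustration rather than taken from anything in the story: any finite weight on Truth functions as an exchange rate against Life, and only a literally infinite weight never trades.)
+
+```python
+# Toy von Neumann–Morgenstern agent with made-up numbers (illustration only).
+# An outcome is a dict of goods; utility is a weighted sum of the goods.
+
+def expected_utility(lottery, weights):
+    """Expected utility of a lottery, given as a list of (probability, outcome) pairs."""
+    return sum(p * sum(weights[good] * amount for good, amount in outcome.items())
+               for p, outcome in lottery)
+
+# Sacrifice one small, unimportant Truth for a 90% chance of saving a million
+# lives, versus keeping the truth and saving no one.
+lie = [(0.9, {"truth": -1, "life": 1_000_000}), (0.1, {"truth": -1, "life": 0})]
+honest = [(1.0, {"truth": 0, "life": 0})]
+
+for truth_weight in (1.0, 1e6, 1e12):  # how "sacred" is Truth on the common scale?
+    weights = {"truth": truth_weight, "life": 1.0}
+    choice = "lie" if expected_utility(lie, weights) > expected_utility(honest, weights) else "honest"
+    print(f"truth weight {truth_weight:g}: choose {choice}")
+# Crossover at truth_weight == 900,000; raise the stakes enough and even the
+# 1e12 agent trades, because no finite weight is actually "sacred".
+```
+
+(Linear weights keep the arithmetic legible; it's the continuity axiom that does the real work of ruling out lexically "sacred" preferences.)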
-[TODO: finish § Atlas Shrugged quote and children's morals
- * why sacred values fail
- * examples of ways I think reality is complicated that makes the BDSM coverup bad
-]
+I knew that. The other people in the chatroom knew that. So to the extent that the argument amounted to me saying "Don't lie" (about the existence of masochism), and them saying "Don't lie unless the badness of lying is outweighed by the goodness of increased happiness", why was I so confident that I was in the right, when they were wisely acknowledging the trade-offs under the Law, and I was sticking to my (incoherent) sacred value of Truth? Didn't they obviously have the more sophisticated side of the argument?
+
+The problem was that, in my view, the people who weren't talking about Truth as if it were a sacred value were being _wildly recklessly casual_ about harms from covering things up, as if they didn't see the non-first-order harms _at all_. I felt I had to appeal to the lessons for children about how Lying Is Bad, because if I tried to make a more sophisticated argument about it being _quantitatively_ crazy to cover up psychology facts that make people sad, I would face a brick wall of "authorial fiat declares that the probabilities and utilities are specifically fine-tuned such that ignorance is good".
+
+Even if you specified by authorial fiat that "latent sadists could use the information to decide whether or not to try to become rich and famous" didn't tip the utility calculus in itself, [facts are connected to each other](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies): there were _more consequences_ to the coverup, more ways in which better-informed people could make better decisions than worse-informed people.
+
+What about the costs of all the other recursive censorship you'd have to do to keep the secret? (If a biography mentioned masochism in passing along with many other traits of the subject, you'd need to either censor the paragraphs with that detail, or censor the whole book. Those are real costs, even under a soft-censorship regime where people can give special consent to access "Ill Advised" products.) Maybe latent sadists could console themselves with porn if they knew, or devote their careers to making better sex robots, just as people on Earth with non-satisfiable sexual desires manage to get by. (I _knew some things_ about this topic.) What about dath ilan's "heritage optimization" (eugenics) program? Are they going to try to breed more masochists, or fewer sadists, and who's authorized to know that? And so on.
+
+A user called RationalMoron asked if I was appealing to a terminal value. Did I think people should have accurate self-models even if they don't want to?
+
+Obviously I wasn't going to use a universal quantifier over all possible worlds and all possible minds, but in human practice, yes: people who prefer to believe lies about themselves are doing the wrong thing; people who lie to their friends to keep them happy are doing the wrong thing. People can stand what is true, because they are already doing so. I realized this was a children's lesson without very advanced math, but I thought it was a better lesson than, "Ah, but what if a _prediction market_ says they can't???"
+
+Apparently I struck a nerve. 
[TODO: Yudkowsky tests me] diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md index 3070313..78a69b9 100644 --- a/notes/memoir-sections.md +++ b/notes/memoir-sections.md @@ -9,7 +9,7 @@ marked TODO blocks— ✓ scuffle on "Yes Requires the Possibility" [pt. 4] ✓ "Unnatural Categories Are Optimized for Deception" [pt. 4] ✓ Eliezerfic fight: will-to-Truth vs. will-to-happiness [pt. 6] -- Eliezerfic fight: Ayn Rand and children's morals [pt. 6] +✓ Eliezerfic fight: Ayn Rand and children's morals [pt. 6] - regrets, wasted time, conclusion [pt. 6] - "Lesswrong.com is dead to me" [pt. 4] _ AI timelines scam [pt. 4]