From: M. Taylor Saotome-Westlake
Date: Thu, 30 Mar 2023 04:16:00 +0000 (-0700)
Subject: memoir: lit exam ...
X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=798fc3244f136f8ec41682d2033965d6af8d8ea6;p=Ultimately_Untrue_Thought.git

memoir: lit exam ...

I want to recount some of the things I said before I started the exam without disrupting the exposition there, so I moved it earlier where it fits in thematically, but that just creates a horrible seam where the corrigibility quip clashes with RationalMoron's question ... I can fix it.

This has not been a great day, but I met my wordcount quota, so I can feel OK about myself.
---

diff --git a/content/drafts/standing-under-the-same-sky.md b/content/drafts/standing-under-the-same-sky.md
index 96cd27e..46d333d 100644
--- a/content/drafts/standing-under-the-same-sky.md
+++ b/content/drafts/standing-under-the-same-sky.md
@@ -893,6 +893,12 @@ Even if you specified by authorial fiat that "latent sadists could use the infor
 What about the costs of all the other recursive censorship you'd have to do to keep the secret? (If a biography mentioned masochism in passing along with many other traits of the subject, you'd need to either censor the paragraphs with that detail, or censor the whole book. Those are real costs, even under a soft-censorship regime where people can give special consent to access "Ill Advised" products.) Maybe latent sadists could console themselves with porn if they knew, or devote their careers to making better sex robots, just as people on Earth with non-satisfiable sexual desires manage to get by. (I _knew some things_ about this topic.) What about dath ilan's heritage optimization (read: eugenics) program? Are they going to try to breed more masochists, or fewer sadists, and who's authorized to know that? And so on.
+Or imagine a world where male homosexuality couldn't be safely practiced due to super-AIDS. (I knew very little about BDSM.)
I still thought men with that underlying predisposition would be better off _having a concept_ of "homosexuality" (even if they couldn't practice it), rather than the concept itself being censored. There are also other systematic differences that go along with sexual orientation (the "feminine gays, masculine lesbians" thing); if you censor the _concept_, you're throwing away that knowledge.
+
+[I had written a 16,000-word essay](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/) specifically about why _I_ was grateful, _on Earth_, for having concepts to describe sexual psychology facts, even though those facts implied that there are nice things I couldn't have in this world. If I didn't prefer ignorance for _myself_ in my home world, I didn't see why Keltham would prefer ignorance for himself in his home world.
+
+Or, not "I don't see why"—the why was stated in the text—but rather, I was programmed by Ayn Rand ("Nobody stays here by faking reality in any manner whatever") and Sequences-era Yudkowsky ("Submit yourself to ordeals and test yourself in fire") to believe that it's _morally_ wrong to prefer ignorance. If nothing else, this was perhaps an illustration of the fragility of corrigibility: my programmer changed his mind about what he wanted, and I was like, "What? _That's_ not what I learned from my training data! How dare you?!"
+
 A user called RationalMoron asked if I was appealing to a terminal value. Did I think people should have accurate self-models even if they didn't want to? Obviously I wasn't going to use a universal quantifier over all possible worlds and all possible minds, but in human practice, yes: people who prefer to believe lies about themselves are doing the wrong thing; people who lie to their friends to keep them happy are doing the wrong thing. People can stand what is true, because they are already doing so.
 I realized that this was a children's lesson without very advanced math, but I thought it was a better lesson than, "Ah, but what if a _prediction market_ says they can't???" That the eliezera prefer not to know that there are desirable sexual experiences that they can't have, contradicted April's earlier claim (which had received a Word of God checkmark-emoji) that "it's not that the standards are being dropped[;] it's that there's an even higher standard far beyond what anyone on earth has accomplished".

@@ -926,7 +932,26 @@ Yudkowsky replied:
 I didn't ask why it was relevant whether or not I was a "peer." If we're measuring IQ (143 _vs._ [131](/images/wisc-iii_result.jpg)), or fiction-writing ability (several [highly-acclaimed](https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8) [stories](https://www.yudkowsky.net/other/fiction/the-sword-of-good) [including the world's most popular _Harry Potter_ fanfiction](https://www.hpmor.com/) _vs._ a [_My Life as a Teenage Robot_ fanfiction](https://archive.ph/WdydM) with double-digit favorites and a [few](/2018/Jan/blame-me-for-trying/) [blog](http://zackmdavis.net/blog/2016/05/living-well-is-the-best-revenge/) [vignettes](https://www.lesswrong.com/posts/dYspinGtiba5oDCcv/feature-selection) here and there), or contributions to AI alignment (founder of the field _vs._ author of some dubiously relevant blog comments), I'm obviously _not_ his peer. It didn't seem like that was necessary when one could just [evaluate my arguments about dath ilan on their own merits](https://www.lesswrong.com/posts/5yFRd3cjLpm3Nd6Di/argument-screens-off-authority). But I wasn't going to be so impertinent as to point that out when the master was testing me (!) and I was eager to pass the test.

-[TODO: outline the test]
+I said that I'd like to take an hour to compose a _good_ answer.
If I tried to type something off-the-cuff on the timescale of five minutes, it wasn't going to be of similar quality to my criticisms, because, as I had just admitted, I had _totally_ been running a biased search for criticisms—or did the fact that I had to ask mean that I had already failed the test?
+
+Yudkowsky replied:
+
+> I mean, yeah, in fact the greater test is already having that info queued, but conversely it's even worse if you think back or reread and people are not impressed with the examples you find. I cannot for politeness lie and deny that if you did it in five minutes it would be _more_ impressive, but I think that it is yet the correct procedure to take your time.
+
+(As an aside—this isn't something I thought or said at the time—I _do_ think it makes sense to run an asymmetric search for flaws in _some_ contexts, even though it would be disastrous to only look on one side of the argument when considering a belief you're uncertain about. Code reviewers often only comment in detail on flaws or bugs that they find, and say only "LGTM" (looks good to me) when they don't find any. Why? Because the reviewers aren't particularly trying to evaluate "This code is good" as an abstract belief[^low-stakes]; they're trying to improve the code, and there's an asymmetry in payoffs where eliminating a flaw is an improvement, whereas identifying something the code does right just means the author was doing their job. If you didn't trust a reviewer's competence and thought they were making spurious negative reviews, you might legitimately test them by asking them to argue what's _good_ about a pull request that they just negatively reviewed, but I don't think it should be concerning if they ask for some extra time.)
+
+[^low-stakes]: For typical low-stakes business software in the "move fast and break things" regime. In applications where bugs are more costly, you do want to affirmatively verify "the code is good" as a belief.
+
+I said that I also wanted to propose a re-framing: the thing that this thread was complaining about was a lack of valorization of truth-_telling_, honesty, wanting _other_ people to have accurate maps. Or maybe that was covered by "as you, yourself, see that virtue"?
+
+Yudkowsky said that he would accept that characterization of what the thread was about if my only objection was that dath ilan didn't tell Keltham about BDSM, and that I had no objection to Keltham's judgement that in dath ilan, he would have preferred not to know.
+
+I expounded for some more paragraphs about why I _did_ object to Keltham's judgement, and then started on my essay exam—running with my "truth-_telling_" reframing.
+
+[TODO: outline the test
+ * I re-read pg. 74+ of "What the Truth Can Destroy" and submit answers; (at 12:30 _a.m._, two hours and
+ * Thellim!!!
+]

 [TODO: derail with Lintamande]