From b94664c04ab4b15728345f263a47ca1a778172d9 Mon Sep 17 00:00:00 2001
From: "M. Taylor Saotome-Westlake"
Date: Sun, 1 Dec 2019 09:20:16 -0800
Subject: [PATCH] "I Tell Myself" 1 December drafting session 1

---
 ...-end-or-a-hill-of-validity-in-defense-of-meaning.md |  2 +-
 notes/i-tell-myself-notes.txt                          | 10 +++++++++-
 notes/i-tell-myself-sections.md                        |  8 +++++++-
 3 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/content/drafts/i-tell-myself-to-let-the-story-end-or-a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/i-tell-myself-to-let-the-story-end-or-a-hill-of-validity-in-defense-of-meaning.md
index 4970d78..1a5d139 100644
--- a/content/drafts/i-tell-myself-to-let-the-story-end-or-a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/i-tell-myself-to-let-the-story-end-or-a-hill-of-validity-in-defense-of-meaning.md
@@ -52,7 +52,7 @@ The short story ["Failed Utopia #4-2"](https://www.lesswrong.com/posts/ctpkTaqTK
 
 At the time, [I expressed horror](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/PhiGnX7qKzzgn2aKb) at the idea in the comments section, because my quasi-religious psychological-sex-differences denialism required that I be horrified. But looking back eleven years later (my deconversion from my teenage religion being pretty thorough at this point, I think), the _argument makes sense_ (though you need an additional [handwave](https://tvtropes.org/pmwiki/pmwiki.php/Main/HandWave) to explain why the AI doesn't give every _individual_ their separate utopia—if existing women and men aren't optimal partners for each other, so too are individual men not optimal same-sex friends for each other).
 
-On my reading of the text, it is _significant_ that the AI-synthesized complements for men are given their own name, the _verthandi_, rather than just being referred to as women. The _verthandi_ may _look like_ women, they may be _approximately_ psychologically human, but the _detailed_ psychology of "superintelligently-engineered optimal romantic partner for a human male" is not going to come out of the distribution of actual human females, and judicious exercise of the [tenth virtue of precision](http://yudkowsky.net/rational/virtues/) demands that a _different word_ be coined for this hypothetical science-fictional type of person. Calling the _verthandi_ "women" would be _worse writing_; it would _fail to communicate_ what, in the story, has been lost.
+On my reading of the text, it is _significant_ that the AI-synthesized complements for men are given their own name, the _verthandi_, rather than just being referred to as women. The _verthandi_ may _look like_ women, they may be _approximately_ psychologically human, but the _detailed_ psychology of "superintelligently-engineered optimal romantic partner for a human male" is not going to come out of the distribution of actual human females, and judicious exercise of the [tenth virtue of precision](http://yudkowsky.net/rational/virtues/) demands that a _different word_ be coined for this hypothetical science-fictional type of person. Calling the _verthandi_ "women" would be _worse writing_; it would _fail to communicate_ the impact of what has taken place in the story.
 
 [section: reaction to "Changing Emotions"]
 
diff --git a/notes/i-tell-myself-notes.txt b/notes/i-tell-myself-notes.txt
index 5759296..6518ae4 100644
--- a/notes/i-tell-myself-notes.txt
+++ b/notes/i-tell-myself-notes.txt
@@ -516,4 +516,12 @@ to which my attitude is: if your behavior is optimized to respond to political t
 
 I think if the so-called "rationality" community is going to not be FRAUDULENT, we should at LEAST be able to publicly clear up the philosophy-of-language mistake (I DON'T expect a community consensus on gender politics; that would be crazy! I JUST expect public consensus on "You can't define a word any way you want", which was not controversial when Eliezer taught us in 2008)
 
-I'm grateful to Ben H./Michael V./Jessica/Sarah for actually helping me, but I feel incredibly betrayed that Scott is playing dumb about philosophy (and doesn't want to talk to our coalition anymore), Eliezer will PRIVATELY admit that he has no objections to my philosophy arguments but is playing dumb about how the things he said in public were incredibly misleading (and doesn't want to talk to our coalition anymore). I have more conversation-bandwidth with Anna because I've been friends with her for 10 years, but Anna doesn't believe in free speech; she'll privately sympathize that it's bad that we're in a situation where political factors are interfering with being able to have an honest public conversation about philosophy, but
+I'm grateful to [...] for actually helping me, but I feel incredibly betrayed that Scott is playing dumb about philosophy (and doesn't want to talk to our coalition anymore), Eliezer will PRIVATELY admit that he has no objections to my philosophy arguments but is playing dumb about how the things he said in public were incredibly misleading (and doesn't want to talk to our coalition anymore). I have more conversation-bandwidth with Anna because I've been friends with her for 10 years, but Anna doesn't believe in free speech; she'll privately sympathize that it's bad that we're in a situation where political factors are interfering with being able to have an honest public conversation about philosophy, but
+
+
+
+This whole multi-year drama _should_ have been a three-comment conversation. If we were _actually trying_ to do the systematically-correct-reasoning thing
+
+Random Commenter: Hey, that can't be right—we had a whole Sequence about
+
+Robot cult leaders:
diff --git a/notes/i-tell-myself-sections.md b/notes/i-tell-myself-sections.md
index e8d13b3..78c8a1b 100644
--- a/notes/i-tell-myself-sections.md
+++ b/notes/i-tell-myself-sections.md
@@ -2,7 +2,11 @@ As an illustration of how the hope for radical human enhancement is [fraught wit
 
 It would be hard to overstate how much of an impact this post had on me. I've previously linked it on this blog eight times. In June 2008, half a year before it was published, I encountered the [2004 mailing list post](http://lists.extropy.org/pipermail/extropy-chat/2004-September/008924.html) that was its predecessor. (The fact that I was trawling through old mailing list archives searching for content by the Great Teacher that I hadn't already read, tells you something about what a fanboy I am.) I immediately wrote to a friend: "[...] I cannot adequately talk about my feelings. Am I shocked, liberated, relieved, scared, angry, amused?"
 
-The argument goes:
+The argument goes: it might be easy to _imagine_ changing sex and refer to the idea in a short English sentence, but the real physical world has implementation details, and the implementation details aren't filled in by the short English sentence. The human body, including the brain, is an enormously complex integrated organism; there's no [plug-and-play](https://en.wikipedia.org/wiki/Plug_and_play) architecture by which you can just swap your brain into a new body and have everything work without re-mapping the connections in your motor cortex.
+
+
+
+
 
 -----
 
@@ -72,3 +76,5 @@ This is my fault. It's [not like we weren't warned](https://www.lesswrong.com/po
 
 ----
 
+
+
-- 
2.17.1