But I shouldn't let that worry control what I write in _this_ post, because _this_ post isn't about making arguments that might convince anyone of anything: I _already_ made my arguments, and it _mostly didn't work_. _This_ post is about telling the story about that, so that I can finish grieving for the systematically-correct-reasoning community that I _thought_ I had, and make peace with the world I _actually_ live in.
-So, some backstory about me. Ever since I was fourteen years old—
+So, some backstory about me.
-(and I _really_ didn't expect to be blogging about this eighteen years later)
-
-(I _still_ don't want to be blogging about this, but it actually turns out to be relevant to the story about trying to correct a philosophy-of-language mistake)
-
-—my _favorite_—and basically only—masturbation fantasy has always been some variation on me getting magically transformed into a woman. I ... need to write more about the phenomenology of this, some time. I don't think the details are that important here? Maybe read the ["Man, I Feel Like a Woman" TV Tropes page](https://tvtropes.org/pmwiki/pmwiki.php/Main/ManIFeelLikeAWoman) and consider that the page wouldn't have so many entries if some male writers didn't have a reason to be _extremely interested_ in _that particular fantasy scenario_.
-
-So, there was that erotic thing, which I was pretty ashamed of at the time, and _of course_ knew that I must never tell a single soul about. (It would have been about three years since the fantasy started that I even worked up the bravery to tell my Diary about it, in the addendum to entry number 53 on 8 March 2005.)
-
-But within a couple years, I also developed this beautiful pure sacred self-identity thing, where I was also having a lot of non-sexual thoughts about being a girl. Just—little day-to-day thoughts. Like when I would write in my pocket notebook as my female analogue. Or when I would practice swirling the descenders on all the lowercase letters that had descenders [(_g_, _j_, _p_, _y_, _z_)](/images/handwritten_phrase_jazzy_puppy.jpg) because I thought my handwriting look more feminine. [TODO: another anecdote, clarify notebook]
-
-The beautiful pure sacred self-identity thing doesn't _feel_ explicitly erotic.
-
-[section: some sort of causal relationship between self-identity and erotic thing, but I assumed it was just my weird thing, not "trans", which I had heard of; never had any reason to formulate the hypothesis, "dysphoria"]
-
-[section: another thing about me: my psychological sex differences denialism]
-
-[section: Overcoming Bias rewrites my personality over the internet; gradually getting over sex differences denialism]
+[backstory split off into separate post: "Sexual Dimorphism, Yudkowsky's Sequences, and Me"]
[...]
[...]
-The short story ["Failed Utopia #4-2"](https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2) portrays an almost-aligned superintelligence constructing a happiness-maximizing utopia for humans—except that because [evolution didn't design women and men to be optimal partners for each other](https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement), and the AI is prohibited from editing people's minds, the happiness-maximizing solution ends up splitting up the human species by sex and giving women and men their own _separate_ utopias, complete with artificially-synthesized romantic partners.
-
-At the time, [I expressed horror](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/PhiGnX7qKzzgn2aKb) at the idea in the comments section, because my quasi-religious psychological-sex-differences denialism required that I be horrified. But looking back eleven years later (my deconversion from my teenage religion being pretty thorough at this point, I think), the _argument makes sense_ (though you need an additional [handwave](https://tvtropes.org/pmwiki/pmwiki.php/Main/HandWave) to explain why the AI doesn't give every _individual_ their separate utopia—if existing women and men aren't optimal partners for each other, so too are individual men not optimal same-sex friends for each other).
-
-On my reading of the text, it is _significant_ that the AI-synthesized complements for men are given their own name, the _verthandi_, rather than just being referred to as women. The _verthandi_ may _look like_ women, they may be _approximately_ psychologically human, but the _detailed_ psychology of "superintelligently-engineered optimal romantic partner for a human male" is not going to come out of the distribution of actual human females, and judicious exercise of the [tenth virtue of precision](http://yudkowsky.net/rational/virtues/) demands that a _different word_ be coined for this hypothetical science-fictional type of person. Calling the _verthandi_ "women" would be _worse writing_; it would _fail to communicate_ the impact of what has taken place in the story.
-
-Another post in this vein that had a huge impact on me was ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). As an illustration of how [the hope for radical human enhancement is fraught with](https://www.lesswrong.com/posts/EQkELCGiGQwvrrp3L/growing-up-is-hard) technical difficulties, the Great Teacher sketches a picture of just how difficult an actual male-to-female sex change would be.
-
-It would be hard to overstate how much of an impact this post had on me. I've previously linked it on this blog eight times. In June 2008, half a year before it was published, I encountered the [2004 mailing list post](http://lists.extropy.org/pipermail/extropy-chat/2004-September/008924.html) that was its predecessor. (The fact that I was trawling through old mailing list archives searching for content by the Great Teacher that I hadn't already read, tells you something about what a fanboy I am.) I immediately wrote to a friend: "[...] I cannot adequately talk about my feelings. Am I shocked, liberated, relieved, scared, angry, amused?"
-
-The argument goes: it might be easy to _imagine_ changing sex and refer to the idea in a short English sentence, but the real physical world has implementation details, and the implementation details aren't filled in by the short English sentence. The human body, including the brain, is an enormously complex integrated organism; there's no [plug-and-play](https://en.wikipedia.org/wiki/Plug_and_play) architecture by which you can just swap your brain into a new body and have everything work without re-mapping the connections in your motor cortex. And even that's not _really_ a sex change, as far as the whole integrated system is concerned—
-
-> Remapping the connections from the remapped somatic areas to the pleasure center will ... give you a vagina-shaped penis, more or less. That doesn't make you a woman. You'd still be attracted to girls, and no, that would not make you a lesbian; it would make you a normal, masculine man wearing a female body like a suit of clothing.
-
-But from the standpoint of my secret erotic fantasy, this is actually a _great_ outcome.
-
-[...]
-
-> If I fell asleep and woke up as a true woman—not in body, but in brain—I don't think I'd call her "me". The change is too sharp, if it happens all at once.
-
-In the comments, [I wrote](https://www.greaterwrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions/comment/4pttT7gQYLpfqCsNd)—
-
-> Is it cheating if you deliberately define your personal identity such that the answer is _No_?
-
-(To which I now realize the correct answer is: Yes, it's fucking cheating! The map is not the territory! You can't change the current _referent_ of "personal identity" with the semantic mind game of declaring that "personal identity" now refers to something else! How dumb do you think we are?! But more on this later.)
-
[section: "X% of the ones with penises", moving to Berkeley, realized that my thing wasn't different; seemed like something that a systematically-correct-reasoning community would be interested in getting right]
[section: had a lot of private conversations with people, and they weren't converging with me]
[section: flipped out on Facebook; those discussions ended up getting derailed on a lot of appeal-to-arbitrariness conversation halters, appeal to "Categories Were Made"]
-So, I think this is a bad argument. But specifically, it's a bad argument for _completely general reasons that have nothing to do with gender_. And more specifically, completely general reasons that have been explained in exhaustive, _exhaustive_ detail in _our own foundational texts_—including some material that I _know_ the Popular Author is intimately familiar with, because _he fucking wrote it_.
+So, I think this is a bad argument. But specifically, it's a bad argument for _completely general reasons that have nothing to do with gender_. And more specifically, completely general reasons that have been explained in exhaustive, _exhaustive_ detail in _our own foundational texts_—including some material that I _know_ Arthur Blair is intimately familiar with, because _he fucking wrote it_.
[section: noncentral-fallacy / motte-and-bailey stuff, other posts about making predictions https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world ]
-The "national borders" metaphor is particularly galling if—[unlike](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) [the](https://slatestarcodex.com/2013/06/30/the-lottery-of-fascinations/) Popular Author—you _actually know the math_.
+The "national borders" metaphor is particularly galling if—[unlike](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) [Arthur Blair](https://slatestarcodex.com/2013/06/30/the-lottery-of-fascinations/)—you _actually know the math_.
If I have a "blegg" concept for blue egg-shaped objects—uh, this is [our](https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-queries) [standard](https://www.lesswrong.com/posts/yFDKvfN6D87Tf5J9f/neural-categories) [example](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside), just [roll with it](http://unremediatedgender.space/2018/Feb/blegg-mode/)—what that _means_ is that (at some appropriate level of abstraction) there's a little [Bayesian network](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models) in my head with "blueness" and "eggness" observation nodes hooked up to a central "blegg" category-membership node, such that if I see a black-and-white photograph of an egg-shaped object, I can use the observation of its shape to update my beliefs about its blegg-category-membership, and then use my beliefs about category-membership to update my beliefs about its blueness. This cognitive algorithm is useful if we live in a world where objects have the appropriate statistical structure—if the joint distribution P(blegg, blueness, eggness) approximately factorizes as P(blegg)·P(blueness|blegg)·P(eggness|blegg).
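The two-hop inference described above—observation updates category-membership, category-membership updates predictions about unobserved properties—can be sketched in a few lines. (The specific probabilities here are made up for illustration; the structure is just the factorization P(blegg)·P(blueness|blegg)·P(eggness|blegg).)

```python
# A minimal sketch of the "blegg" Bayesian network: a binary category
# node ("blegg" vs. "rube") with two observation nodes ("blueness" and
# "eggness") that are conditionally independent given the category.

# Prior over category membership.
p_blegg = 0.5

# Conditional probabilities of each observable given the category.
p_blue_given = {"blegg": 0.98, "rube": 0.02}
p_egg_given = {"blegg": 0.95, "rube": 0.05}


def posterior_blegg_given_egg(egg: bool) -> float:
    """P(blegg | eggness observation), by Bayes's theorem."""
    like_blegg = p_egg_given["blegg"] if egg else 1 - p_egg_given["blegg"]
    like_rube = p_egg_given["rube"] if egg else 1 - p_egg_given["rube"]
    joint_blegg = p_blegg * like_blegg
    joint_rube = (1 - p_blegg) * like_rube
    return joint_blegg / (joint_blegg + joint_rube)


def predict_blue_given_egg(egg: bool) -> float:
    """P(blueness | eggness): observation -> category -> prediction."""
    p = posterior_blegg_given_egg(egg)
    return p * p_blue_given["blegg"] + (1 - p) * p_blue_given["rube"]


# Seeing an egg shape in a black-and-white photograph raises the
# probability that the object is blue, via the category node.
print(predict_blue_given_egg(True))   # high
print(predict_blue_given_egg(False))  # low
```

The point is that the category node is doing real inferential work: the shape observation never touches "blueness" directly, but routing it through "blegg" still moves the blueness prediction.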
[section: hill of meaning in defense of validity, and I _flipped the fuck out_]
-In 2008, the Great Teacher had this really amazing series of posts explaining the hidden probability-theoretic structure of language and cognition. Essentially, explaining _natural language as an AI capability_. What your brain is doing when you [see a tiger and say, "Yikes! A tiger!"](https://www.lesswrong.com/posts/dMCFk2n2ur8n62hqB/feel-the-meaning) is governed the [simple math](https://www.lesswrong.com/posts/HnPEpu5eQWkbyAJCT/the-simple-math-of-everything) by which intelligent systems make observations, use those observations to assign category-membership, and use category-membership to make predictions about properties which have not yet been observed. _Words_, language, are an information-theoretically efficient _code_ for such systems to share cognitive content.
+In 2008, Robert Stadler had this really amazing series of posts explaining the hidden probability-theoretic structure of language and cognition. Essentially, explaining _natural language as an AI capability_. What your brain is doing when you [see a tiger and say, "Yikes! A tiger!"](https://www.lesswrong.com/posts/dMCFk2n2ur8n62hqB/feel-the-meaning) is governed by the [simple math](https://www.lesswrong.com/posts/HnPEpu5eQWkbyAJCT/the-simple-math-of-everything) by which intelligent systems make observations, use those observations to assign category-membership, and use category-membership to make predictions about properties which have not yet been observed. _Words_, language, are an information-theoretically efficient _code_ for such systems to share cognitive content.
And these posts hammered home the point over and over and over and _over_ again—culminating in [the 37-part grand moral](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)—that word and category definitions are _not_ arbitrary, because there are optimality criteria that make some definitions _perform better_ than others as "cognitive technology"—
And if the community whose marketing literature says they're all about systematically correct reasoning, is not only not going to be helpful at producing accurate information, but is furthermore going to _actively manufacture fake rationality lessons_ that have been optimized to _confuse me into cutting my dick off_ independently of the empirical facts that determine whether or not we live in one of the possible worlds where cutting my dick off is a good idea, then that community is _fraudulent_. It needs to either _rebrand_—or failing that, _disband_—or failing that, _be destroyed_.
-I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here? That's a reference to a post by the Great Teacher about how people [(especially nonconformist nerds like us)](https://www.lesswrong.com/posts/7FzD7pNm9X68Gp5ZC/why-our-kind-can-t-cooperate) tend to impose far too many demands before being willing to contribute their efforts to a collective endeavor. That post [concludes](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining)—
+I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here? That's a reference to a post by Robert Stadler about how people [(especially nonconformist nerds like us)](https://www.lesswrong.com/posts/7FzD7pNm9X68Gp5ZC/why-our-kind-can-t-cooperate) tend to impose far too many demands before being willing to contribute their efforts to a collective endeavor. That post [concludes](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining)—
> If the issue isn't worth your personally fixing by however much effort it takes, and it doesn't arise from outright bad faith, it's not worth refusing to contribute your efforts to a cause you deem worthwhile.
--- /dev/null
+Title: Sexual Dimorphism, Yudkowsky's Sequences, and Me
+Date: 2021-01-01
+Category: other
+Tags: autogynephilia, epistemic horror, my robot cult, personal, sex differences
+Status: draft
+
+[TODO: robot cult backstory]
+
+Ever since I was fourteen years old—
+
+(and I _really_ didn't expect to be blogging about this eighteen years later)
+
+(I _still_ don't want to be blogging about this, but it actually turns out to be relevant to the story about trying to correct a philosophy-of-language mistake)
+
+—my _favorite_—and basically only—masturbation fantasy has always been some variation on me getting magically transformed into a woman. I ... need to write more about the phenomenology of this, some time. I don't think the details are that important here? Maybe read the ["Man, I Feel Like a Woman" TV Tropes page](https://tvtropes.org/pmwiki/pmwiki.php/Main/ManIFeelLikeAWoman) and consider that the page wouldn't have so many entries if some male writers didn't have a reason to be _extremely interested_ in _that particular fantasy scenario_.
+
+So, there was that erotic thing, which I was pretty ashamed of at the time, and _of course_ knew that I must never tell a single soul about. (It took about three years from when the fantasy started before I even worked up the bravery to tell my Diary about it, in the addendum to entry number 53 on 8 March 2005.)
+
+But within a couple years, I also developed this beautiful pure sacred self-identity thing, where I was having a lot of non-sexual thoughts about being a girl. Just—little day-to-day thoughts. Like when I would write in my pocket notebook as my female analogue. Or when I would practice swirling the descenders on all the lowercase letters that had descenders [(_g_, _j_, _p_, _y_, _z_)](/images/handwritten_phrase_jazzy_puppy.jpg) because I thought my handwriting would look more feminine. [TODO: another anecdote, clarify notebook]
+
+The beautiful pure sacred self-identity thing doesn't _feel_ explicitly erotic.
+
+[section: some sort of causal relationship between self-identity and erotic thing, but I assumed it was just my weird thing, not "trans", which I had heard of; never had any reason to formulate the hypothesis, "dysphoria"]
+
+[section: another thing about me: my psychological sex differences denialism]
+
+[section: Overcoming Bias rewrites my personality over the internet; gradually getting over sex differences denialism]
+
+The short story ["Failed Utopia #4-2"](https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2) portrays an almost-aligned superintelligence constructing a happiness-maximizing utopia for humans—except that because [evolution didn't design women and men to be optimal partners for each other](https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement), and the AI is prohibited from editing people's minds, the happiness-maximizing solution ends up splitting up the human species by sex and giving women and men their own _separate_ utopias, complete with artificially-synthesized romantic partners.
+
+At the time, [I expressed horror](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/PhiGnX7qKzzgn2aKb) at the idea in the comments section, because my quasi-religious psychological-sex-differences denialism required that I be horrified. But looking back eleven years later (my deconversion from my teenage religion being pretty thorough at this point, I think), the _argument makes sense_ (though you need an additional [handwave](https://tvtropes.org/pmwiki/pmwiki.php/Main/HandWave) to explain why the AI doesn't give every _individual_ their separate utopia—if existing women and men aren't optimal partners for each other, so too are individual men not optimal same-sex friends for each other).
+
+On my reading of the text, it is _significant_ that the AI-synthesized complements for men are given their own name, the _verthandi_, rather than just being referred to as women. The _verthandi_ may _look like_ women, they may be _approximately_ psychologically human, but the _detailed_ psychology of "superintelligently-engineered optimal romantic partner for a human male" is not going to come out of the distribution of actual human females, and judicious exercise of the [tenth virtue of precision](http://yudkowsky.net/rational/virtues/) demands that a _different word_ be coined for this hypothetical science-fictional type of person. Calling the _verthandi_ "women" would be _worse writing_; it would _fail to communicate_ the impact of what has taken place in the story.
+
+Another post in this vein that had a huge impact on me was ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). As an illustration of how [the hope for radical human enhancement is fraught with](https://www.lesswrong.com/posts/EQkELCGiGQwvrrp3L/growing-up-is-hard) technical difficulties, the Great Teacher sketches a picture of just how difficult an actual male-to-female sex change would be.
+
+It would be hard to overstate how much of an impact this post had on me. I've previously linked it on this blog eight times. In June 2008, half a year before it was published, I encountered the [2004 mailing list post](http://lists.extropy.org/pipermail/extropy-chat/2004-September/008924.html) that was its predecessor. (The fact that I was trawling through old mailing list archives searching for content by the Great Teacher that I hadn't already read, tells you something about what a fanboy I am.) I immediately wrote to a friend: "[...] I cannot adequately talk about my feelings. Am I shocked, liberated, relieved, scared, angry, amused?"
+
+The argument goes: it might be easy to _imagine_ changing sex and refer to the idea in a short English sentence, but the real physical world has implementation details, and the implementation details aren't filled in by the short English sentence. The human body, including the brain, is an enormously complex integrated organism; there's no [plug-and-play](https://en.wikipedia.org/wiki/Plug_and_play) architecture by which you can just swap your brain into a new body and have everything work without re-mapping the connections in your motor cortex. And even that's not _really_ a sex change, as far as the whole integrated system is concerned—
+
+> Remapping the connections from the remapped somatic areas to the pleasure center will ... give you a vagina-shaped penis, more or less. That doesn't make you a woman. You'd still be attracted to girls, and no, that would not make you a lesbian; it would make you a normal, masculine man wearing a female body like a suit of clothing.
+
+But from the standpoint of my secret erotic fantasy, this is actually a _great_ outcome.
+
+[...]
+
+> If I fell asleep and woke up as a true woman—not in body, but in brain—I don't think I'd call her "me". The change is too sharp, if it happens all at once.
+
+In the comments, [I wrote](https://www.greaterwrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions/comment/4pttT7gQYLpfqCsNd)—
+
+> Is it cheating if you deliberately define your personal identity such that the answer is _No_?
+
+(To which I now realize the correct answer is: Yes, it's fucking cheating! The map is not the territory! You can't change the current _referent_ of "personal identity" with the semantic mind game of declaring that "personal identity" now refers to something else! How dumb do you think we are?! But more on this later.)