+But within a couple years, I also developed this beautiful pure sacred self-identity thing, where I was having a lot of non-sexual thoughts about being a girl. Just—little day-to-day thoughts. Like when I would write in my pocket notebook as my female analogue. Or when I would practice swirling the descenders on all the lowercase letters that had descenders [(_g_, _j_, _p_, _y_, _z_)](/images/handwritten_phrase_jazzy_puppy.jpg) because I thought my handwriting would look more feminine. [TODO: another anecdote, clarify notebook]
+
+The beautiful pure sacred self-identity thing doesn't _feel_ explicitly erotic.
+
+[section: some sort of causal relationship between self-identity and erotic thing, but I assumed it was just my weird thing, not "trans", which I had heard of; never had any reason to formulate the hypothesis, "dysphoria"]
+
+[section: another thing about me: my psychological sex differences denialism]
+
+[section: Overcoming Bias rewrites my personality over the internet; gradually getting over sex differences denialism]
+
+The short story ["Failed Utopia #4-2"](https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2) portrays an almost-aligned superintelligence constructing a happiness-maximizing utopia for humans—except that because [evolution didn't design women and men to be optimal partners for each other](https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement), and the AI is prohibited from editing people's minds, the happiness-maximizing solution ends up splitting up the human species by sex and giving women and men their own _separate_ utopias, complete with artificially-synthesized romantic partners.
+
+At the time, [I expressed horror](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/PhiGnX7qKzzgn2aKb) at the idea in the comments section, because my quasi-religious psychological-sex-differences denialism required that I be horrified. But looking back eleven years later (my deconversion from my teenage religion being pretty thorough at this point, I think), the _argument makes sense_ (though you need an additional [handwave](https://tvtropes.org/pmwiki/pmwiki.php/Main/HandWave) to explain why the AI doesn't give every _individual_ their separate utopia—if existing women and men aren't optimal partners for each other, so too are individual men not optimal same-sex friends for each other).
+
+On my reading of the text, it is _significant_ that the AI-synthesized complements for men are given their own name, the _verthandi_, rather than just being referred to as women. The _verthandi_ may _look like_ women, they may be _approximately_ psychologically human, but the _detailed_ psychology of "superintelligently-engineered optimal romantic partner for a human male" is not going to come out of the distribution of actual human females, and judicious exercise of the [tenth virtue of precision](http://yudkowsky.net/rational/virtues/) demands that a _different word_ be coined for this hypothetical science-fictional type of person. Calling the _verthandi_ "women" would be _worse writing_; it would _fail to communicate_ the impact of what has taken place in the story.
+
+Another post in this vein that had a huge impact on me was ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). As an illustration of how [the hope for radical human enhancement is fraught with](https://www.lesswrong.com/posts/EQkELCGiGQwvrrp3L/growing-up-is-hard) technical difficulties, the Great Teacher sketches a picture of just how difficult an actual male-to-female sex change would be.
+
+It would be hard to overstate how much of an impact this post had on me. I've previously linked it on this blog eight times. In June 2008, half a year before it was published, I encountered the [2004 mailing list post](http://lists.extropy.org/pipermail/extropy-chat/2004-September/008924.html) that was its predecessor. (The fact that I was trawling through old mailing list archives searching for content by the Great Teacher that I hadn't already read tells you something about what a fanboy I am.) I immediately wrote to a friend: "[...] I cannot adequately talk about my feelings. Am I shocked, liberated, relieved, scared, angry, amused?"
+
+The argument goes: it might be easy to _imagine_ changing sex and refer to the idea in a short English sentence, but the real physical world has implementation details, and the implementation details aren't filled in by the short English sentence. The human body, including the brain, is an enormously complex integrated organism; there's no [plug-and-play](https://en.wikipedia.org/wiki/Plug_and_play) architecture by which you can just swap your brain into a new body and have everything work without re-mapping the connections in your motor cortex. And even that's not _really_ a sex change, as far as the whole integrated system is concerned—
+
+> Remapping the connections from the remapped somatic areas to the pleasure center will ... give you a vagina-shaped penis, more or less. That doesn't make you a woman. You'd still be attracted to girls, and no, that would not make you a lesbian; it would make you a normal, masculine man wearing a female body like a suit of clothing.
+
+But from the standpoint of my secret erotic fantasy, this is actually a _great_ outcome.
+
+[...]
+
+> If I fell asleep and woke up as a true woman—not in body, but in brain—I don't think I'd call her "me". The change is too sharp, if it happens all at once.
+
+In the comments, [I wrote](https://www.greaterwrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions/comment/4pttT7gQYLpfqCsNd)—
+
+> Is it cheating if you deliberately define your personal identity such that the answer is _No_?
+
+(To which I now realize the correct answer is: Yes, it's fucking cheating! The map is not the territory! You can't change the current _referent_ of "personal identity" with the semantic mind game of declaring that "personal identity" now refers to something else! How dumb do you think we are?! But more on this later.)
+
+[section: "50% of the ones with penises", moving to Berkeley, realized that my thing wasn't different; seemed like something that a systematically-correct-reasoning community would be interested in getting right (maybe the 30% of the ones with penises are actually women thing does fit here after all? (I was going to omit it))]
+
+[section: had a lot of private conversations with people, and they weren't converging with me]
+
+[section: flipped out on Facebook; those discussions ended up getting derailed on a lot of appeal-to-arbitrariness conversation halters, appeal to "Categories Were Made"]
+
+So, I think this is a bad argument. But specifically, it's a bad argument for _completely general reasons that have nothing to do with gender_. And more specifically, completely general reasons that have been explained in exhaustive, _exhaustive_ detail in _our own foundational texts_—including some material that I _know_ the Popular Author is intimately familiar with, because _he fucking wrote it_.
+
+[section: noncentral-fallacy / motte-and-bailey stuff, other posts about making predictions https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world ]
+
+The "national borders" metaphor is particularly galling if—[unlike](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) [the](https://slatestarcodex.com/2013/06/30/the-lottery-of-fascinations/) Popular Author—you _actually know the math_.
+
+If I have a "blegg" concept for blue egg-shaped objects—uh, this is [our](https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-queries) [standard](https://www.lesswrong.com/posts/yFDKvfN6D87Tf5J9f/neural-categories) [example](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside), just [roll with it](http://unremediatedgender.space/2018/Feb/blegg-mode/)—what that _means_ is that (at some appropriate level of abstraction) there's a little [Bayesian network](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models) in my head with "blueness" and "eggness" observation nodes hooked up to a central "blegg" category-membership node, such that if I see a black-and-white photograph of an egg-shaped object, I can use the observation of its shape to update my beliefs about its blegg-category-membership, and then use my beliefs about category-membership to update my beliefs about its blueness. This cognitive algorithm is useful if we live in a world of objects that have the appropriate statistical structure—if the joint distribution P(blegg, blueness, eggness) approximately factorizes as P(blegg)·P(blueness|blegg)·P(eggness|blegg).
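That little Bayesian network fits in a few lines of Python. The numbers here are made up for illustration (the original posts don't specify any parameters), but the inference pattern—observe shape, update category-membership, predict color—is exactly the one described above.

```python
# Toy version of the "blegg" network: a central category node, with
# "blueness" and "eggness" as observations that are conditionally
# independent given the category. All parameters are illustrative.
P_BLEGG = 0.5                      # prior P(blegg)
P_BLUE = {True: 0.9, False: 0.1}   # P(blue | blegg?)
P_EGG = {True: 0.9, False: 0.2}    # P(egg-shaped | blegg?)

# Observe egg-shapedness (the black-and-white photograph):
# update the category-membership belief by Bayes' rule.
joint = {c: (P_BLEGG if c else 1 - P_BLEGG) * P_EGG[c] for c in (True, False)}
total = sum(joint.values())
p_blegg_given_egg = joint[True] / total

# Then use the category belief to predict the not-yet-observed color.
p_blue_given_egg = sum(P_BLUE[c] * joint[c] / total for c in (True, False))

print(f"P(blegg | egg-shaped) = {p_blegg_given_egg:.3f}")  # 0.818
print(f"P(blue | egg-shaped)  = {p_blue_given_egg:.3f}")   # 0.755
```

Seeing the egg shape raises the probability of blegg-hood from 0.5 to about 0.82, which in turn raises the predicted probability of blueness from 0.5 to about 0.75—the category node is doing real predictive work.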
+
+"Category boundaries" are just a _visual metaphor_ for the math: the set of things I'll classify as a blegg with probability greater than _p_ is conveniently _visualized_ as an area with a boundary in blueness–eggness space. If you _don't understand_ the relevant math and philosophy—or are pretending not to understand only and exactly when it's politically convenient—you might think you can redraw the boundary any way you want, but you can't, because the "boundary" visualization is _derived from_ a statistical model which corresponds to _empirically testable predictions about the real world_. Fucking with category boundaries corresponds to fucking with the model, which corresponds to fucking with your ability to interpret sensory data. The only two reasons you could _possibly_ want to do this would be to wirehead yourself (corrupt your map to make the territory look nicer than it really is, making yourself _feel_ happier at the cost of sabotaging your ability to navigate the real world) or as information warfare (corrupt shared maps to sabotage other agents' ability to navigate the real world, in a way such that you benefit from their confusion).
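One way to see that the "boundary" isn't free to move: simulate objects from the toy distribution above (with the same made-up parameters, which are my own assumption, not from the original posts) and compare how much predictive work the category label does. With the model-derived boundary, the "blegg" label carries information about the unobserved color; gerrymander the boundary—here, the degenerate case of sweeping everything inside it—and the label predicts nothing.

```python
import random

random.seed(0)

# Same illustrative parameters as the toy blegg network.
P_BLEGG = 0.5
P_BLUE = {True: 0.9, False: 0.1}   # P(blue | category)
P_EGG = {True: 0.9, False: 0.2}    # P(egg-shaped | category)

def sample_object():
    """Sample (is_blue, is_egg_shaped) from the generative model."""
    cat = random.random() < P_BLEGG
    return (random.random() < P_BLUE[cat], random.random() < P_EGG[cat])

objects = [sample_object() for _ in range(100_000)]

# Model-derived boundary: classify egg-shaped things as bleggs.
honest = [blue for blue, egg in objects if egg]
# Gerrymandered "boundary": classify everything as a blegg.
redrawn = [blue for blue, egg in objects]

print(sum(honest) / len(honest))    # ~0.75: the label predicts blueness
print(sum(redrawn) / len(redrawn))  # ~0.50: the label predicts nothing
```

The honest label's blueness rate (~0.75) matches the posterior computed from the model, because the boundary was _derived from_ the model; the redrawn label's rate collapses to the base rate (~0.50), which is what "sabotaging your ability to interpret sensory data" looks like in numbers.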
+
+[section: started a pseudonymous secret blog; one of the things I focused on was the philosophy-of-language thing, because that seemed _really_ nailed down: "...To Make Predictions" was the crowning achievement of my sabbatical, and I was also really proud of "Reply on Adult Human Females" a few months later. And that was going OK, until ...]
+
+[section: hill of meaning in defense of validity, and I _flipped the fuck out_]
+
+In 2008, the Great Teacher had this really amazing series of posts explaining the hidden probability-theoretic structure of language and cognition. Essentially, explaining _natural language as an AI capability_. What your brain is doing when you [see a tiger and say, "Yikes! A tiger!"](https://www.lesswrong.com/posts/dMCFk2n2ur8n62hqB/feel-the-meaning) is governed by the [simple math](https://www.lesswrong.com/posts/HnPEpu5eQWkbyAJCT/the-simple-math-of-everything) by which intelligent systems make observations, use those observations to assign category-membership, and use category-membership to make predictions about properties which have not yet been observed. _Words_, language, are an information-theoretically efficient _code_ for such systems to share cognitive content.
+
+And these posts hammered home the point over and over and over and _over_ again—culminating in [the 37-part grand moral](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)—that word and category definitions are _not_ arbitrary, because there are optimality criteria that make some definitions _perform better_ than others as "cognitive technology"—
+
+> ["It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)
+
+> ["So that's another reason you can't 'define a word any way you like': You can't directly program concepts into someone else's brain."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)
+
+> ["When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)
+
+> ["There's an idea, which you may have noticed I hate, that 'you can define a word any way you like'."](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)
+
+> ["And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying 'I can define a word any way I like'."](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression)
+
+> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences)
+
+> ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations)
+
+> ["And this suggests another—yes, yet another—reason to be suspicious of the claim that 'you can define a word any way you like'. When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power."](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words)
+
+> ["I say all this, because the idea that 'You can X any way you like' is a huge obstacle to learning how to X wisely. 'It's a free country; I have a right to my own opinion' obstructs the art of finding truth. 'I can define a word any way I like' obstructs the art of carving reality at its joints. And even the sensible-sounding 'The labels we attach to words are arbitrary' obstructs awareness of compactness."](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes)
+
+> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace)
+
+[...]
+
+You see the problem. If "You can't define a word any way you want" is a good philosophy lesson, it should be a good philosophy lesson _independently_ of the particular word in question and _independently_ of the current year. If we've _learned something new_ about the philosophy of language in the last ten years, that's _really interesting_ and I want to know what it is!