+But within a couple years, I also developed this beautiful pure sacred self-identity thing, where I was having a lot of non-sexual thoughts about being a girl. Just—little day-to-day thoughts. Like when I would write in my pocket notebook as my female analogue. Or when I would practice swirling the descenders on all the lowercase letters that had descenders [(_g_, _j_, _p_, _y_, _z_)](/images/handwritten_phrase_jazzy_puppy.jpg) because I thought it made my handwriting look more feminine. [TODO: another anecdote, clarify notebook]
+
+The beautiful pure sacred self-identity thing doesn't _feel_ explicitly erotic.
+
+[section: some sort of causal relationship between self-identity and erotic thing, but I assumed it was just my weird thing, not "trans", which I had heard of; never had any reason to formulate the hypothesis, "dysphoria"]
+
+[section: another thing about me: my psychological sex differences denialism]
+
+[section: Overcoming Bias rewrites my personality over the internet; gradually getting over sex differences denialism]
+
+The short story ["Failed Utopia #4-2"](https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2) portrays an almost-aligned superintelligence constructing a happiness-maximizing utopia for humans—except that because [evolution didn't design women and men to be optimal partners for each other](https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement), and the AI is prohibited from editing people's minds, the happiness-maximizing solution ends up splitting up the human species by sex and giving women and men their own _separate_ utopias, complete with artificially-synthesized romantic partners.
+
+At the time, [I expressed horror](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/PhiGnX7qKzzgn2aKb) at the idea in the comments section, because my quasi-religious psychological-sex-differences denialism required that I be horrified. But looking back eleven years later (my deconversion from my teenage religion being pretty thorough at this point, I think), the _argument makes sense_ (though you need an additional [handwave](https://tvtropes.org/pmwiki/pmwiki.php/Main/HandWave) to explain why the AI doesn't give every _individual_ their separate utopia—if existing women and men aren't optimal partners for each other, so too are individual men not optimal same-sex friends for each other).
+
+On my reading of the text, it is _significant_ that the AI-synthesized complements for men are given their own name, the _verthandi_, rather than just being referred to as women. The _verthandi_ may _look like_ women, they may be _approximately_ psychologically human, but the _detailed_ psychology of "superintelligently-engineered optimal romantic partner for a human male" is not going to come out of the distribution of actual human females, and judicious exercise of the [tenth virtue of precision](http://yudkowsky.net/rational/virtues/) demands that a _different word_ be coined for this hypothetical science-fictional type of person. Calling the _verthandi_ "women" would be _worse writing_; it would _fail to communicate_ the impact of what has taken place in the story.
+
+[section: reaction to "Changing Emotions"]
+
+[section: moving to Berkeley, realized that my thing wasn't different; seemed like something that a systematically-correct-reasoning community would be interested in getting right (maybe the 30% of the ones with penises are actually women thing does fit here after all? (I was going to omit it))]
+
+[section: had a lot of private conversations with people, and they weren't converging with me]
+
+[section: flipped out on Facebook; those discussions ended up getting derailed on a lot of appeal-to-arbitrariness conversation halters, appeal to "Categories Were Made"]
+
+[section: quit my job for gender-blogging]
+
+[...]
+
+So, I think this is a bad argument. But specifically, it's a bad argument for _completely general reasons that have nothing to do with gender_. And more specifically, completely general reasons that have been explained in exhaustive, _exhaustive_ detail in _our own foundational texts_.
+
+In 2008, the Great Teacher had this really amazing series of posts explaining the hidden probability-theoretic structure of language and cognition. Essentially, explaining _natural language as an AI capability_. What your brain is doing when you [see a tiger and say, "Yikes! A tiger!"](https://www.lesswrong.com/posts/dMCFk2n2ur8n62hqB/feel-the-meaning) is governed by the [simple math](https://www.lesswrong.com/posts/HnPEpu5eQWkbyAJCT/the-simple-math-of-everything) by which intelligent systems make observations, use those observations to assign category-membership, and use category-membership to make predictions about properties which have not yet been observed. _Words_, language, are an information-theoretically efficient _code_ for such systems to share cognitive content.
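+
+For concreteness, here's a minimal sketch of that observe-categorize-predict loop in Python (mine, not anything from the original posts; all the feature probabilities are made-up toy numbers standing in for whatever your visual cortex actually computes):
+
+```python
+# A toy sketch (not from the original posts) of the inference pattern
+# described above: observe some features, infer category-membership by
+# Bayes' theorem, then use the category to predict unobserved features.
+# All feature probabilities are made-up illustrative numbers.
+
+CATEGORIES = {
+    # P(feature | category) for a few binary features
+    "tiger":    {"striped": 0.95, "large": 0.90, "dangerous": 0.95},
+    "housecat": {"striped": 0.40, "large": 0.01, "dangerous": 0.05},
+}
+PRIOR = {"tiger": 0.5, "housecat": 0.5}
+
+def posterior(observed):
+    """P(category | observed features), assuming features are
+    conditionally independent given the category."""
+    scores = {}
+    for cat, feats in CATEGORIES.items():
+        p = PRIOR[cat]
+        for feature, present in observed.items():
+            p *= feats[feature] if present else (1 - feats[feature])
+        scores[cat] = p
+    total = sum(scores.values())
+    return {cat: p / total for cat, p in scores.items()}
+
+def predict(observed, unobserved):
+    """P(unobserved feature | observed features), marginalizing over
+    categories."""
+    post = posterior(observed)
+    return sum(post[cat] * CATEGORIES[cat][unobserved] for cat in CATEGORIES)
+
+seen = {"striped": True, "large": True}
+print(posterior(seen))             # overwhelmingly "tiger"
+print(predict(seen, "dangerous"))  # about 0.95: "Yikes! A tiger!"
+```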
+
+And these posts hammered home the point over and over and over and _over_ again—culminating in [the 37-part grand moral](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)—that word and category definitions are _not_ arbitrary, because there are optimality criteria that make some definitions _perform better_ than others as "cognitive technology"—
+
+> ["It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)
+
+> ["So that's another reason you can't 'define a word any way you like': You can't directly program concepts into someone else's brain."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)
+
+> ["When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)
+
+> ["There's an idea, which you may have noticed I hate, that 'you can define a word any way you like'."](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)
+
+> ["And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying 'I can define a word any way I like'."](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression)
+
+> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences)
+
+> ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations)
+
+> ["And this suggests another—yes, yet another—reason to be suspicious of the claim that 'you can define a word any way you like'. When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power."](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words)
+
+> ["I say all this, because the idea that 'You can X any way you like' is a huge obstacle to learning how to X wisely. 'It's a free country; I have a right to my own opinion' obstructs the art of finding truth. 'I can define a word any way I like' obstructs the art of carving reality at its joints. And even the sensible-sounding 'The labels we attach to words are arbitrary' obstructs awareness of compactness."](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes)
+
+> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace)
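+
+That last criterion can be made quantitative. As a toy illustration (my own numbers, not anything from the quoted post): if several binary properties are highly correlated because they all flow from membership in one cluster, then spending a word on the cluster shortens your expected message, compared to reporting each property as if it were independent:
+
+```python
+# Toy illustration (mine, not from the quoted posts) of the "shorten
+# your messages" criterion: naming cluster-membership once beats naming
+# each correlated property separately. All numbers are made up.
+import math
+
+def bits(p):
+    """Entropy in bits of a Bernoulli(p) random variable."""
+    if p in (0.0, 1.0):
+        return 0.0
+    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
+
+p_cluster = 0.1  # P(thing is in the cluster), e.g. P(tiger)
+p_match = 0.97   # P(each property agrees with cluster-membership)
+n = 4            # number of correlated binary properties
+
+# Marginal probability that any one property is present:
+p_prop = p_cluster * p_match + (1 - p_cluster) * (1 - p_match)
+
+# Scheme 1: report each property separately, ignoring the correlation.
+separate = n * bits(p_prop)
+
+# Scheme 2: report cluster-membership once, then each property given it.
+clustered = bits(p_cluster) + n * bits(1 - p_match)
+
+print(f"property by property: {separate:.2f} bits")   # about 2.16
+print(f"with a cluster word:  {clustered:.2f} bits")  # about 1.25
+```
+
+The gap widens as the number of properties grows or the correlations tighten: that's the sense in which a well-chosen category definition is a promise to shorten your messages.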
+
+Similarly, the Popular Author himself has written extensively about [the noncentral fallacy](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world), which he called _the worst argument in the world_:
+
+[...]
+
+You see the problem. If "You can't define a word any way you want" is a good philosophy lesson, it should be a good philosophy lesson _independently_ of the particular word in question and _independently_ of the current year. If we've _learned something new_ about the philosophy of language in the last ten years, that's _really interesting_ and I want to know what it is!