+An important caveat must be made: [different causal/etiological stories could be compatible with the same _descriptive_ taxonomy.](/2021/Feb/you-are-right-and-i-was-wrong-reply-to-tailcalled-on-causality/) You shouldn't confuse my mere ridicule with a serious and rigorous critique of the strongest possible case for "gender expression deprivation anxiety" as a theoretical entity, which would be more work. But hopefully I've shown _enough_ work here, that the reader can perhaps empathize with the temptation to resort to ridicule?
+
+Everyone's experience is different, but the human mind still has a _design_. If I hurt my ankle while running and I (knowing nothing of physiology or sports medicine) think it might be a stress fracture, a competent doctor (who's studied the literature and seen many more cases) is going to ask followup questions about my experiences to pin down whether it's stress fracture or a sprain. I can't be wrong about the fact _that_ my ankle hurts (that's a privileged first-person experience), but I can easily be wrong about my _theory about_ why my ankle hurts.
+
+Even if human brains vary more than human ankles, the basic epistemological principle applies to a mysterious desire to be female. The question is, do the trans women whose reports I'm considering have a relevantly _different_ psychological condition from mine, or do we have "the same" condition, and (at least) one of us is misdiagnosing it?
+
+The _safe_ answer—the answer that preserves everyone's current stories about themselves without any need for modification—is "different." That's what I thought before 2016. I think a lot of trans activists would say "the same". And on _that_ much, we can agree.
+
+How weasely am I being with these "approximately true" and "as a first approximation" qualifiers and hedges? I claim: not _more_ weasely than anyone who tries to reason about psychology given the knowledge and methodology our civilization has managed to accumulate.
+
+Reality has a single level (physics), but [our models of reality have multiple levels](https://www.lesswrong.com/posts/gRa5cWWBsZqdFvmqu/reductive-reference). To get maximally precise predictions about everything, you would have to model the underlying quarks, _&c._, which is impossible. (As [it is](https://www.lesswrong.com/posts/tPqQdLCuxanjhoaNs/reductionism) [written](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression): the map is not the territory, but you can't roll up the territory and put it in your glove compartment.)
+
+Psychology is very complicated; every human is their own unique snowflake, but it would be impossible to navigate the world using the "every human is their own unique _maximum-entropy_ snowflake; you can't make _any_ probabilistic inferences about someone's mind based on your experiences with other humans" theory. Even if someone were to _verbally endorse_ something like that—and at age sixteen, I might have—their brain is still going to go on to make inferences about people's minds using _some_ algorithm whose details aren't available to introspection. Much of this predictive machinery is going to be instinct bequeathed by natural selection (because predicting the behavior of conspecifics was very useful in the environment of evolutionary adaptedness), but some of it is the cultural accumulation of people's attempts to organize their experience into categories, clusters, diagnoses, taxons. (The cluster-learning capability is _also_ bequeathed by natural selection, of course, but it's worth distinguishing more "learned" from more "innate" content.)
+
+There could be situations in psychology where a good theory (not a perfect theory, but a good theory to the precision that our theories about engineering bridges are good) would be described by a 70-node causal graph, but it turns out that some of [the more "important" variables in the graph happen to anti-correlate with each other](https://surveyanon.wordpress.com/2019/10/27/the-mathematical-consequences-of-a-toy-model-of-gender-transition/), such that stupid humans who don't know how to discover the correct 70-node graph do manage to pattern-match their way to a two-type typology that actually is better, as a first approximation, than pretending not to have a theory. No one matches any particular clinical-profile stereotype _exactly_, but [the world makes more sense when you have language for theoretical abstractions](https://astralcodexten.substack.com/p/ontology-of-psychiatric-conditions) like ["comas"](https://slatestarcodex.com/2014/08/11/does-the-glasgow-coma-scale-exist-do-comas/) or "depression" or "bipolar disorder"—or "autogynephilia".
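+
+To illustrate the abstract point (with entirely made-up numbers—this is a toy simulation, not a claim about any real dataset or about the actual variables in the True Causal Graph), here's a minimal sketch of how an anti-correlation between two "important" variables can make a crude two-type classification dramatically outperform the no-theory, maximum-entropy baseline:
+
```python
import random

random.seed(0)

# Toy model with invented numbers (not real data): each case is generated by
# one of two latent etiologies, so the two most "important" variables
# (call them F and A) anti-correlate across the population.
def sample_case():
    if random.random() < 0.5:
        return 1, random.gauss(1.0, 0.6), random.gauss(-1.0, 0.6)  # type 1: high-F, low-A
    else:
        return 2, random.gauss(-1.0, 0.6), random.gauss(1.0, 0.6)  # type 2: low-F, high-A

cases = [sample_case() for _ in range(10_000)]

# The crude two-type "theory": classify each case by whichever variable dominates.
accuracy = sum((1 if f > a else 2) == etiology for etiology, f, a in cases) / len(cases)

# The "every mind is a unique maximum-entropy snowflake" baseline guesses at chance (50%).
print(f"two-type accuracy: {accuracy:.2f}")  # well above 0.5
```
+
+The point isn't the particular numbers (which are invented), but that a stupid sign-comparison rule recovers most of the structure that the latent two-cluster process put there—even though no individual case matches either stereotype exactly.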
+
+(In some sense it's a matter of "luck" when the relevant structure in the world happens to simplify so much; [friend of the blog](/tag/tailcalled/) Tailcalled argues that [there's no discrete typology for FtM](https://www.reddit.com/r/Blanchardianism/comments/jp9rmn/there_is_probably_no_ftm_typology/) as there is for the two types of MtF, because the various causes of gender problems in females vary more independently and aren't as stratified by age.)
+
+So, if some particular individual trans woman writes down her life story, and swears up and down that she doesn't match the feminine/early-onset type, but _also_ doesn't empathize at all with the experiences I've grouped under the concept of "autogynephilia", I don't have any definitive knockdown proof with which to accuse her of lying, because I don't _know_ her, and the true diversity of human psychology is no doubt richer and stranger than my fuzzy low-resolution model of it.
+
+But [the fuzzy low-resolution model is _way too good_](https://surveyanon.wordpress.com/2019/04/27/predictions-made-by-blanchards-typology/) not to be pointing to _some_ regularity in the real world, and I expect honest people who are exceptions not well-predicted by the model to at least notice how well it performs on all the _non_-exceptions. If you're a magical third type of trans woman (where, again, _magical_ is a term of art indicating phenomena not understood) who isn't super-feminine but whose identity definitely isn't ultimately rooted in a fetish, [you should be _confused_](https://www.lesswrong.com/posts/5JDkW4MYXit2CquLs/your-strength-as-a-rationalist) by the 232 upvotes on that /r/MtF comment about the "it's probably just a fetish" camp—if the person who wrote that comment has experiences like yours, why did they ever single out "it's probably just a fetish" [as a hypothesis to pay attention to in the first place](https://www.lesswrong.com/posts/X2AD2LgtKgkRNPj2a/privileging-the-hypothesis)? And there's allegedly a whole "camp" of these people? What could _that_ possibly be about?!
+
+I _do_ have a _lot_ of uncertainty about what the True Causal Graph looks like, even if it seems obvious that the two-type taxonomy coarsely approximates it. Gay femininity and autogynephilia are obviously very important nodes in the True Graph, but there's going to be more detail to the whole story: what _other_ factors influence people's decision to transition, including [incentives](/2017/Dec/lesser-known-demand-curves/) and cultural factors specific to a given place and time?
+
+Cultural attitudes towards men and maleness have shifted markedly in our feminist era. It feels gauche to say so, but ... as a result, conscientious boys taught to disdain the crimes of men may pick up an internalized misandry? I remember one night at the University in Santa Cruz when I had the insight that it was possible to make generalizations about groups of people while allowing for exceptions (in contrast to my previous stance that generalizations about people were _always morally wrong_)—and immediately, eagerly proclaimed that _men are terrible_.
+
+Or consider computer scientist Scott Aaronson's account (in his infamous [Comment 171](https://www.scottaaronson.com/blog/?p=2091#comment-326664)) that his "recurring fantasy, through this period, was to have been born a woman, or a gay man [...] [a]nything, really, other than the curse of having been born a heterosexual male, which [...] meant being consumed by desires that one couldn't act on or even admit without running the risk of becoming an objectifier or a stalker or a harasser or some other creature of the darkness."
+
+Or there's a piece that makes the rounds on social media occasionally: ["I Am A Transwoman. I Am In The Closet. I Am Not Coming Out"](https://medium.com/@jencoates/i-am-a-transwoman-i-am-in-the-closet-i-am-not-coming-out-4c2dd1907e42), which (in part) discusses the author's frustration at having their feelings and observations dismissed on account of being perceived as a cis male. "I hate that the only effective response I can give to 'boys are shit' is 'well I'm not a boy,'" the author laments. And: "Do I even _want_ to convince someone who will only listen to me when they're told by the rules that they have to see me as a girl?"
+
+(The "told by the rules that they have to see me" (!) phrasing in the current revision is _very_ telling; [the originally published version](https://archive.is/trslp) said "when they find out I'm a girl".)
+
+If boys are shit, and the rules say that you have to see someone as a girl if they _say_ they're a girl, that provides an incentive [on the margin](https://www.econlib.org/library/Enc/Marginalism.html) to disidentify with maleness. Like in another one of my teenage song-fragments—
+
+> _Look in the mirror
+> What's a_ white guy _doing there?
+> I'm just a spirit
+> I'm just a spirit
+> Floating in air, floating in air, floating in air!_
+
+This culturally-transmitted attitude could intensify the interpretation of autogynephilic attraction as an [ego-syntonic](https://en.wikipedia.org/wiki/Egosyntonic_and_egodystonic) beautiful pure sacred self-identity thing (rather than an ego-dystonic sex thing to be ashamed of), or be a source of gender dysphoria in males who aren't autogynephilic at all.
+
+To the extent that "cognitive" things like internalized misandry manifesting as cross-gender identification are common (or have _become_ more common in the recent cultural environment), maybe the two-type taxonomy isn't androphilic/autogynephilic so much as it is androphilic/"not-otherwise-specified": the early-onset type is very behaviorally distinct and has a very straightforward motive to transition (it would be _less_ weird not to); in contrast, it might not be as easy to distinguish autogynephilia from _other_ sources of gender problems in the grab-bag of all males showing up to the gender clinic for any other reason.
+
+Whatever the True Causal Graph looks like—however my remaining uncertainty turns out to resolve in the limit of sufficiently advanced psychological science—I think I _obviously_ have more than enough evidence to reject the mainstream ["inner sense of gender"](https://www.drmaciver.com/2019/05/the-inner-sense-of-gender/) story as _not adding up_.
+
+Okay, so the public narrative about transness is obviously, _obviously_ false. That's a problem, because almost no matter what you want, true beliefs are more useful than false beliefs for making decisions that get you what you want.
+
+Fortunately, Yudkowsky's writing had brought together a whole community of brilliant people dedicated to refining the art of human rationality—the methods of acquiring true beliefs and using them to make decisions that get you what you want. So now that I _know_ the public narrative is obviously false, and that I have the outlines of a better theory (even though I could use a lot of help pinning down the details, and I don't know what the social policy implications are, because the optimal policy computation is a complicated value trade-off), all I _should_ have to do is carefully explain why the public narrative is delusional, and then because my arguments are so much better, all the intellectually serious people will either agree with me (in public), or at least be eager to _clarify_ (in public) exactly where they disagree and what their alternative theory is, so that we can move the state of humanity's knowledge forward together, in order to help the great common task of optimizing the universe in accordance with humane values.
+
+Of course, this is kind of a niche topic—if you're not a male with this psychological condition, or a woman who doesn't want to share all female-only spaces with them, you probably have no reason to care—but there are a _lot_ of males with this psychological condition around here! If this whole "rationality" subculture isn't completely fake, then we should be interested in getting the correct answers in public _for ourselves_.
+
+Men who fantasize about being women do not particularly resemble actual women! We just—don't? This seems kind of obvious, really? _Telling the difference between fantasy and reality_ is kind of an important life skill?! Notwithstanding that some males might want to make use of medical interventions like surgery and hormone replacement therapy to become facsimiles of women as far as our existing technology can manage, and that a free and enlightened transhumanist Society should support that as an option—and notwithstanding that _she_ is obviously the correct pronoun for people who _look_ like women—it's probably going to be harder for people to figure out what the optimal decisions are if no one is allowed to use language like "actual women" that clearly distinguishes the original thing from imperfect facsimiles?!
+
+The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2021 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialism that counts unpopularity or hurt feelings as costs), also [can't get heliocentrism right in 1633](https://en.wikipedia.org/wiki/Galileo_affair) [_for the same reason_](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to)—and I really doubt it can get AI alignment theory right in 2041.
+
+Or at least—even if there are things we can't talk about in public for consequentialist reasons and there's nothing to be done about it, you would hope that the censorship wouldn't distort our beliefs about the things we _can_ talk about. Yudkowsky had written about [dark side epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) and [contagious lies](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies): trying to protect a false belief doesn't just mean being wrong about that one thing, it also gives you, on the object level, an incentive to be wrong about anything that would _imply_ the falsity of the protected belief—and, on the meta level, an incentive to be wrong _about epistemology itself_, about how "implying" and "falsity" work.
+
+So, a striking thing about my series of increasingly frustrating private conversations and subsequent public Facebook meltdown (the stress from which soon landed me in psychiatric jail, but that's [another](/2017/Mar/fresh-princess/) [story](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/)) was the tendency for some threads of conversation to get _derailed_ on some variation of, "Well, the word _woman_ doesn't necessarily mean that," often with a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) by Scott Alexander, the _second_ most prominent writer in our robot cult.
+
+So, this _really_ wasn't what I was trying to talk about; _I_ thought I was trying to talk about autogynephilia as an _empirical_ theory in psychology, the truth or falsity of which obviously cannot be altered by changing the meanings of words. Psychology is a complicated empirical science: no matter how "obvious" I might think something is, I have to admit that I could be wrong—not just as a formal profession of modesty, but _actually_ wrong in the real world.
+
+But this "I can define the word _woman_ any way I want" mind game? _That_ part was _absolutely_ clear-cut. That part of the argument, I knew I could win. [We had a whole Sequence about this](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) back in 'aught-eight, in which Yudkowsky pounded home this _exact_ point _over and over and over again_, that word and category definitions are _not_ arbitrary, because there are criteria that make some definitions _perform better_ than others as "cognitive technology"—
+
+> ["It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)
+
+> ["So that's another reason you can't 'define a word any way you like': You can't directly program concepts into someone else's brain."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)
+
+> ["When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)
+
+> ["There's an idea, which you may have noticed I hate, that 'you can define a word any way you like'."](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)
+
+> ["And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying 'I can define a word any way I like'."](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression)
+
+> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences)
+
+> ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations)
+
+> ["And this suggests another—yes, yet another—reason to be suspicious of the claim that 'you can define a word any way you like'. When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power."](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words)
+
+> ["I say all this, because the idea that 'You can X any way you like' is a huge obstacle to learning how to X wisely. 'It's a free country; I have a right to my own opinion' obstructs the art of finding truth. 'I can define a word any way I like' obstructs the art of carving reality at its joints. And even the sensible-sounding 'The labels we attach to words are arbitrary' obstructs awareness of compactness."](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes)
+
+> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace)
+
+So, because I trusted people in my robot cult to be dealing in good faith rather than fucking with me because of their political incentives, I took the bait. I ended up spending three years of my life re-explaining the relevant philosophy-of-language issues in exhaustive, _exhaustive_ detail.
+
+At first I did this in the object-level context of gender on this blog, in ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), and the ["Reply on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/).
+
+Later, after [Eliezer Yudkowsky joined in the mind games on Twitter in November 2018](https://twitter.com/ESYudkowsky/status/1067183500216811521), I _flipped the fuck out_, and ended up doing more [strictly abstract philosophy-of-language work](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [on](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests) [the](https://www.lesswrong.com/posts/fmA2GJwZzYtkrAKYJ/algorithms-of-deception) [robot](https://www.lesswrong.com/posts/4hLcbXaqudM9wSeor/philosophy-in-the-darkest-timeline-basics-of-the-evolution)-[cult](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist) [blog](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception).
+
+An important thing to appreciate is that the philosophical point I was trying to make has _absolutely nothing to do with gender_. In 2008, Yudkowsky had explained (with math!!) that _for all_ nouns N, you can't define _N_ any way you want, because _useful_ definitions need to "carve reality at the joints."
+
+[TODO:
+
+I got a little bit of pushback due to the perception
+
+_Everyone else shot first_.
+
+"You can't define a word any way you want" and "You can" are both true in different senses, but if you opportunistically choose which one to emphasize
+
+_I need the right answer in order to decide whether or not to cut my dick off_—if I were dumb enough to believe Yudkowsky's insinuation that pronouns don't have truth conditions, I might have made a worse decision
+
+If rationality is useful for anything, it should be useful for practical life decisions like this
+
+the hypocrisy of "Against Lie Inflation"
+
+(Note that Yudkowsky [would later clarify his position in September 2020](https://www.facebook.com/yudkowsky/posts/10158853851009228).)
+
+]
+
+Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?"
+
+But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_.
+
+If you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. Obviously, the safety of the world does not _directly_ depend on being able to think clearly about trans issues. Similarly, the safety of a coal mine for humans does not _directly_ depend on [whether it's safe for canaries](https://en.wiktionary.org/wiki/canary_in_a_coal_mine): the dead canaries are just _evidence about_ properties of the mine relevant to human health. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".)