+> I ought to accept an unexpected man or two deep inside the conceptual boundaries of what would normally be considered female if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.
+
+This is wrong because categories exist in our model of the world _in order to_ capture empirical regularities in the world itself: the map is supposed to _reflect_ the territory, and there _are_ "rules of rationality" governing what kinds of word and category usages correspond to correct probabilistic inferences. Yudkowsky had written a whole Sequence about this, ["A Human's Guide to Words"](https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb). Alexander cites [a post](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside) from that Sequence in support of the (true) point about how categories are "in the map" ... but if you actually read the Sequence, another point that Yudkowsky pounds home over and over is that word and category definitions are nevertheless _not_ arbitrary: you can't define a word any way you want, because there are [at least 37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)—principles that make some definitions perform better than others as "cognitive technology."
+
+In the case of Alexander's bogus argument about gender categories, the relevant principle ([#30](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) on [the list of 37](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)) is that if you group things together in your map that aren't actually similar in the territory, you're going to make bad inferences.
+
+Crucially, this is a general point about how language itself works that has _nothing to do with gender_. No matter what you believe about politically-controversial empirical questions, intellectually honest people should be able to agree that "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if [positive consequence]" is not the correct philosophy of language, _independently of the particular values of X and Y_.
+
+This wasn't even what I was trying to talk to people about. _I_ thought I was trying to talk about autogynephilia as an empirical theory of the psychology of late-onset gender dysphoria in males, the truth or falsity of which cannot be altered by changing the meanings of words. But at this point, I still trusted people in my robot cult to be basically intellectually honest, rather than slaves to their political incentives, so I endeavored to respond to the category-boundary argument under the assumption that it was an intellectually serious argument that someone could honestly be confused about.
+
+When I took a year off from dayjobbing from March 2017 to March 2018 to have more time to study and work on this blog, the capstone of my sabbatical was an exhaustive response to Alexander, ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (which Alexander [graciously included in his next links post](https://archive.ph/irpfd#selection-1625.53-1629.55)). A few months later, I followed it with ["Reply to _The Unit of Caring_ on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/), responding to a similar argument from soon-to-be _Vox_ journalist Kelsey Piper, then writing as _The Unit of Caring_ on Tumblr.
+
+I'm proud of those posts. I think Alexander's and Piper's arguments were incredibly dumb, and that with a lot of effort, I did a pretty good job of explaining why to anyone who was interested and didn't, at some level, prefer not to understand.
+
+Of course, a pretty good job of explaining by one niche blogger wasn't going to put much of a dent in the culture, which is the sum of everyone's blogposts; despite the mild boost from the _Slate Star Codex_ links post, my megaphone just wasn't very big. I was disappointed with the limited impact of my work, but not to the point of bearing much hostility to "the community." People had made their arguments, and I had made mine; I didn't think I was entitled to anything more than that.
+
+Really, that should have been the end of the story. Not much of a story at all. If I hadn't been further provoked, I would have still kept up this blog, and I still would have ended up arguing about gender with people sometimes, but this personal obsession wouldn't have been the occasion of a robot-cult religious civil war involving other people whom you'd expect to have much more important things to do with their time.
+
+The _casus belli_ for the religious civil war happened on 28 November 2018. I was at my new dayjob's company offsite event in Austin, Texas. Coincidentally, I had already spent much of the previous two days (since just before the plane to Austin took off) arguing trans issues with other "rationalists" on Discord.
+
+Just that month, I had started a Twitter account using my real name, inspired in an odd way by the suffocating [wokeness of the Rust open-source software scene](/2018/Oct/sticker-prices/) where I [occasionally contributed diagnostics patches to the compiler](https://github.com/rust-lang/rust/commits?author=zackmdavis). My secret plan/fantasy was to get more famous and established in the Rust world (compiler team membership or an accepted conference talk, preferably both), get some corresponding Twitter followers, and _then_ bust out the [@BlanchardPhD](https://twitter.com/BlanchardPhD) retweets and links to this blog. In the median case, absolutely nothing would happen (probably because I failed at being famous), but I saw an interesting tail of scenarios in which I'd get to be a test case in [the Code of Conduct wars](https://techcrunch.com/2016/03/05/how-we-may-mesh/).
+
+So, now having a Twitter account, I was browsing Twitter in the bedroom at the rental house for the dayjob retreat when I happened to come across [this thread by @ESYudkowsky](https://twitter.com/ESYudkowsky/status/1067183500216811521):
+
+> Some people I usually respect for their willingness to publicly die on a hill of facts, now seem to be talking as if pronouns are facts, or as if who uses what bathroom is necessarily a factual statement about chromosomes. Come on, you know the distinction better than that!
+>
+> _Even if_ somebody went around saying, "I demand you call me 'she' and furthermore I claim to have two X chromosomes!", which none of my trans colleagues have ever said to me by the way, it still isn't a question-of-empirical-fact whether she should be called "she". It's an act.
+>
+> In saying this, I am not taking a stand for or against any Twitter policies. I am making a stand on a hill of meaning in defense of validity, about the distinction between what is and isn't a stand on a hill of facts in defense of truth.
+>
+> I will never stand against those who stand against lies. But changing your name, asking people to address you by a different pronoun, and getting sex reassignment surgery, Is. Not. Lying. You are _ontologically_ confused if you think those acts are false assertions.
+
+Some of the replies tried to explain the obvious problem—and [Yudkowsky kept refusing to understand](https://twitter.com/ESYudkowsky/status/1067291243728650243):
+
+> Using language in a way _you_ dislike, openly and explicitly and with public focus on the language and its meaning, is not lying. The proposition you claim false (chromosomes?) is not what the speech is meant to convey—and this is known to everyone involved, it is not a secret.
+>
+> Now, maybe as a matter of policy, you want to make a case for language being used a certain way. Well, that's a separate debate then. But you're not making a stand for Truth in doing so, and your opponents aren't tricking anyone or trying to.
+
+—[repeatedly](https://twitter.com/ESYudkowsky/status/1067198993485058048):
+
+> You're mistaken about what the word means to you, I demonstrate thus: [https://en.wikipedia.org/wiki/XX_male_syndrome](https://en.wikipedia.org/wiki/XX_male_syndrome)
+>
+> But even ignoring that, you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning.
+
+Dear reader, this is the moment where I _flipped out_. Let me explain.
+
+This "hill of meaning in defense of validity" proclamation was such a striking contrast to the Eliezer Yudkowsky I remembered—the Eliezer Yudkowsky I had variously described as having "taught me everything I know" and "rewritten my personality over the internet"—who didn't hesitate to criticize uses of language that he thought were failing to "carve reality at the joints", even going so far as to [call them "wrong"](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong):
+
+> [S]aying "There's no way my choice of X can be 'wrong'" is nearly always an error in practice, whatever the theory. You can always be wrong. Even when it's theoretically impossible to be wrong, you can still be wrong. There is never a Get-Out-Of-Jail-Free card for anything you do. That's life.
+
+[Similarly](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary):
+
+> Once upon a time it was thought that the word "fish" included dolphins. Now you could play the oh-so-clever arguer, and say, "The list: {Salmon, guppies, sharks, dolphins, trout} is just a list—you can't say that a list is _wrong_. I can prove in set theory that this list exists. So my definition of _fish_, which is simply this extensional list, cannot possibly be 'wrong' as you claim."
+>
+> Or you could stop playing nitwit games and admit that dolphins don't belong on the fish list.
+>
+> You come up with a list of things that feel similar, and take a guess at why this is so. But when you finally discover what they really have in common, it may turn out that your guess was wrong. It may even turn out that your list was wrong.
+>
+> You cannot hide behind a comforting shield of correct-by-definition. Both extensional definitions and intensional definitions can be wrong, can fail to carve reality at the joints.
+
+One could argue that this "Words can be wrong when your definition draws a boundary around things that don't really belong together" moral didn't apply to Yudkowsky's new Tweets, which only mentioned pronouns and bathroom policies, not the [extensions](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions) of common nouns.
+
+But this seems pretty unsatisfying in the context of Yudkowsky's claim to ["not [be] taking a stand for or against any Twitter policies"](https://twitter.com/ESYudkowsky/status/1067185907843756032). One of the Tweets that had recently led to radical feminist Meghan Murphy getting [kicked off the platform](https://archive.ph/RSVDp) read simply, ["Men aren't women tho."](https://archive.is/ppV86) This doesn't seem like a policy claim; rather, Murphy was using common language to express the fact-claim that members of the natural category of adult human males, are not, in fact, members of the natural category of adult human females.
+
+Thus, if the extension of common words like "woman" and "man" is an issue of epistemic importance that rationalists should care about, then presumably so is Twitter's anti-misgendering policy—and if it _isn't_ (because you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning), then I wasn't sure what was left of the "Human's Guide to Words" Sequence if the [37-part grand moral](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) needed to be retracted.
+
+I think I _am_ standing in defense of truth when I have an argument for _why_ my preferred word usage does a better job at carving reality at the joints, and the one bringing my usage explicitly into question does not. As such, I didn't see the practical difference between "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning," and "I can define a word any way I want." About which, again, an earlier Eliezer Yudkowsky had written:
+
+> ["It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)
+>
+> ["So that's another reason you can't 'define a word any way you like': You can't directly program concepts into someone else's brain."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)
+>
+> ["When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)
+>
+> ["There's an idea, which you may have noticed I hate, that 'you can define a word any way you like'."](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)
+>
+> ["And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying 'I can define a word any way I like'."](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression)
+>
+> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences)
+>
+> ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations)
+>
+> ["And this suggests another—yes, yet another—reason to be suspicious of the claim that 'you can define a word any way you like'. When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power."](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words)
+>
+> ["I say all this, because the idea that 'You can X any way you like' is a huge obstacle to learning how to X wisely. 'It's a free country; I have a right to my own opinion' obstructs the art of finding truth. 'I can define a word any way I like' obstructs the art of carving reality at its joints. And even the sensible-sounding 'The labels we attach to words are arbitrary' obstructs awareness of compactness."](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes)
+>
+> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace)
+
+One could argue that I was unfairly interpreting Yudkowsky's Tweets as having a broader scope than was intended—that Yudkowsky _only_ meant to slap down the false claim that using _he_ for someone with a Y chromosome is "lying", without intending any broader implications about trans issues or the philosophy of language. It wouldn't be realistic or fair to expect every public figure to host an exhaustive debate on all related issues every time they encounter a fallacy they want to Tweet about.
+
+However, I don't think this "narrow" reading is the most natural one. Yudkowsky had previously written of what he called [the fourth virtue of evenness](http://yudkowsky.net/rational/virtues/): "If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider." He had likewise written [on reversed stupidity](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence) (bolding mine):
+
+> **To argue against an idea honestly, you should argue against the best arguments of the strongest advocates**. Arguing against weaker advocates proves _nothing_, because even the strongest idea will attract weak advocates.
+
+Relatedly, Scott Alexander had written about how ["weak men are superweapons"](https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/): speakers often selectively draw attention to the worst arguments in favor of a position in an attempt to socially discredit people who have better arguments (which the speaker ignores). In the same way, by just slapping down a weak man from the "anti-trans" political coalition without saying anything else in a similarly prominent location, Yudkowsky was liable to mislead his faithful students into thinking that there were no better arguments from the "anti-trans" side.
+
+To be sure, it imposes a cost on speakers to not be able to Tweet about one specific annoying fallacy and then move on with their lives without the need for [endless disclaimers](http://www.overcomingbias.com/2008/06/against-disclai.html) about related but stronger arguments that they're not addressing. But the fact that [Yudkowsky disclaimed that](https://twitter.com/ESYudkowsky/status/1067185907843756032) he wasn't taking a stand for or against Twitter's anti-misgendering policy demonstrates that he _didn't_ have an aversion to spending a few extra words to prevent the most common misunderstandings.
+
+Given that, it's hard to read the Tweets Yudkowsky published as anything other than an attempt to intimidate and delegitimize people who want to use language to reason about sex rather than gender identity. For example, deeper in the thread, [Yudkowsky wrote](https://twitter.com/ESYudkowsky/status/1067490362225156096):
+
+> The more technology advances, the further we can move people towards where they say they want to be in sexspace. Having said this we've said all the facts. Who competes in sports segregated around an Aristotelian binary is a policy question (that I personally find very humorous).
+
+Sure, _in the limit of arbitrarily advanced technology_, everyone could be exactly where they wanted to be in sexspace. Having said this, we have _not_ said all the facts relevant to decisionmaking in our world, where _we do not have arbitrarily advanced technology_ (as Yudkowsky well knew, having [written a post about how technically infeasible an actual sex change would be](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions)). As Yudkowsky [acknowledged in the previous Tweet](https://twitter.com/ESYudkowsky/status/1067488844122021888), "Hormone therapy changes some things and leaves others constant." The existence of hormone replacement therapy does not itself take us into the glorious transhumanist future where everyone is the sex they say they are.
+
+The reason for sex-segregated sports leagues is that sport-relevant multivariate trait distributions of female bodies and male bodies are different: men are taller, stronger, and faster. If you just had one integrated league, females wouldn't be competitive (in the vast majority of sports, with a few exceptions [like ultra-distance swimming](https://www.swimmingworldmagazine.com/news/why-women-have-beaten-men-in-marathon-swimming/) that happen to sample an unusually female-favorable corner of sportspace).
+
+Given the empirical reality of the different trait distributions, "Who are the best athletes _among females_?" is a natural question for people to be interested in and want separate sports leagues to determine. Including male people in female sports leagues undermines the point of having a separate female league, and [hormone replacement therapy after puberty](https://link.springer.com/article/10.1007/s40279-020-01389-3) [doesn't substantially change the picture here](https://bjsm.bmj.com/content/55/15/865).[^auto-race-analogy]
+
+[^auto-race-analogy]: Similarly, in automobile races, you want rules to enforce that all competitors have the same type of car, for some commonsense operationalization of "the same type", because a race between a sports car and a [moped](https://en.wikipedia.org/wiki/Moped) would be mostly measuring who has the sports car, rather than who's the better racer.
+
+Yudkowsky's suggestion that an ignorant commitment to an "Aristotelian binary" is the main reason someone might care about the integrity of women's sports is an absurd strawman. This just isn't something any scientifically literate person would write if they had actually thought about the issue at all, as opposed to having first decided (consciously or not) to bolster their reputation among progressives by dunking on transphobes on Twitter, and then wielding their philosophy knowledge in the service of that political goal. The relevant facts are not subtle, even if most people don't have the fancy vocabulary to talk about them in terms of "multivariate trait distributions."
+
+I'm picking on the "sports segregated around an Aristotelian binary" remark because sports is a case where the relevant effect sizes are so large as to make the point [hard for all but the most ardent gender-identity partisans to deny](/2017/Jun/questions-such-as-wtf-is-wrong-with-you-people/). (For example, what the [Cohen's _d_](https://en.wikipedia.org/wiki/Effect_size#Cohen's_d) ≈ [2.6 effect size difference in muscle mass](/papers/janssen_et_al-skeletal_muscle_mass_and_distribution.pdf) means is that a woman as strong as the average man is at the 99.5th percentile for women.) But the point is general: biological sex exists and is sometimes decision-relevant. People who want to be able to talk about sex and make policy decisions on the basis of sex are not making an ontology error, because the ontology in which sex "actually" "exists" continues to make very good predictions in our current tech regime (if not the glorious transhumanist future). It would be a ridiculous [isolated demand for rigor](http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) to expect someone to pass a graduate exam about the philosophy and cognitive science of categorization before they can talk about sex.
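The percentile arithmetic here is easy to check: if a trait is normally distributed in both sexes with a standardized mean difference of Cohen's _d_ = 2.6, then a woman at the male mean sits 2.6 standard deviations above the female mean, and her percentile among women is the standard normal CDF evaluated at 2.6. A minimal stdlib-only sketch (the equal-variance normal model is the usual idealization behind Cohen's _d_, not a claim about the raw data):

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF, written in terms of the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Cohen's d ~= 2.6 for the sex difference in muscle mass (cited in the text).
# A woman as strong as the average man is d standard deviations above the
# female mean, so her percentile among women is Phi(d).
d = 2.6
print(f"{normal_cdf(d):.4f}")  # ~0.9953, i.e., the 99.5th percentile
```

The same one-liner reproduces any "a person at the other group's mean is at the Nth percentile" claim from a reported effect size.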
+
+Thus, Yudkowsky's claim to merely have been standing up for the distinction between facts and policy questions doesn't seem credible. It is, of course, true that pronoun and bathroom conventions are policy decisions rather than matters of fact, but it's bizarre to condescendingly point this out as if it were the crux of contemporary trans-rights debates. Conservatives and gender-critical feminists know that trans-rights advocates aren't falsely claiming that trans women have XX chromosomes! If you _just_ wanted to point out that the rules of sports leagues are a policy question rather than a fact (as if anyone had doubted this), why would you throw in the "Aristotelian binary" weak man and belittle the matter as "humorous"? There are a lot of issues I don't care much about, but I don't see anything funny about the fact that other people _do_ care.[^sports-case-is-funny]
+
+[^sports-case-is-funny]: And in the case of sports, the facts are so lopsided that if we must find humor in the matter, it really goes the other way. A few years later, [Lia Thomas](https://en.wikipedia.org/wiki/Lia_Thomas) would dominate an NCAA women's swim meet by finishing [_4.2 standard deviations_](https://twitter.com/FondOfBeetles/status/1466044767561830405) (!!) earlier than the median competitor, and Eliezer Yudkowsky feels obligated to _pretend not to see the problem?_ You've got to admit, that's a _little_ bit funny.
+
+If any concrete negative consequence of gender self-identity categories is going to be waved away with, "Oh, but that's a mere policy decision that can be dealt with on some basis other than gender, and therefore doesn't count as an objection to the new definition of gender words", then it's not clear what the new definition is _for_.
+
+Like many gender-dysphoric males, I [cosplay](/2016/Dec/joined/) [female](/2017/Oct/a-leaf-in-the-crosswind/) [characters](/2019/Aug/a-love-that-is-out-of-anyones-control/) [at](/2022/Dec/context-is-for-queens/) fandom conventions sometimes. And, unfortunately, like many gender-dysphoric males, I'm not very good at it. I think someone looking at some of my cosplay photos and trying to describe their content in clear language—not trying to be nice to anyone or make a point, but just trying to use language as a map that reflects the territory—would say something like, "This is a photo of a man and he's wearing a dress." The word _man_ in that sentence is expressing cognitive work: it's a summary of the [lawful cause-and-effect evidential entanglement](https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence) whereby the photons reflecting off the photograph are correlated with photons reflecting off my body at the time the photo was taken, which are correlated with my externally observable secondary sex characteristics (facial structure, beard shadow, _&c._). From this evidence, an agent using an [efficient naïve-Bayes-like model](https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes) can assign me to its "man" (adult human male) category and thereby make probabilistic predictions about traits that aren't directly observable from the photo. The agent would achieve a better [score on those predictions](http://yudkowsky.net/rational/technical/) than if it had assigned me to its "woman" (adult human female) category.
+
+By "traits" I mean not just sex chromosomes ([as Yudkowsky suggested on Twitter](https://twitter.com/ESYudkowsky/status/1067291243728650243)), but the conjunction of dozens or hundreds of measurements that are [causally downstream of sex chromosomes](/2021/Sep/link-blood-is-thicker-than-water/): reproductive organs and muscle mass (again, sex difference effect size of [Cohen's _d_](https://en.wikipedia.org/wiki/Effect_size#Cohen's_d) ≈ 2.6) and Big Five Agreeableness (_d_ ≈ 0.5) and Big Five Neuroticism (_d_ ≈ 0.4) and short-term memory (_d_ ≈ 0.2, favoring women) and white-gray-matter ratios in the brain and probable socialization history and [any number of other things](/papers/archer-the_reality_and_evolutionary_significance_of_human_psychological_sex_differences.pdf)—including differences we might not know about, but have prior reasons to suspect exist. No one _knew_ about sex chromosomes before 1905, but given the systematic differences between women and men, it would have been reasonable to suspect the existence of some sort of molecular mechanism of sex determination.
+
+Forcing a speaker to say "trans woman" instead of "man" in a sentence about my cosplay photos depending on my verbally self-reported self-identity may not be forcing them to _lie_, exactly. It's understood, "openly and explicitly and with public focus on the language and its meaning", what _trans women_ are; no one is making a false-to-fact claim about them having ovaries, for example. But it _is_ forcing the speaker to obfuscate the probabilistic inference they were trying to communicate with the original sentence (about modeling the person in the photograph as being sampled from the "man" [cluster in configuration space](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace)), and instead use language that suggests a different cluster-structure. ("Trans women", two words, are presumably a subcluster within the "women" cluster.) Crowing in the public square about how people who object to being forced to "lie" must be ontologically confused is ignoring the interesting part of the problem. Gender identity's [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) functions as a way to [avoid the belief's real weak points](https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points).
+
+To this, one might reply that I'm giving too much credit to the "anti-trans" faction for how stupid they're not being: that _my_ careful dissection of the hidden probabilistic inferences implied by words [(including pronoun choices)](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/) is all well and good, but calling pronouns "lies" is not something you do when you know how to use words.
+
+But I'm _not_ giving them credit for understanding the lessons of "A Human's Guide to Words"; I just think there's a useful sense of "know how to use words" that embodies a lower standard of philosophical rigor. If a person-in-the-street says of my cosplay photos, "That's a man! I have eyes, and I can see that that's a man! Men aren't women!"—well, I probably wouldn't want to invite them to a _Less Wrong_ meetup. But I do think the person-in-the-street is performing useful cognitive work. Because _I_ have the hidden-Bayesian-structure-of-language-and-cognition-sight (thanks to Yudkowsky's writings back in the 'aughts), _I_ know how to sketch out the reduction of "Men aren't women" to something more like "This [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms) detects secondary sex characteristics and uses them as a classifier for a binary female/male 'sex' category, which it uses to make predictions about not-yet-observed features ..."
+
+But having _done_ the reduction-to-cognitive-algorithms, it still looks like the person-in-the-street _has a point_ that I shouldn't be allowed to ignore just because I have 30 more IQ points and better philosophy-of-language skills?
+
+I bring up my bad cosplay photos as an edge case that helps illustrate the problem I'm trying to point out, much like how people love to bring up [complete androgen insensitivity syndrome](https://en.wikipedia.org/wiki/Complete_androgen_insensitivity_syndrome) to illustrate why "But chromosomes!" isn't the correct reduction of sex classification. To differentiate what I'm saying from blind transphobia, let me note that I predict that most people-in-the-street _would_ be comfortable using feminine pronouns for someone like [Blaire White](https://en.wikipedia.org/wiki/Blaire_White). That's evidence about the kind of cognitive work people's brains are doing when they use English pronouns! Certainly, English is not the only language, and ours is not the only culture; maybe there is a way to do gender categories that would be more accurate and better for everyone. But to find what that better way is, we need to be able to talk about these kinds of details in public, and the attitude evinced in Yudkowsky's Tweets seemed to function as a [semantic stopsign](https://www.lesswrong.com/posts/FWMfQKG3RpZx6irjm/semantic-stopsigns) to get people to stop talking about the details.
+
+If you were interested in having a real discussion (instead of a fake discussion that makes you look good to progressives), why would you slap down the "But, but, chromosomes" fallacy and then not engage with the obvious steelman, "But, but, clusters in [high-dimensional](https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy-1) [configuration space](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace) that [aren't actually changeable with contemporary technology](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions)", [which was, in fact, brought up in the replies](https://twitter.com/EnyeWord/status/1068983389716385792)?
+
+Satire is a weak form of argument: the one who wishes to doubt will always be able to find some aspect in which an obviously absurd satirical situation differs from the real-world situation being satirized and claim that that difference destroys the relevance of the joke. But on the off chance that it might help illustrate the objection, imagine you lived in a so-called "rationalist" subculture where conversations like this happened—
+
+<p class="flower-break">⁕ ⁕ ⁕</p>
+
+<div class="dialogue">
+<p><span class="dialogue-character-label">Bob</span>: Look at this <a href="https://www.pexels.com/photo/cute-corgi-in-front-of-a-laptop-5122188/">adorable cat picture</a>!</p>
+<p><span class="dialogue-character-label">Alice</span>: Um, that looks like a dog to me, actually.</p>
+<p><span class="dialogue-character-label">Bob</span>: <a href="https://twitter.com/ESYudkowsky/status/1067198993485058048">You're not standing in defense of truth</a> if you insist on a word, brought explicitly into question, being used with some particular meaning. <a href="https://twitter.com/ESYudkowsky/status/1067294823000887297">Now, maybe as a matter of policy,</a> you want to make a case for language being used a certain way. Well, that's a separate debate then.</p>
+</div>