+This is wrong because categories exist in our model of the world _in order to_ capture empirical regularities in the world itself: the map is supposed to _reflect_ the territory, and there _are_ "rules of rationality" governing what kinds of word and category usages correspond to correct probabilistic inferences. [Yudkowsky wrote a whole Sequence about this](https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb) back in 'aught-eight, as part of the original Sequences. Alexander cites [a post](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside) from that Sequence in support of the (true) point about how categories are "in the map" ... but if you actually read the Sequence, another point that Yudkowsky pounds home _over and over and over again_, is that word and category definitions are nevertheless _not_ arbitrary: you can't define a word any way you want, because there are [at least 37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)—principles that make some definitions _perform better_ than others as "cognitive technology."
+
+In the case of Alexander's bogus argument about gender categories, the relevant principle ([#30](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) on [the list of 37](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)) is that if you group things together in your map that aren't actually similar in the territory, you're going to be misled into making bad predictions.
+
+Importantly, this is a very general point about how language itself works _that has nothing to do with gender_. No matter what you believe about politically controversial empirical questions, intellectually honest people should be able to agree that "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if [positive consequence]" is not the correct philosophy of language, _independently of the particular values of X and Y_.
+
+Also, this ... really wasn't what I was trying to talk about. _I_ thought I was trying to talk about autogynephilia as an _empirical_ theory of psychology, the truth or falsity of which obviously cannot be altered by changing the meanings of words. But at this point I still trusted people in my robot cult to be basically intellectually honest, rather than fucking with me because of their political incentives, so I endeavored to respond to the category-boundary argument as if it were a serious argument: when I quit my dayjob in March 2017 in order to have more time to study and work on this blog, the capstone of my sabbatical was an exhaustive response to Alexander, ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (which Alexander [graciously included in his next links post](https://archive.ph/irpfd#selection-1625.53-1629.55)). A few months later, I followed it up with ["Reply to _The Unit of Caring_ on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/), responding to a similar argument. I'm proud of those posts: I think Alexander's and _Unit of Caring_'s arguments were incredibly dumb, and with a lot of effort, I think I did a pretty good job of explaining exactly why to anyone who was interested and didn't, at some level, prefer not to understand.
+
+Of course, a pretty good job of explaining by one niche blogger wasn't going to put much of a dent in the culture, which is the sum of everyone else's blogposts; despite the mild boost from the _Slate Star Codex_ links post, my megaphone just wasn't very big. At this point, I was _disappointed_ with the limited impact of my work, but not to the point of bearing much hostility to "the community". People had made their arguments, and I had made mine; I didn't think I was _entitled_ to anything more than that.
+
+... and, really, that _should_ have been the end of the story. Not much of a story at all. If I hadn't been further provoked, I would have still kept up this blog, and I still would have ended up arguing about gender with people occasionally, but this personal obsession of mine wouldn't have been the occasion of a full-on robot-cult religious civil war.
+
+The _casus belli_ for the religious civil war happened on 28 November 2018. I was at my new dayjob's company offsite event in Austin. Coincidentally, I had already spent much of the afternoon arguing trans issues with other "rationalists" on Discord. [TODO: review Discord logs; email to Dad suggests that offsite began on the 26th, contrasted to first shots on the 28th]
+
+Just that month, I had started a Twitter account in my own name, inspired in an odd way by the suffocating [wokeness of the open-source software scene](/2018/Oct/sticker-prices/) where I [occasionally contributed diagnostics patches to the compiler](https://github.com/rust-lang/rust/commits?author=zackmdavis). My secret plan/fantasy was to get more famous/established in that world (membership on the compiler team, or a conference talk accepted, preferably both), get some corresponding Twitter followers, and _then_ bust out the Blanchard retweets and links to this blog. In the median case, absolutely nothing would happen (probably because I failed at being famous), but I saw an interesting tail of scenarios in which I'd get to be a test case in the Code of Conduct wars.
+
+[TODO: SECTION SUMMARIZING AND RESPONDING TO "Hill of Validity" Tweet thread]
+
+I was physically shaking. I remember going downstairs to confide in a senior engineer about the situation. But if Yudkowsky was _already_ stonewalling his Twitter followers, entering the thread myself didn't seem likely to help. (And I hadn't intended to talk about gender on that account yet, although that seemed less important given the present crisis.)
+
+It seemed better to try to clear this up in private. I still had Yudkowsky's email address. I felt bad bidding for his attention over my gender thing _again_—but I had to do _something_. Hands trembling, I sent him an email asking him to read my ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), suggesting that it might qualify as an answer to his question about ["a page [he] could read to find a non-confused explanation of how there's scientific truth at stake"](https://twitter.com/ESYudkowsky/status/1067482047126495232). Because I cared very much about correcting what I claimed were confusions in my rationalist subculture, I would be happy to pay up to $1000 for his time—and if he liked the post, he might consider Tweeting a link.
+
+[TODO (Subject: "another offer, $1000 to read a ~6500 word blog post about (was: Re: Happy Price offer for a 2 hour conversation)")]
+
+The monetary offer, admittedly, was awkward: I included another paragraph clarifying that any payment was only to get his attention, and not _quid pro quo_ advertising, and that if he didn't trust his brain circuitry not to be corrupted by money, then he might want to reject the offer on those grounds and only read the post if he expected it to be genuinely interesting.
+
+Again, I realize this must seem weird and cultish to any normal people reading this. (Paying some blogger you follow one grand just to _read_ one of your posts? Who _does_ that?) To this, I again refer to [the reasons justifying my 2016 cheerful price offer](/2022/TODO/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price-reasons)—and note that it was a way to signal that I _really really didn't want to be ignored_.
+
+[TODO: and cc'd Michael and Anna as character references]
+
+[TODO: I did successfully interest Michael, who chimed in—Michael called me up and we talked about how the "rationalists" were over]
+
+[TODO: as with earlier cheerful price offer, I can't confirm or deny whether Yudkowsky replied or whether he accepted the cheerful price offer]