-_Was_ it a "political" act for me to write about the cognitive function of categorization on the robot-cult blog with non-gender examples, when gender was secretly ("secretly") my _motivating_ example? In some sense, yes, but the thing you have to realize is—
-
-_Everyone else shot first_. The timestamps back me up here: my ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (February 2018) was a _response to_ Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (November 2014). My philosophy-of-language work on the robot-cult blog (April 2019–January 2021) was (stealthily) _in response to_ Yudkowsky's November 2018 Twitter thread. When I started trying to talk about autogynephilia with all my robot cult friends in 2016, I _did not expect_ to get dragged into a multi-year philosophy-of-language crusade! That was just _one branch_ of the argument-tree that, once begun, I thought should be easy to _definitively settle in public_ (within our robot cult, whatever the _general_ public thinks).
-
I guess by now the branch is as close to settled as it's going to get? Alexander ended up [adding an edit note to the end of "... Not Man for the Categories" in December 2019](https://archive.is/1a4zV#selection-805.0-817.1), and Yudkowsky would go on to clarify his position on the philosophy of language in Facebook posts of [September 2020](https://www.facebook.com/yudkowsky/posts/10158853851009228) and [February 2021](https://www.facebook.com/yudkowsky/posts/10159421750419228). So, that's nice. (Though I expect that, even with the edit note, people will keep citing "... Not Man for the Categories" in ways that don't register how the note undermines the post's central point.)

But I will confess to being quite disappointed that the public argument-tree evaluation didn't get much further, much faster? The thing you have to understand about this whole debate is—

_I need the correct answer in order to decide whether or not to cut my dick off_. As I've said, I _currently_ believe that cutting my dick off would be a _bad_ idea. But that's a cost–benefit judgement call based on many _contingent, empirical_ beliefs about the world. I'm obviously in the general _reference class_ of males who are getting their dicks cut off these days, and a lot of them seem to be pretty happy about it! I would be much more likely to go through with transitioning if I believed different things about the world—if I thought my beautiful pure sacred self-identity thing were a brain-intersex condition, or if I still believed in my teenage psychological-sex-differences denialism (such that there would be _axiomatically_ no worries about fitting with "other" women after transitioning), or if I were more optimistic about the degree to which HRT and surgeries approximate an actual sex change.

In that November 2018 Twitter thread, [Yudkowsky wrote](https://archive.is/y5V9i):

> Some people I usually respect for their willingness to publicly die on a hill of facts, now seem to be talking as if pronouns are facts, or as if who uses what bathroom is necessarily a factual statement about chromosomes. Come on, you know the distinction better than that!
>
> _Even if_ somebody went around saying, "I demand you call me 'she' and furthermore I claim to have two X chromosomes!", which none of my trans colleagues have ever said to me by the way, it still isn't a question-of-empirical-fact whether she should be called "she". It's an act.
>
> In saying this, I am not taking a stand for or against any Twitter policies. I am making a stand on a hill of meaning in defense of validity, about the distinction between what is and isn't a stand on a hill of facts in defense of truth.
>
> I will never stand against those who stand against lies. But changing your name, asking people to address you by a different pronoun, and getting sex reassignment surgery, Is. Not. Lying. You are _ontologically_ confused if you think those acts are false assertions.

This seems to suggest that gender pronouns in the English language as currently spoken don't have effective truth conditions. I think this is false _as a matter of cognitive science_. If someone told you, "Hey, you should come meet my friend at the mall, she is really cool and I think you'll like her," and then the friend turned out to look like me (as I am now), _you would be surprised_. (Even if people in Berkeley would socially punish you for _admitting_ that you were surprised.) The "she ... her" pronouns would prompt your brain to _predict_ that the friend would appear to be female, and that prediction would be _falsified_ by someone who looked like me (as I am now). Framing the social-norms dispute as a dispute about chromosomes was a [weakmanning](https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/) move on the part of Yudkowsky, [who had once written that](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence) "[t]o argue against an idea honestly, you should argue against the best arguments of the strongest advocates[;] [a]rguing against weaker advocates proves _nothing_, because even the strongest idea will attract weak advocates." Thanks to the skills I had learned from Yudkowsky's _earlier_ writing, I didn't fall for it, but we can imagine someone otherwise similar to me who lacked those skills, and who might thereby have been misled into making worse life decisions.

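To put the "prediction" point in more quantitative terms: we can think of the pronoun as evidence that shifts a listener's probability distribution over what the referent will look like, and measure a failed prediction as surprisal. Here's a minimal sketch in Python, where the 0.99 is a made-up illustrative number (an assumption for the sketch, not a measurement of anyone's actual priors):

```python
import math

# Hypothetical, illustrative probability that someone referred to as "she"
# will appear female to the listener. (Invented number for this sketch.)
p_appears_female_given_she = 0.99

def surprisal_bits(p_observed: float) -> float:
    """Shannon surprisal, in bits, of observing an event of probability p."""
    return -math.log2(p_observed)

# The friend appears female, as the pronoun led you to predict: almost no surprise.
print(surprisal_bits(p_appears_female_given_she))      # ≈ 0.014 bits
# The friend doesn't: the prediction is falsified, and the surprise is large.
print(surprisal_bits(1 - p_appears_female_given_she))  # ≈ 6.6 bits
```
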
-If this "rationality" stuff is useful for _anything at all_, you would _expect_ it to be useful for _practical life decisions_ like _whether or not I should cut my dick off_.
-
-In order to get the _right answer_ to that policy question (whatever the right answer turns out to be), you need to _at minimum_ be able to get the _right answer_ on related fact-questions like "Is late-onset gender dysphoria in males an intersex condition?" (answer: no) and related philosophy-questions like "Can we arbitrarily redefine words such as 'woman' without adverse effects on our cognition?" (answer: no).
-
-At the cost of _wasting three years of my life_, we _did_ manage to get the philosophy question mostly right! Again, that's nice. But compared to the [Sequences-era dreams of changing the world](https://www.lesswrong.com/posts/YdcF6WbBmJhaaDqoD/the-craft-and-the-community), it's too little, too slow, too late. If our public discourse is going to be this aggressively optimized for _tricking me into cutting my dick off_ (independently of the empirical cost–benefit trade-off determining whether or not I should cut my dick off), that kills the whole project for me. I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?
-
-Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?"
-
-But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_.
-
If you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. Obviously, the safety of the world does not _directly_ depend on being able to think clearly about trans issues. Similarly, the safety of a coal mine for humans does not _directly_ depend on [whether it's safe for canaries](https://en.wiktionary.org/wiki/canary_in_a_coal_mine): the dead canaries are just _evidence about_ properties of the mine relevant to human health. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".)

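As a sanity check on the fork structure, here's a small simulation sketch (all the conditional probabilities are invented for illustration): marginally, a dead canary is strong evidence of danger to humans, but once you condition on the state of the mine, the canary tells you nothing more.

```python
import random

def simulate(n=100_000):
    """Sample from the fork: canary_death <- mine_gas -> human_danger.
    (All probabilities invented for illustration.)"""
    samples = []
    for _ in range(n):
        mine_gas = random.random() < 0.10                             # common cause
        canary_death = random.random() < (0.90 if mine_gas else 0.05)
        human_danger = random.random() < (0.80 if mine_gas else 0.02)
        samples.append((mine_gas, canary_death, human_danger))
    return samples

def p_danger(samples, given):
    """Estimate P(human_danger | given) from the samples."""
    matching = [s for s in samples if given(s)]
    return sum(danger for _, _, danger in matching) / len(matching)

samples = simulate()
# Marginally, canary death and human danger are strongly correlated ...
print(p_danger(samples, lambda s: s[1]))               # ≈ 0.54
print(p_danger(samples, lambda s: not s[1]))           # ≈ 0.03
# ... but conditional on the common cause, the correlation vanishes:
print(p_danger(samples, lambda s: s[0] and s[1]))      # ≈ 0.80
print(p_danger(samples, lambda s: s[0] and not s[1]))  # ≈ 0.80
```
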
If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where "the community" _qua_ community doesn't exist at all.

-In ["The Ideology Is Not the Movement"](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/) (April 2016), Alexander describes how the content of subcultures typically departs from the ideological "rallying flag" that they formed around. [Sunni and Shia Islam](https://en.wikipedia.org/wiki/Shia%E2%80%93Sunni_relations) originally, ostensibly diverged on the question of who should rightfully succeed Muhammad as caliph, but modern-day Sunni and Shia who hate each other's guts aren't actually re-litigating a succession dispute from the 7th century C.E. Rather, pre-existing divergent social-group tendencies crystalized into distinct tribes by latching on to the succession dispute as a [simple membership test](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests).
-
Alexander jokingly identifies the defining feature of our robot cult as the belief that "Eliezer Yudkowsky is the rightful caliph": the Sequences were a rallying flag that brought together a lot of like-minded people to form a subculture with its own ethos and norms—among which Alexander includes "don't misgender trans people"—but the subculture emerged as its own entity that isn't necessarily _about_ anything outside itself.