This seems to suggest that gender pronouns in the English language as currently spoken don't have effective truth conditions. I think this is false _as a matter of cognitive science_. If someone told you, "Hey, you should come meet my friend at the mall, she is really cool and I think you'll like her," and then the friend turned out to look like me (as I am now), _you would be surprised_. (Even if people in Berkeley would socially punish you for _admitting_ that you were surprised.) The "she ... her" pronouns would prompt your brain to _predict_ that the friend would appear to be female, and that prediction would be _falsified_ by someone who looked like me (as I am now). Framing the social-norms dispute as being about chromosomes, rather than about the kind of appearance-based predictions just described, was a [weakmanning](https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/) move on the part of Yudkowsky, [who had once written that](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence) "[t]o argue against an idea honestly, you should argue against the best arguments of the strongest advocates[;] [a]rguing against weaker advocates proves _nothing_, because even the strongest idea will attract weak advocates." Thanks to the skills I had learned from Yudkowsky's _earlier_ writing, I didn't fall for it, but someone otherwise similar to me who hadn't yet internalized those skills might have, and might thereby have been misled into making worse life decisions.

If this "rationality" stuff is useful for _anything at all_, you would _expect_ it to be useful for _practical life decisions_ like _whether or not I should cut my dick off_.

In order to get the _right answer_ to that policy question (whatever the right answer turns out to be), you need to _at minimum_ be able to get the _right answer_ on related fact-questions like "Is late-onset gender dysphoria in males an intersex condition?" (answer: no) and related philosophy-questions like "Can we arbitrarily redefine words such as 'woman' without adverse effects on our cognition?" (answer: no).

At the cost of _wasting three years of my life_, we _did_ manage to get the philosophy question mostly right! Again, that's nice. But compared to the [Sequences-era dreams of changing the world](https://www.lesswrong.com/posts/YdcF6WbBmJhaaDqoD/the-craft-and-the-community), it's too little, too slow, too late. If our public discourse is going to be this aggressively optimized for _tricking me into cutting my dick off_ (independently of the empirical cost–benefit trade-off determining whether or not I should cut my dick off), that kills the whole project for me. I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?

Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?"

But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_.

If you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. Obviously, the safety of the world does not _directly_ depend on being able to think clearly about trans issues. Similarly, the safety of a coal mine for humans does not _directly_ depend on [whether it's safe for canaries](https://en.wiktionary.org/wiki/canary_in_a_coal_mine): the dead canaries are just _evidence about_ properties of the mine relevant to human health. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".)

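To make the screening-off structure of that fork concrete, here's a minimal simulation sketch in Python. All the numbers and names (`gas`, `canary_dies`, `human_danger`, the probabilities) are made-up illustrations, not from any source; the point is only the qualitative pattern a fork predicts.

```python
import random

def sample():
    # Common cause: is there dangerous gas in the mine? (P = 0.1, made up)
    gas = random.random() < 0.1
    # Canary death and human danger are each downstream of the gas,
    # with no direct causal link between them.
    canary_dies = random.random() < (0.9 if gas else 0.05)
    human_danger = random.random() < (0.8 if gas else 0.01)
    return gas, canary_dies, human_danger

samples = [sample() for _ in range(100_000)]

def p_danger(condition):
    # Estimate P(human danger | condition) from the samples.
    relevant = [danger for (gas, canary, danger) in samples if condition(gas, canary)]
    return sum(relevant) / len(relevant)

# A dead canary is strong evidence of danger to humans...
print(p_danger(lambda gas, canary: canary))      # ≈ 0.54
print(p_danger(lambda gas, canary: not canary))  # ≈ 0.02

# ...but only via the common cause: once you condition on the gas level,
# the canary's fate is screened off (each pair of numbers matches).
print(p_danger(lambda gas, canary: gas and canary),          # ≈ 0.8
      p_danger(lambda gas, canary: gas and not canary))      # ≈ 0.8
print(p_danger(lambda gas, canary: not gas and canary),      # ≈ 0.01
      p_danger(lambda gas, canary: not gas and not canary))  # ≈ 0.01
```

The canary matters only insofar as it moves your estimate of the gas level; analogously, clear thinking about trans issues matters as evidence about the quality of "the community's" reasoning processes, not as a direct input to the safety of the world.
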
If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where "the community" _qua_ community doesn't exist at all.