-Fortunately, Yudkowsky's writing had brought together a whole community of brilliant people dedicated to refining the art of human rationality—the methods of acquiring true beliefs and using them to make decisions that get you what you want. So now that I _know_ the public narrative is obviously false, and that I have the outlines of a better theory (even though I could use a lot of help pinning down the details, and I don't know what the social policy implications are, because the optimal policy computation is a complicated value trade-off), all I should have to do is carefully explain why the public narrative is delusional, and then because my arguments are so much better, all the smart serious rational people will either agree with me, or at least be eager to _clarify_ exactly where they disagree and what their alternative theory is, so that we can move the state of public knowledge forward together, in order to help the great common task of optimizing the universe in accordance with humane values.
+Fortunately, Yudkowsky's writing had brought together a whole community of brilliant people dedicated to refining the art of human rationality—the methods of acquiring true beliefs and using them to make decisions that get you what you want. So now that I _know_ the public narrative is obviously false, and that I have the outlines of a better theory (even though I could use a lot of help pinning down the details, and I don't know what the social policy implications are, because the optimal policy computation is a complicated value trade-off), all I _should_ have to do is carefully explain why the public narrative is delusional, and then because my arguments are so much better, all the intellectually serious people will either agree with me (in public), or at least be eager to _clarify_ (in public) exactly where they disagree and what their alternative theory is, so that we can move the state of humanity's knowledge forward together, in order to help the great common task of optimizing the universe in accordance with humane values.
+
+Of course, this is kind of a niche topic—if you're not a male with this psychological condition, or a woman who doesn't want to share all female-only spaces with them, you probably have no reason to care—but there are a _lot_ of males with this psychological condition around here! If this whole "rationality" subculture isn't completely fake, then we should be interested in getting the correct answers in public _for ourselves_.
+
+Men who fantasize about being women do not particularly resemble actual women! We just—don't? This seems kind of obvious, really? _Telling the difference between fantasy and reality_ is kind of an important life skill?! Notwithstanding that some males might want to make use of medical interventions like surgery and hormone replacement therapy to become facsimiles of women as far as our existing technology can manage, and that a free and enlightened transhumanist Society should support that as an option—and notwithstanding that _she_ is obviously the correct pronoun for people who _look_ like women—it's probably going to be harder for people to figure out what the optimal decisions are if no one is allowed to use language like "actual women" that clearly distinguishes the original thing from imperfect facsimiles?!
+
+The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2021 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialism that counts unpopularity or hurt feelings as costs), also [can't get heliocentrism right in 1633](https://en.wikipedia.org/wiki/Galileo_affair) [_for the same reason_](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to)—and I really doubt it can get AI alignment theory right in 2041.
+
+[TODO: or at least, even if there are things we can't talk about, we should at least want to avoid dark side epistemology. Briefly tell the story of the Category War?—but try to keep it brief and not-personal; the focus should be on dark side epistemology, rather than re-picking my fight with S.A. or E.Y. (maybe don't name them, but describe the abstract dynamics and link). "Everyone else shot first." Wasn't what I was trying to talk about, but I took the bait. For me, this isn't just a "political" topic—I actually need the right answer in order to decide whether or not to cut my dick off]
+
+Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?"
+
+But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_.
+
+If you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. Obviously, the safety of the world does not _directly_ depend on being able to think clearly about trans issues. Similarly, the safety of a coal mine for humans does not _directly_ depend on [whether it's safe for canaries](https://en.wiktionary.org/wiki/canary_in_a_coal_mine): the dead canaries are just _evidence about_ properties of the mine relevant to human health. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".)
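+
+(If it helps to see the fork concretely: here's a minimal simulation sketch, with made-up numbers for the gas, canary, and danger probabilities, showing that a dead canary raises the probability of human danger even though the canary's death doesn't cause that danger.)
+
+```python
+# Toy simulation of the fork canary-death <- mine-gas -> human-danger.
+# All probabilities here are invented for illustration.
+import random
+
+def sample_mine():
+    gas = random.random() < 0.1                       # mine has dangerous gas
+    canary_dies = gas and random.random() < 0.9       # canary succumbs to the gas
+    humans_in_danger = gas and random.random() < 0.7  # gas endangers the humans
+    return canary_dies, humans_in_danger
+
+trials = [sample_mine() for _ in range(100_000)]
+p_danger = sum(danger for _, danger in trials) / len(trials)
+after_dead_canary = [danger for dead, danger in trials if dead]
+p_danger_given_dead_canary = sum(after_dead_canary) / len(after_dead_canary)
+
+print(f"P(danger) ~ {p_danger:.3f}")                                  # ~0.07
+print(f"P(danger | dead canary) ~ {p_danger_given_dead_canary:.3f}")  # ~0.7
+# The dead canary is evidence about the danger (via the common cause, mine gas),
+# not a cause of the danger.
+```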
+
+If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where "the community" _qua_ community doesn't exist at all.
+
+[TODO: risk factor of people getting drawn into a subculture that claims to be about reasoning, but is actually very heavily optimized for cutting boys' dicks off. "The Ideology Is Not the Movement" is very explicit about this!! People use trans as political cover; no one seemed to notice that "The Ideology Is Not the Movement" is a declaration of _failure_]
+
+Someone asked me: "If we randomized half the people at [OpenAI](https://openai.com/) to use trans pronouns one way, and the other half to use them the other way, do you think they would end up with significantly different productivity?"