-Fortunately, Yudkowsky's writing had brought together a whole community of brilliant people dedicated to refining the art of human rationality—the methods of acquiring true beliefs and using them to make decisions that get you what you want. So now that I _know_ the public narrative is obviously false, and that I have the outlines of a better theory (even though I could use a lot of help pinning down the details, and I don't know what the social policy implications are, because the optimal policy computation is a complicated value trade-off), all I _should_ have to do is carefully explain why the public narrative is delusional, and then because my arguments are so much better, all the smart serious rational people will either agree with me, or at least be eager to _clarify_ exactly where they disagree and what their alternative theory is, so that we can move the state of public knowledge forward together, in order to help the great common task of optimizing the universe in accordance with humane values.
+Fortunately, Yudkowsky's writing had brought together a whole community of brilliant people dedicated to refining the art of human rationality—the methods of acquiring true beliefs and using them to make decisions that get you what you want. So now that I _know_ the public narrative is obviously false, and that I have the outlines of a better theory (even though I could use a lot of help pinning down the details, and I don't know what the social policy implications are, because the optimal policy computation is a complicated value trade-off), all I _should_ have to do is carefully explain why the public narrative is delusional, and then, because my arguments are so much better, all the intellectually serious people will either agree with me (in public), or at least be eager to _clarify_ (in public) exactly where they disagree and what their alternative theory is, so that we can move the state of humanity's knowledge forward together, in order to help the great common task of optimizing the universe in accordance with humane values.
+
+Of course, this is kind of a niche topic—if you're not a male with this psychological condition, or a woman who doesn't want to share all female-only spaces with such males, you probably have no reason to care—but there are a _lot_ of males with this psychological condition around here! If this whole "rationality" subculture isn't completely fake, then we should be interested in getting the correct answers in public _for ourselves_.
+
+Men who fantasize about being women do not particularly resemble actual women! We just—don't? This seems kind of obvious, really? _Telling the difference between fantasy and reality_ is kind of an important life skill?! Notwithstanding that some males might want to make use of medical interventions like surgery and hormone replacement therapy to become facsimiles of women as far as our existing technology can manage, and that a free and enlightened transhumanist Society should support that as an option—and notwithstanding that _she_ is obviously the correct pronoun for people who _look_ like women—it's probably going to be harder for people to figure out what the optimal decisions are if no one is allowed to use language like "actual women" that clearly distinguishes the original thing from imperfect facsimiles?!
+
+The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2021 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialism that counts unpopularity or hurt feelings as costs), also [can't get heliocentrism right in 1633](https://en.wikipedia.org/wiki/Galileo_affair) [_for the same reason_](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to)—and I really doubt it can get AI alignment theory right in 2041.
+
+Or at least—even if there are things we can't talk about in public for consequentialist reasons and there's nothing to be done about it, you would hope that the censorship wouldn't distort our maps of the things we _can_ talk about, or of the laws of mapmaking itself. Yudkowsky had written about [dark side epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) and [contagious lies](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies): trying to protect a false belief doesn't just mean being wrong about that one thing; it also gives you, on the object level, an incentive to be wrong about anything that would _imply_ the falsity of the protected belief—and, on the meta level, an incentive to be wrong _about epistemology itself_, about how "implying" and "falsity" work.
+
+[...]
+
+> ["It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)