-What happens when every sensitive bookish male who thinks [it might be cool to be a woman](https://xkcd.com/535/) gets subjected to an aggressive recruitment campaign insisting that the scintillating thought is _literally true_, simply because he thought it? (Not just that it could _become_ true _in a sense_, depending on the success of medical and social interventions, and depending on which definition of sex/gender makes sense to use in a given context.) What kind of Society is that to live in?
-
-[I have seen the destiny of my neurotype, and am putting forth a convulsive effort to wrench it off its path. My weapon is clear writing.](https://www.lesswrong.com/posts/i8q4vXestDkGTFwsc/human-evil-and-muddled-thinking) Maybe the rest of my robot cult (including the founders and leaders) has given up on trying to tell the truth, but _I_ haven't. If I just keep blogging careful explanations of my thinking, eventually it might make some sort of impact—a small corrective tug on the madness of the _Zeitgeist_.
-
-It worked once, right?
-
-(Picture me playing Hermione Granger in a post-Singularity [holonovel](https://memory-alpha.fandom.com/wiki/Holo-novel_program) adaptation of _Harry Potter and the Methods of Rationality_ (Emma Watson having charged me [the standard licensing fee](/2019/Dec/comp/) to use a copy of her body for the occasion): "[We can do anything if we](https://www.hpmor.com/chapter/30) exert arbitrarily large amounts of [interpretive labor](https://acesounderglass.com/2015/06/09/interpretive-labor/)!")
-
-
-> An extreme case in point of "handwringing about the Overton Window in fact constituted the Overton Window's implementation"
-OK, now apply that to your Kolmogorov cowardice
-https://twitter.com/ESYudkowsky/status/1373004525481598978
-
-The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2021 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialism that counts unpopularity or hurt feelings as costs), also [can't get heliocentrism right in 1633](https://en.wikipedia.org/wiki/Galileo_affair) [_for the same reason_](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to)—and I really doubt it can get AI alignment theory right in 2041.
-
-Or at least—even if there are things we can't talk about in public for consequentialist reasons and there's nothing to be done about it, you would hope that the censorship wouldn't distort our beliefs about the things we _can_ talk about—like, say, the role of Bayesian reasoning in the philosophy of language. Yudkowsky had written about the [dark side epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) of [contagious lies](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies): trying to protect a false belief doesn't just mean being wrong about that one thing, it also gives you, on the object level, an incentive to be wrong about anything that would _imply_ the falsity of the protected belief—and, on the meta level, an incentive to be wrong _about epistemology itself_, about how "implying" and "falsity" work.
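-
-(The object-level half of that incentive is easy to exhibit with a toy example—a sketch with made-up numbers, not anything from the linked posts:)
-
-```python
-# Entangled truths, contagious lies, in miniature: rain causes wet
-# sidewalks, so a protected false belief about rain implies a false
-# prediction about sidewalks. (All numbers are made up.)
-
-P_WET_GIVEN_RAIN = 0.9
-P_WET_GIVEN_DRY = 0.1
-
-def p_wet(p_rain: float) -> float:
-    """Marginal P(wet) implied by a given belief about P(rain)."""
-    return p_rain * P_WET_GIVEN_RAIN + (1 - p_rain) * P_WET_GIVEN_DRY
-
-print(p_wet(0.3))  # ≈ 0.34, the prediction from the true base rate
-print(p_wet(0.0))  # 0.10, the prediction from a protected "it never rains"
-
-# As the sidewalks keep coming up wet, maintaining the protected belief
-# means revising P(wet | dry) upward—or revising your beliefs about what
-# "implies" means. That's the meta-level incentive.
-```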
-
-
-https://www.lesswrong.com/posts/ASpGaS3HGEQCbJbjS/eliezer-s-sequences-and-mainstream-academia?commentId=6GD86zE5ucqigErXX
-> The actual real-world consequences of a post like this when people actually read it are what bothers me, and it does feel frustrating because those consequences seem very predictable
-(!!)
-
-http://www.hpmor.com/chapter/47
-https://www.hpmor.com/chapter/97
-> one technique was to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited.
-
-
-> At least, I have a MASSIVE home territory advantage because I can appeal to Eliezer's writings from 10 years ago, and ppl can't say "Eliezer who? He's probably a bad man"
-
-> Makes sense... just don't be shocked if the next frontier is grudging concessions that get compartmentalized