-If you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. Obviously, the safety of the world does not _directly_ depend on being able to think clearly about trans issues. Similarly, the safety of a coal mine for humans does not _directly_ depend on [whether it's safe for canaries](https://en.wiktionary.org/wiki/canary_in_a_coal_mine): the dead canaries are just _evidence about_ properties of the mine relevant to human health. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".)
-
-If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where "the community" _qua_ community doesn't exist at all.
-
-In ["The Ideology Is Not the Movement"](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/) (April 2016), Alexander describes how the content of subcultures typically departs from the ideological "rallying flag" that they formed around. [Sunni and Shia Islam](https://en.wikipedia.org/wiki/Shia%E2%80%93Sunni_relations) originally and ostensibly diverged on the question of who should rightfully succeed Muhammad as caliph, but modern-day Sunni and Shia who hate each other's guts aren't actually re-litigating a succession dispute from the 7th century C.E. Rather, pre-existing divergent social-group tendencies crystallized into distinct tribes by latching on to the succession dispute as a [simple membership test](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests).
-
-Alexander jokingly identifies the defining feature of our robot cult as being the belief that "Eliezer Yudkowsky is the rightful caliph": the Sequences were a rallying flag that brought together a lot of like-minded people to form a subculture with its own ethos and norms—among which Alexander includes "don't misgender trans people"—but the subculture emerged as its own entity that isn't necessarily _about_ anything outside itself.
-
-No one seemed to notice at the time, but this characterization of our movement [is actually a _declaration of failure_](https://sinceriously.fyi/cached-answers/#comment-794). There's a word, "rationalist", that I've been trying to avoid in this post, because it's the subject of so much strategic equivocation, where the motte is "anyone who studies the ideal of systematically correct reasoning, general methods of thought that result in true beliefs and successful plans", and the bailey is "members of our social scene centered around Eliezer Yudkowsky and Scott Alexander". (Since I don't think we deserve the "rationalist" brand name, I had to choose something else to refer to [the social scene](https://srconstantin.github.io/2017/08/08/the-craft-is-not-the-community.html). Hence, "robot cult.")
-
-What I would have _hoped_ for from a systematically correct reasoning community worthy of the brand name is one goddamned place in the whole goddamned world where _good arguments_ would propagate through the population no matter where they arose, "guided by the beauty of our weapons" ([following Scott Alexander](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/) [following Leonard Cohen](https://genius.com/1576578)).
-
-Instead, I think what actually happens is that people like Yudkowsky and Alexander rise to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else sort-of absorbs most of their worldview (plus noise and conformity with the local environment)—with the result that if Yudkowsky and Alexander _aren't interested in getting the right answer_ (in public)—because getting the right answer in public would be politically suicidal—then there's no way for anyone who didn't [win the talent lottery](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) to fix the public understanding by making better arguments.
-
-It makes sense for public figures to not want to commit political suicide! Even so, it's a _problem_ if public figures whose brand is premised on the ideal of _systematically correct reasoning_, end up drawing attention and resources into a subculture that's optimized for tricking men into cutting their dick off on false pretenses. (Although note that Alexander has [specifically disclaimed aspirations or pretensions to being a "rationalist" authority figure](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/); that fate befell him without his consent because he's just too good and prolific of a writer compared to everyone else.)
-
-I'm not optimistic about the problem being fixable, either. Our robot cult _already_ gets a lot of shit from progressive-minded people for being "right-wing"—not because we are in any _useful_, non-gerrymandered sense, but because [attempts to achieve the map that reflects the territory are going to run afoul of ideological taboos for almost any ideology](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting).
-
-Because of the particular historical moment in which we live, we end up facing pressure from progressives, because—whatever our _object-level_ beliefs about (say) [sex, race, and class differences](/2020/Apr/book-review-human-diversity/)—and however much many of us would prefer not to talk about them—on the _meta_ level, our creed requires us to admit _it's an empirical question_, not a moral one—and that [empirical questions have no privileged reason to admit convenient answers](https://www.lesswrong.com/posts/sYgv4eYH82JEsTD34/beyond-the-reach-of-god).
-
-I view this conflict as entirely incidental, something that [would happen in some form in any place and time](https://www.lesswrong.com/posts/cKrgy7hLdszkse2pq/archimedes-s-chronophone), rather than having to do with American politics or "the left" in particular. In a Christian theocracy, our analogues would get in trouble for beliefs about evolution; in the old Soviet Union, our analogues would get in trouble for [thinking about market economics](https://slatestarcodex.com/2014/09/24/book-review-red-plenty/) (as a [positive technical discipline](https://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics#Proof_of_the_first_fundamental_theorem) adjacent to game theory, not yoked to a particular normative agenda).
-
-Incidental or not, the conflict is real, and everyone smart knows it—even if it's not easy to _prove_ that everyone smart knows it, because everyone smart is very careful what they say in public. (I am not smart.) Scott Aaronson wrote of [the Kolmogorov Option](https://www.scottaaronson.com/blog/?p=3376) (which Alexander aptly renamed [Kolmogorov complicity](https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/)): serve the cause of Truth by cultivating a bubble that focuses on truths that won't get you in trouble with the local political authorities. The strategy is named after the Soviet mathematician Andrey Kolmogorov, who _knew better than to pick fights he couldn't win_.
-
-Because of the conflict, and because all the prominent high-status people are running a Kolmogorov Option strategy, and because we happen to have a _wildly_ disproportionate number of _people like me_ around, I think being "pro-trans" ended up being part of the community's "shield" against external political pressure, of the sort that perked up after [the February 2021 _New York Times_ hit piece about Alexander's blog](https://archive.is/0Ghdl). (The _magnitude_ of heat brought on by the recent _Times_ piece and its aftermath was new, but the underlying dynamics had been present for years.)
-
-Jacob Falkovich notes, ["The two demographics most over-represented in the SlateStarCodex readership according to the surveys are transgender people and Ph.D. holders."](https://twitter.com/yashkaf/status/1275524303430262790) [Aaronson notes (in commentary on the _Times_ article)](https://www.scottaaronson.com/blog/?p=5310) "the rationalist community's legendary openness to alternative gender identities and sexualities" as something that would have "complicated the picture" of our portrayal as anti-feminist.
-
-Even the _haters_ grudgingly give Alexander credit for "... Not Man for the Categories": ["I strongly disagree that one good article about accepting transness means you get to walk away from writing that is somewhat white supremacist and quite fascist without at least awknowledging you were wrong."](https://archive.is/SlJo1)
-
-Given these political realities, you'd think that I _should_ be sympathetic to the Kolmogorov Option argument, which makes a lot of sense. _Of course_ all the high-status people with a public-facing mission (like building a movement to prevent the coming robot apocalypse) are going to be motivatedly dumb about trans stuff in public: look at all the damage [the _other_ Harry Potter author did to her legacy](https://en.wikipedia.org/wiki/Politics_of_J._K._Rowling#Transgender_people).
-
-And, historically, it would have been harder for the robot cult to recruit _me_ (or those like me) back in the 'aughts, if they had been less politically correct. Recall that I was already somewhat turned off, then, by what I thought of as _sexism_; I stayed because the philosophy-of-science blogging was _way too good_. But what that means on the margin is that someone otherwise like me except more orthodox or less philosophical, _would_ have bounced. If [Cthulhu has swum left](https://www.unqualified-reservations.org/2009/01/gentle-introduction-to-unqualified/) over the intervening thirteen years, then maintaining the same map-revealing/not-alienating-orthodox-recruits tradeoff _relative_ to the general population necessitates relinquishing parts of the shared map that have fallen out of general favor.
-
-Ultimately, if the people with influence over the trajectory of the systematically correct reasoning "community" aren't interested in getting the right answers in public, then I think we need to give up on the idea of there _being_ a "community", which, you know, might have been a dumb idea to begin with. No one owns _reasoning itself_. Yudkowsky had written in March 2009 that rationality is the ["common interest of many causes"](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes): that proponents of causes-that-benefit-from-better-reasoning like atheism or marijuana legalization or existential-risk-reduction might perceive a shared interest in cooperating to [raise the sanity waterline](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline). But to do that, they need to not try to capture all the value they create: some of the resources you invest in teaching rationality are going to flow to someone else's cause, and you need to be okay with that.
-
-But Alexander's ["Kolmogorov Complicity"](https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) (October 2017) seems to suggest a starkly different moral, that "rationalist"-favored causes might not _want_ to associate with others that have worse optics. Atheists and marijuana legalization proponents and existential-risk-reducers probably don't want any of the value they create to flow to neoreactionaries and race realists and autogynephilia truthers, if video of the flow will be used to drag their own names through the mud.
-
-[_My_ Something to Protect](/2019/Jul/the-source-of-our-power/) requires me to take the [Leeroy Jenkins](https://en.wikipedia.org/wiki/Leeroy_Jenkins) Option. (As typified by Justin Murphy: ["Say whatever you believe to be true, in uncalculating fashion, in whatever language you really think and speak with, to everyone who will listen."](https://otherlife.co/respectability-is-not-worth-it-reply-to-slatestarcodex/)) I'm eager to cooperate with people facing different constraints who are stuck with a Kolmogorov Option strategy as long as they don't _fuck with me_. But I construe encouragement of the conflation of "rationality" as a "community" and the _subject matter_ of systematically correct reasoning, as a form of fucking with me: it's a _problem_ if all our beautiful propaganda about the methods of seeking Truth, doubles as propaganda for joining a robot cult whose culture is heavily optimized for tricking men like me into cutting their dicks off.
-
-Someone asked me: "If we randomized half the people at [OpenAI](https://openai.com/) to use trans pronouns one way, and the other half to use them the other way, do you think they would end up with significantly different productivity?"
-
-But the thing I'm objecting to is a lot more fundamental than the specific choice of pronoun convention, which obviously isn't going to be uniquely determined. Turkish doesn't have gender pronouns, and that's fine. Naval ships traditionally take feminine pronouns in English, and it doesn't confuse anyone into thinking boats have a womb. [Many other languages are much more gendered than English](https://en.wikipedia.org/wiki/Grammatical_gender#Distribution_of_gender_in_the_world's_languages) (where pretty much only third-person singular pronouns are at issue). The conventions used in one's native language probably _do_ [color one's thinking to some extent](/2020/Dec/crossing-the-line/)—but when it comes to that, I have no reason to expect the overall design of English grammar and vocabulary "got it right" where Spanish or Arabic "got it wrong."
-
-What matters isn't the specific object-level choice of pronoun or bathroom conventions; what matters is having a culture where people _viscerally care_ about minimizing the expected squared error of our probabilistic predictions, even at the expense of people's feelings—[_especially_ at the expense of people's feelings](http://zackmdavis.net/blog/2016/09/bayesomasochism/).
-
-I think looking at [our standard punching bag of theism](https://www.lesswrong.com/posts/dLL6yzZ3WKn8KaSC3/the-uniquely-awful-example-of-theism) is a very fair comparison. Religious people aren't _stupid_. You can prove theorems about the properties of [Q-learning](https://en.wikipedia.org/wiki/Q-learning) or [Kalman filters](https://en.wikipedia.org/wiki/Kalman_filter) at a world-class level without encountering anything that forces you to question whether Jesus Christ died for our sins. But [beyond technical mastery of one's narrow specialty](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory), there's going to be some competence threshold in ["seeing the correspondence of mathematical structures to What Happens in the Real World"](https://www.lesswrong.com/posts/sizjfDgCgAsuLJQmm/reply-to-holden-on-tool-ai) that _forces_ correct conclusions. I actually _don't_ think you can be a believing Christian and invent [the concern about consequentialists embedded in the Solomonoff prior](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/).
-
-But the _same_ general parsimony-skill that rejects belief in an epiphenomenal ["God of the gaps"](https://en.wikipedia.org/wiki/God_of_the_gaps) that is verbally asserted to exist but will never face the threat of being empirically falsified, _also_ rejects belief in an epiphenomenal "gender of the gaps" that is verbally asserted to exist but will never face the threat of being empirically falsified.
-
-In a world where sexual dimorphism didn't exist, where everyone was a hermaphrodite, then "gender" wouldn't exist, either.
-
-In a world where we _actually had_ magical perfect sex-change technology of the kind described in "Changing Emotions", then people who wanted to change sex would do so, and everyone else would use the corresponding language (pronouns and more), _not_ as a courtesy, _not_ to maximize social welfare, but because it _straightforwardly described reality_.
-
-In a world where we don't _have_ magical perfect sex-change technology, but we _do_ have hormone replacement therapy and various surgical methods, you actually end up with _four_ clusters: females (F), males (M), masculinized females a.k.a. trans men (FtM), and feminized males a.k.a. trans women (MtF). I _don't_ have a "clean" philosophical answer as to in what contexts one should prefer to use a {F, MtF}/{M, FtM} category system (treating trans people as their social gender) rather than a {F, FtM}/{M, MtF} system (considering trans people as their [developmental sex](/2019/Sep/terminology-proposal-developmental-sex/)), because that's a complicated semi-empirical, semi-value question about which aspects of reality are most relevant to what you're trying to think about in that context. But I do need _the language with which to write this paragraph_, which is about _modeling reality_, and not about marginalization or respect.
-
-Something I have trouble reliably communicating about what I'm trying to do with this blog is that "I don't do policy." Almost everything I write is _at least_ one meta level up from any actual decisions. I'm _not_ trying to tell other people in detail how they should live their lives, because obviously I'm not smart enough to do that and get the right answer. I'm _not_ telling anyone to detransition. I'm _not_ trying to set government policy about locker rooms or medical treatments.
-
-I'm trying to _get the theory right_. My main victory condition is getting the two-type taxonomy (or whatever more precise theory supplants it) into the _standard_ sex ed textbooks. If you understand the nature of the underlying psychological condition _first_, then people can make a sensible decision about what to _do_ about it. Accurate beliefs should inform policy, rather than policy determining what beliefs are politically acceptable.
-
-It worked once, right?
-
-(Picture me playing Hermione Granger in a post-Singularity [holonovel](https://memory-alpha.fandom.com/wiki/Holo-novel_program) adaptation of _Harry Potter and the Methods of Rationality_ (Emma Watson having charged me [the standard licensing fee](/2019/Dec/comp/) to use a copy of her body for the occasion): "[We can do anything if we](https://www.hpmor.com/chapter/30) exert arbitrarily large amounts of [interpretive labor](https://acesounderglass.com/2015/06/09/interpretive-labor/)!")
-
-> An extreme case in point of "handwringing about the Overton Window in fact constituted the Overton Window's implementation"
OK, now apply that to your Kolmogorov cowardice
-https://twitter.com/ESYudkowsky/status/1373004525481598978
-
-The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2021 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialism that counts unpopularity or hurt feelings as costs), also [can't get heliocentrism right in 1633](https://en.wikipedia.org/wiki/Galileo_affair) [_for the same reason_](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to)—and I really doubt it can get AI alignment theory right in 2041.
-
-Or at least—even if there are things we can't talk about in public for consequentialist reasons and there's nothing to be done about it, you would hope that the censorship wouldn't distort our beliefs about the things we _can_ talk about—like, say, the role of Bayesian reasoning in the philosophy of language. Yudkowsky had written about the [dark side epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) of [contagious lies](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies): trying to protect a false belief doesn't just mean being wrong about that one thing, it also gives you, on the object level, an incentive to be wrong about anything that would _imply_ the falsity of the protected belief—and, on the meta level, an incentive to be wrong _about epistemology itself_, about how "implying" and "falsity" work.
-
-
-https://www.lesswrong.com/posts/ASpGaS3HGEQCbJbjS/eliezer-s-sequences-and-mainstream-academia?commentId=6GD86zE5ucqigErXX
-> The actual real-world consequences of a post like this when people actually read it are what bothers me, and it does feel frustrating because those consequences seem very predictable
-(!!)
-
-http://www.hpmor.com/chapter/47
-https://www.hpmor.com/chapter/97
-> one technique was to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited.