Title: A Hill of Validity in Defense of Meaning
Date: 2021-02-15 11:00
Category: commentary
Tags: autogynephilia, bullet-biting, cathartic, Eliezer Yudkowsky, Scott Alexander, epistemic horror, my robot cult, personal, sex differences, Star Trek, Julia Serano, two-type taxonomy
Status: draft

> If you are silent about your pain, they'll kill you and say you enjoyed it.
>
> —Zora Neale Hurston

So, a striking thing about my series of increasingly frustrating private conversations and subsequent public Facebook meltdown (the stress from which soon landed me in psychiatric jail, but that's [another](/2017/Mar/fresh-princess/) [story](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/)) was the tendency for some threads of conversation to get _derailed_ on some variation of, "Well, the word _woman_ doesn't necessarily mean that," often with a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), a 2014 post by Scott Alexander, the _second_ most prominent writer in our robot cult.

So, this _really_ wasn't what I was trying to talk about; _I_ thought I was trying to talk about autogynephilia as an _empirical_ theory in psychology, the truth or falsity of which obviously cannot be altered by changing the meanings of words. Psychology is a complicated empirical science: no matter how "obvious" I might think something is, I have to admit that I could be wrong—not just as a formal profession of modesty, but _actually_ wrong in the real world.

But this "I can define the word _woman_ any way I want" mind game? _That_ part was _absolutely_ clear-cut. That part of the argument, I knew I could win. [We had a whole Sequence about this](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) back in 'aught-eight, in which Yudkowsky pounded home this _exact_ point _over and over and over again_, that word and category definitions are _not_ arbitrary, because there are criteria that make some definitions _perform better_ than others as "cognitive technology"—

> ["It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)
> ["So that's another reason you can't 'define a word any way you like': You can't directly program concepts into someone else's brain."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)

> ["When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)

> ["There's an idea, which you may have noticed I hate, that 'you can define a word any way you like'."](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)

> ["And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying 'I can define a word any way I like'."](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression)

> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences)

> ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations)

> ["And this suggests another—yes, yet another—reason to be suspicious of the claim that 'you can define a word any way you like'. When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power."](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words)

> ["I say all this, because the idea that 'You can X any way you like' is a huge obstacle to learning how to X wisely. 'It's a free country; I have a right to my own opinion' obstructs the art of finding truth. 'I can define a word any way I like' obstructs the art of carving reality at its joints. And even the sensible-sounding 'The labels we attach to words are arbitrary' obstructs awareness of compactness."](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes)

> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace)

(Contrast "... Not Man for the Categories" with ["Against Lie Inflation"](https://slatestarcodex.com/2019/07/16/against-lie-inflation/): when the topic at hand is how to define "lying", Scott Alexander has written exhaustively about the dangers of strategic equivocation ("Worst Argument", "Brick in the Motte"); insofar as I can get a _coherent_ position out of the conjunction of "... Not Man for the Categories" and Alexander's other work, it's that he must think strategic equivocation is OK if it's for being nice to people.)

So, because I trusted people in my robot cult to be dealing in good faith rather than fucking with me because of their political incentives, I took the bait. I ended up spending three years of my life re-explaining the relevant philosophy-of-language issues in exhaustive, _exhaustive_ detail.

At first I did this in the object-level context of gender on this blog, in ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) and ["Reply on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/). And that would have been the end of the philosophy-of-language track specifically ...

Later, after [Eliezer Yudkowsky joined in the mind games on Twitter in November 2018](https://twitter.com/ESYudkowsky/status/1067183500216811521) [(archived)](https://archive.is/ChqYX), I _flipped the fuck out_, and ended up doing more [strictly abstract philosophy-of-language work](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [on](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests) [the](https://www.lesswrong.com/posts/fmA2GJwZzYtkrAKYJ/algorithms-of-deception) [robot](https://www.lesswrong.com/posts/4hLcbXaqudM9wSeor/philosophy-in-the-darkest-timeline-basics-of-the-evolution)-[cult](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist) [blog](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception).

An important thing to appreciate is that the philosophical point I was trying to make has _absolutely nothing to do with gender_. In 2008, Yudkowsky had explained that _for all_ nouns _N_, you can't define _N_ any way you want, because _useful_ definitions need to "carve reality at the joints." It [_follows logically_](https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization) that, in particular, if _N_ := "woman", you can't define the word _woman_ any way you want. Maybe trans women _are_ women! But if so—that is, if you want people to agree to that word usage—you need to be able to _argue_ for why that usage makes sense on the empirical merits; you can't just _define_ it to be true. And this is a _general_ principle of how language works, not something I made up on the spot in order to attack trans people.

In 2008, this very general philosophy-of-language lesson was _not politically controversial_. If, in 2018–present, it _is_ politically controversial (specifically because of the fear that someone will try to apply it with _N_ := "woman"), that's a _problem_ for our whole systematically-correct-reasoning project! What counts as good philosophy—or even good philosophy _pedagogy_—shouldn't depend on the current year!

There is a _sense in which_ one might say that you "can" define a word any way you want. That is: words don't have intrinsic ontologically-basic meanings. We can imagine an alternative world where people spoke a language that was _like_ the English of our world, except that they used the word "tree" to refer to members of the empirical entity-cluster that we call "dogs" and _vice versa_, and it's hard to think of a meaningful sense in which one convention is "right" and the other is "wrong".
But there's also an important _sense in which_ we want to say that you "can't" define a word any way you want. That is: some ways of using words work better for transmitting information from one place to another. It would be harder to explain your observations from a trip to the local park in a language that used the word "tree" to refer to members of _either_ of the empirical entity-clusters that the English of our world calls "dogs" and "trees", because grouping together things that aren't relevantly similar like that makes it harder to describe differences between the wagging-animal-trees and the leafy-plant-trees.
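To put a number on "work better for transmitting information": here is a minimal sketch, with toy probabilities of my own invention (the function and the numbers are mine, not anything from the Sequences posts linked above), computing how many bits hearing a word gives you about what you're about to observe under each convention.

```python
from math import log2

def mutual_information(joint):
    """I(word; thing) = sum over the joint table of p * log2(p / (p_word * p_thing))."""
    p_word = {w: sum(joint[w].values()) for w in joint}
    p_thing = {}
    for w in joint:
        for t, p in joint[w].items():
            p_thing[t] = p_thing.get(t, 0) + p
    return sum(
        p * log2(p / (p_word[w] * p_thing[t]))
        for w in joint for t, p in joint[w].items() if p > 0
    )

# Suppose half the things you meet at the park are dogs and half are trees.
# Convention A carves at the joints: each word tracks one cluster.
carving = {"'dog'": {"dog": 0.5}, "'tree'": {"tree": 0.5}}

# Convention B lumps both clusters under the single word "tree".
lumping = {"'tree'": {"dog": 0.5, "tree": 0.5}}

print(mutual_information(carving))  # 1.0 bit: the word predicts what you'll see
print(mutual_information(lumping))  # 0.0 bits: the word licenses no prediction
```

Under the lumped convention, hearing "tree" leaves you exactly as uncertain as you were before; the label does no predictive work. That zero is the cash value of failing to carve reality at the joints.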
If you want to teach people about the philosophy of language, you should want to convey _both_ of these lessons: against naïve essentialism, _and_ against naïve anti-essentialism. If the people who are widely respected and trusted [(almost worshipped)](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) as the leaders of the systematically-correct-reasoning community [_selectively_](https://www.lesswrong.com/posts/AdYdLP2sRqPMoe8fb/knowing-about-biases-can-hurt-people) teach _only_ the words-don't-have-intrinsic-ontologically-basic-meanings part when the topic at hand happens to be trans issues (because talking about the carve-reality-at-the-joints part would be [politically suicidal](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting)), then people who trust the leaders are likely to get the wrong idea about how the philosophy of language works—even if [the selective argumentation isn't _conscious_ or deliberative](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie) and [even if every individual sentence they say permits a true interpretation](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly).

(As it is written of the fourth virtue of evenness, ["If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider."](https://www.yudkowsky.net/rational/virtues))

_Was_ it a "political" act for me to write about the cognitive function of categorization on the robot-cult blog with non-gender examples, when gender was secretly ("secretly") my _motivating_ example? In some sense, yes, but the thing you have to realize is—

_Everyone else shot first_. The timestamps back me up here: my ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (February 2018) was a _response to_ Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (November 2014). My philosophy-of-language work on the robot-cult blog (April 2019–January 2021) was (stealthily) _in response to_ Yudkowsky's November 2018 Twitter thread. When I started trying to talk about autogynephilia with all my robot cult friends in 2016, I _did not expect_ to get dragged into a multi-year philosophy-of-language crusade! That was just _one branch_ of the argument-tree that, once begun, I thought should be easy to _definitively settle in public_ (within our robot cult, whatever the _general_ public thinks).

I guess by now the branch is as close to settled as it's going to get? Alexander ended up [adding an edit note to the end of "... Not Man for the Categories" in December 2019](https://archive.is/1a4zV#selection-805.0-817.1), and Yudkowsky would go on to clarify his position on the philosophy of language in Facebook posts of [September 2020](https://www.facebook.com/yudkowsky/posts/10158853851009228) and [February 2021](https://www.facebook.com/yudkowsky/posts/10159421750419228). So, that's nice. (Though I expect that, even with the note, people will in practice keep citing "... Not Man for the Categories" in ways that don't register how the note undermines the post's central argument.)

But I will confess to being quite disappointed that the public argument-tree evaluation didn't get much further, much faster? The thing you have to understand about this whole debate is—

_I need the correct answer in order to decide whether or not to cut my dick off_.

As I've said, I _currently_ believe that cutting my dick off would be a _bad_ idea. But that's a cost–benefit judgement call based on many _contingent, empirical_ beliefs about the world. I'm obviously in the general _reference class_ of males who are getting their dicks cut off these days, and a lot of them seem to be pretty happy about it! I would be much more likely to go through with transitioning if I believed different things about the world—if I thought my beautiful pure sacred self-identity thing were a brain-intersex condition, or if I still believed in my teenage psychological-sex-differences denialism (such that there would be _axiomatically_ no worries about fitting in with "other" women after transitioning), or if I were more optimistic about the degree to which HRT and surgeries approximate an actual sex change.

In that November 2018 Twitter thread, [Yudkowsky wrote](https://archive.is/y5V9i):

> _Even if_ somebody went around saying, "I demand you call me 'she' and furthermore I claim to have two X chromosomes!", which none of my trans colleagues have ever said to me by the way, it still isn't a question-of-empirical-fact whether she should be called "she". It's an act.

This seems to suggest that gender pronouns in the English language as currently spoken don't have effective truth conditions. I think this is false _as a matter of cognitive science_. If someone told you, "Hey, you should come meet my friend at the mall, she is really cool and I think you'll like her," and then the friend turned out to look like me (as I am now), _you would be surprised_. (Even if people in Berkeley would socially punish you for _admitting_ that you were surprised.) The "she ... her" pronouns would prompt your brain to _predict_ that the friend would appear to be female, and that prediction would be _falsified_ by someone who looked like me (as I am now).

Pretending that the social-norms dispute is about chromosomes was a [weakmanning](https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/) move on the part of Yudkowsky, [who had once written that](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence) "[t]o argue against an idea honestly, you should argue against the best arguments of the strongest advocates[;] [a]rguing against weaker advocates proves _nothing_, because even the strongest idea will attract weak advocates." Thanks to the skills I learned from Yudkowsky's _earlier_ writing, I didn't fall for it, but we can imagine someone otherwise similar to me, without that training, who might thereby have been misled into making worse life decisions.
If this "rationality" stuff is useful for _anything at all_, you would _expect_ it to be useful for _practical life decisions_ like _whether or not I should cut my dick off_. In order to get the _right answer_ to that policy question (whatever the right answer turns out to be), you need to _at minimum_ be able to get the _right answer_ on related fact-questions like "Is late-onset gender dysphoria in males an intersex condition?" (answer: no) and related philosophy-questions like "Can we arbitrarily redefine words such as 'woman' without adverse effects on our cognition?" (answer: no).

At the cost of _wasting three years of my life_, we _did_ manage to get the philosophy question mostly right! Again, that's nice. But compared to the [Sequences-era dreams of changing the world](https://www.lesswrong.com/posts/YdcF6WbBmJhaaDqoD/the-craft-and-the-community), it's too little, too slow, too late. If our public discourse is going to be this aggressively optimized for _tricking me into cutting my dick off_ (independently of the empirical cost–benefit trade-off determining whether or not I should cut my dick off), that kills the whole project for me. I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?

Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?" But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_.

If you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. Obviously, the safety of the world does not _directly_ depend on being able to think clearly about trans issues. Similarly, the safety of a coal mine for humans does not _directly_ depend on [whether it's safe for canaries](https://en.wiktionary.org/wiki/canary_in_a_coal_mine): the dead canaries are just _evidence about_ properties of the mine relevant to human health. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".)
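For anyone who wants the screening-off property of that fork spelled out, here's a quick simulation with made-up conditional probabilities (all the numbers are illustrative, my own choices): the canary's death is genuine evidence about the danger to humans, but conditional on the state of the mine-gas, it tells you nothing further.

```python
import random

random.seed(0)

def sample():
    # Fork structure: canary-death <- mine-gas -> human-danger
    gas = random.random() < 0.3
    canary_dies = random.random() < (0.9 if gas else 0.05)
    human_danger = random.random() < (0.8 if gas else 0.02)
    return gas, canary_dies, human_danger

trials = [sample() for _ in range(200_000)]

def p_danger(given):
    """Estimate P(human_danger | condition on gas and canary)."""
    relevant = [danger for gas, dead, danger in trials if given(gas, dead)]
    return sum(relevant) / len(relevant)

# The canary is evidence about the danger...
print(p_danger(lambda gas, dead: dead))        # ~0.71, vs. base rate ~0.25
print(p_danger(lambda gas, dead: not dead))    # ~0.05
# ...but conditioning on the common cause screens it off:
print(p_danger(lambda gas, dead: gas and dead))      # ~0.80
print(p_danger(lambda gas, dead: gas and not dead))  # ~0.80
```

The last two estimates agree to within sampling noise, which is the fork's signature: once you know about the gas, the canary is an instrument you no longer need, not the thing being protected.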
If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where "the community" _qua_ community doesn't exist at all.

In ["The Ideology Is Not the Movement"](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/) (April 2016), Alexander describes how the content of subcultures typically departs from the ideological "rallying flag" that they formed around. [Sunni and Shia Islam](https://en.wikipedia.org/wiki/Shia%E2%80%93Sunni_relations) originally, ostensibly diverged on the question of who should rightfully succeed Muhammad as caliph, but modern-day Sunni and Shia who hate each other's guts aren't actually re-litigating a succession dispute from the 7th century C.E. Rather, pre-existing divergent social-group tendencies crystallized into distinct tribes by latching on to the succession dispute as a [simple membership test](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests). Alexander jokingly identifies the defining feature of our robot cult as the belief that "Eliezer Yudkowsky is the rightful caliph": the Sequences were a rallying flag that brought together a lot of like-minded people to form a subculture with its own ethos and norms—among which Alexander includes "don't misgender trans people"—but the subculture emerged as its own entity that isn't necessarily _about_ anything outside itself.

No one seemed to notice at the time, but this characterization of our movement [is actually a _declaration of failure_](https://sinceriously.fyi/cached-answers/#comment-794). There's a word, "rationalist", that I've been trying to avoid in this post, because it's the subject of so much strategic equivocation, where the motte is "anyone who studies the ideal of systematically correct reasoning, general methods of thought that result in true beliefs and successful plans", and the bailey is "members of our social scene centered around Eliezer Yudkowsky and Scott Alexander". (Since I don't think we deserve the "rationalist" brand name, I had to choose something else to refer to [the social scene](https://srconstantin.github.io/2017/08/08/the-craft-is-not-the-community.html). Hence, "robot cult.")

What I would have _hoped_ for from a systematically correct reasoning community worthy of the brand name is one goddamned place in the whole goddamned world where _good arguments_ would propagate through the population no matter where they arose, "guided by the beauty of our weapons" ([following Scott Alexander](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/) [following Leonard Cohen](https://genius.com/1576578)). Instead, I think what actually happens is that people like Yudkowsky and Alexander rise to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else sort-of absorbs most of their worldview (plus noise and conformity with the local environment)—with the result that if Yudkowsky and Alexander _aren't interested in getting the right answer_ (in public)—because getting the right answer in public would be politically suicidal—then there's no way for anyone who didn't [win the talent lottery](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) to fix the public understanding by making better arguments.

It makes sense for public figures to not want to commit political suicide! Even so, it's a _problem_ if public figures whose brand is premised on the ideal of _systematically correct reasoning_ end up drawing attention and resources into a subculture that's optimized for tricking men into cutting their dick off on false pretenses.
(Although note that Alexander has [specifically disclaimed aspirations or pretensions to being a "rationalist" authority figure](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/); that fate befell him without his consent because he's just too good and prolific a writer compared to everyone else.)

I'm not optimistic about the problem being fixable, either. Our robot cult _already_ gets a lot of shit from progressive-minded people for being "right-wing"—not because we are in any _useful_, non-gerrymandered sense, but because [attempts to achieve the map that reflects the territory are going to run afoul of ideological taboos for almost any ideology](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting). Because of the particular historical moment in which we live, we end up facing pressure from progressives, because—whatever our _object-level_ beliefs about (say) [sex, race, and class differences](/2020/Apr/book-review-human-diversity/)—and however much many of us would prefer not to talk about them—on the _meta_ level, our creed requires us to admit _it's an empirical question_, not a moral one—and that [empirical questions have no privileged reason to admit convenient answers](https://www.lesswrong.com/posts/sYgv4eYH82JEsTD34/beyond-the-reach-of-god).

I view this conflict as entirely incidental, something that [would happen in some form in any place and time](https://www.lesswrong.com/posts/cKrgy7hLdszkse2pq/archimedes-s-chronophone), rather than having to do with American politics or "the left" in particular. In a Christian theocracy, our analogues would get in trouble for beliefs about evolution; in the old Soviet Union, our analogues would get in trouble for [thinking about market economics](https://slatestarcodex.com/2014/09/24/book-review-red-plenty/) (as a [positive technical discipline](https://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics#Proof_of_the_first_fundamental_theorem) adjacent to game theory, not yoked to a particular normative agenda).

Incidental or not, the conflict is real, and everyone smart knows it—even if it's not easy to _prove_ that everyone smart knows it, because everyone smart is very careful what they say in public. (I am not smart.) Scott Aaronson wrote of [the Kolmogorov Option](https://www.scottaaronson.com/blog/?p=3376) (which Alexander aptly renamed [Kolmogorov complicity](https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/)): serve the cause of Truth by cultivating a bubble that focuses on truths that won't get you in trouble with the local political authorities. The strategy is named after the Soviet mathematician Andrey Kolmogorov, who _knew better than to pick fights he couldn't win_.

Because of the conflict, and because all the prominent high-status people are running a Kolmogorov Option strategy, and because we happen to have a _wildly_ disproportionate number of _people like me_ around, I think being "pro-trans" ended up being part of the community's "shield" against external political pressure, of the sort that perked up after [the February 2021 _New York Times_ hit piece about Alexander's blog](https://archive.is/0Ghdl). (The _magnitude_ of heat brought on by the recent _Times_ piece and its aftermath was new, but the underlying dynamics had been present for years.)
Jacob Falkovich notes, ["The two demographics most over-represented in the SlateStarCodex readership according to the surveys are transgender people and Ph.D. holders."](https://twitter.com/yashkaf/status/1275524303430262790) [Aaronson notes (in commentary on the _Times_ article)](https://www.scottaaronson.com/blog/?p=5310) "the rationalist community's legendary openness to alternative gender identities and sexualities" as something that would have "complicated the picture" of our portrayal as anti-feminist.

Even the _haters_ grudgingly give Alexander credit for "... Not Man for the Categories": ["I strongly disagree that one good article about accepting transness means you get to walk away from writing that is somewhat white supremacist and quite fascist without at least awknowledging you were wrong."](https://archive.is/SlJo1)

Given these political realities, you'd think that I _should_ be sympathetic to the Kolmogorov Option argument, which makes a lot of sense. _Of course_ all the high-status people with a public-facing mission (like building a movement to prevent the coming robot apocalypse) are going to be motivatedly dumb about trans stuff in public: look at all the damage [the _other_ Harry Potter author did to her legacy](https://en.wikipedia.org/wiki/Politics_of_J._K._Rowling#Transgender_people).

And, historically, it would have been harder for the robot cult to recruit _me_ (or those like me) back in the 'aughts if they had been less politically correct. Recall that I was already somewhat turned off, then, by what I thought of as _sexism_; I stayed because the philosophy-of-science blogging was _way too good_. But what that means on the margin is that someone otherwise like me, except more orthodox or less philosophical, _would_ have bounced. If [Cthulhu has swum left](https://www.unqualified-reservations.org/2009/01/gentle-introduction-to-unqualified/) over the intervening thirteen years, then maintaining the same map-revealing/not-alienating-orthodox-recruits tradeoff _relative_ to the general population necessitates relinquishing parts of the shared map that have fallen out of general favor.

Ultimately, if the people with influence over the trajectory of the systematically correct reasoning "community" aren't interested in getting the right answers in public, then I think we need to give up on the idea of there _being_ a "community", which, you know, might have been a dumb idea to begin with. No one owns _reasoning itself_. Yudkowsky had written in March 2009 that rationality is the ["common interest of many causes"](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes): that proponents of causes-that-benefit-from-better-reasoning like atheism or marijuana legalization or existential-risk-reduction might perceive a shared interest in cooperating to [raise the sanity waterline](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline). But to do that, they need to not try to capture all the value they create: some of the resources you invest in teaching rationality are going to flow to someone else's cause, and you need to be okay with that.

But Alexander's ["Kolmogorov Complicity"](https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) (October 2017) seems to suggest a starkly different moral: that "rationalist"-favored causes might not _want_ to associate with others that have worse optics.
Atheists and marijuana legalization proponents and existential-risk-reducers probably don't want any of the value they create to flow to neoreactionaries and race realists and autogynephilia truthers, if video of the flow will be used to drag their own names through the mud.

[_My_ Something to Protect](/2019/Jul/the-source-of-our-power/) requires me to take the [Leeroy Jenkins](https://en.wikipedia.org/wiki/Leeroy_Jenkins) Option. (As typified by Justin Murphy: ["Say whatever you believe to be true, in uncalculating fashion, in whatever language you really think and speak with, to everyone who will listen."](https://otherlife.co/respectability-is-not-worth-it-reply-to-slatestarcodex/)) I'm eager to cooperate with people facing different constraints who are stuck with a Kolmogorov Option strategy, as long as they don't _fuck with me_. But I construe encouragement of the conflation of "rationality"-as-a-community with the _subject matter_ of systematically correct reasoning as a form of fucking with me: it's a _problem_ if all our beautiful propaganda about the methods of seeking Truth doubles as propaganda for joining a robot cult whose culture is heavily optimized for tricking men like me into cutting their dicks off.

Someone asked me: "If we randomized half the people at [OpenAI](https://openai.com/) to use trans pronouns one way, and the other half to use them the other way, do you think they would end up with significantly different productivity?"

But the thing I'm objecting to is a lot more fundamental than the specific choice of pronoun convention, which obviously isn't going to be uniquely determined. Turkish doesn't have gendered pronouns, and that's fine. Naval ships traditionally take feminine pronouns in English, and it doesn't confuse anyone into thinking boats have a womb. [Many other languages are much more gendered than English](https://en.wikipedia.org/wiki/Grammatical_gender#Distribution_of_gender_in_the_world's_languages) (where pretty much only third-person singular pronouns are at issue). The conventions used in one's native language probably _do_ [color one's thinking to some extent](/2020/Dec/crossing-the-line/)—but when it comes to that, I have no reason to expect the overall design of English grammar and vocabulary "got it right" where Spanish or Arabic "got it wrong."

What matters isn't the specific object-level choice of pronoun or bathroom conventions; what matters is having a culture where people _viscerally care_ about minimizing the expected squared error of our probabilistic predictions, even at the expense of people's feelings—[_especially_ at the expense of people's feelings](http://zackmdavis.net/blog/2016/09/bayesomasochism/).
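(Minimizing expected squared error isn't just a rhetorical flourish, either: squared error is a proper scoring rule, which means your expected penalty is smallest exactly when the probability you report is the probability you actually believe. A toy calculation, with illustrative numbers of my own choosing:)

```python
# Expected squared error of reporting probability q when the event's
# true frequency is p: E[(q - outcome)^2] = p*(1 - q)^2 + (1 - p)*q^2.
def expected_squared_error(q, p):
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p_true = 0.96  # what you actually believe

for q in (0.5, 0.8, 0.96):
    print(q, round(expected_squared_error(q, p_true), 4))
# 0.5  -> 0.25
# 0.8  -> 0.064
# 0.96 -> 0.0384  (minimized exactly at q = p: honest reports score best)
```

A culture that grades itself this way can't give politeness a discount: reporting 0.5 when you believe 0.96 is measurably, predictably worse.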
I think looking at [our standard punching bag of theism](https://www.lesswrong.com/posts/dLL6yzZ3WKn8KaSC3/the-uniquely-awful-example-of-theism) is a very fair comparison. Religious people aren't _stupid_. You can prove theorems about the properties of [Q-learning](https://en.wikipedia.org/wiki/Q-learning) or [Kalman filters](https://en.wikipedia.org/wiki/Kalman_filter) at a world-class level without encountering anything that forces you to question whether Jesus Christ died for our sins. But [beyond technical mastery of one's narrow specialty](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory), there's going to be some competence threshold in ["seeing the correspondence of mathematical structures to What Happens in the Real World"](https://www.lesswrong.com/posts/sizjfDgCgAsuLJQmm/reply-to-holden-on-tool-ai) that _forces_ correct conclusions. I actually _don't_ think you can be a believing Christian and invent [the concern about consequentialists embedded in the Solomonoff prior](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/).

But the _same_ general parsimony-skill that rejects belief in an epiphenomenal ["God of the gaps"](https://en.wikipedia.org/wiki/God_of_the_gaps) that is verbally asserted to exist but will never face the threat of being empirically falsified, _also_ rejects belief in an epiphenomenal "gender of the gaps" that is verbally asserted to exist but will never face the threat of being empirically falsified.

In a world where sexual dimorphism didn't exist, where everyone was a hermaphrodite, "gender" wouldn't exist, either.

In a world where we _actually had_ magical perfect sex-change technology of the kind described in "Changing Emotions", people who wanted to change sex would do so, and everyone else would use the corresponding language (pronouns and more), _not_ as a courtesy, _not_ to maximize social welfare, but because it _straightforwardly described reality_.

In a world where we don't _have_ magical perfect sex-change technology, but we _do_ have hormone replacement therapy and various surgical methods, you actually end up with _four_ clusters: females (F), males (M), masculinized females a.k.a. trans men (FtM), and feminized males a.k.a. trans women (MtF). I _don't_ have a "clean" philosophical answer as to in what contexts one should prefer to use a {F, MtF}/{M, FtM} category system (treating trans people as their social gender) rather than a {F, FtM}/{M, MtF} system (considering trans people as their [developmental sex](/2019/Sep/terminology-proposal-developmental-sex/)), because that's a complicated semi-empirical, semi-value question about which aspects of reality are most relevant to what you're trying to think about in that context. But I do need _the language with which to write this paragraph_, which is about _modeling reality_, and not about marginalization or respect.

Something I have trouble reliably communicating about what I'm trying to do with this blog is that "I don't do policy." Almost everything I write is _at least_ one meta level up from any actual decisions. I'm _not_ trying to tell other people in detail how they should live their lives, because obviously I'm not smart enough to do that and get the right answer. I'm _not_ telling anyone to detransition. I'm _not_ trying to set government policy about locker rooms or medical treatments. I'm trying to _get the theory right_. My main victory condition is getting the two-type taxonomy (or whatever more precise theory supplants it) into the _standard_ sex ed textbooks. If you understand the nature of the underlying psychological condition _first_, then people can make a sensible decision about what to _do_ about it. Accurate beliefs should inform policy, rather than policy determining what beliefs are politically acceptable. My enemy is this _culture of narcissistic Orwellian mind games_ that thinks people have the right to _dictate other people's model of reality_.
I don't know what the _right_ culture is, but I'm pretty sure that _this ain't it, chief_.

Some trans woman on Twitter posted an anecdote complaining that the receptionist at her place of work compared her to a male celebrity. "I look like this today [photo]; how could anyone think that was a remotely acceptable thing to say?"

It _is_ genuinely sad that the author of those Tweets didn't get perceived the way she would prefer! But the thing I want her to understand is—

_It was a compliment!_ That poor receptionist was almost certainly thinking of [David Bowie](https://en.wikipedia.org/wiki/David_Bowie) or [Eddie Izzard](https://en.wikipedia.org/wiki/Eddie_Izzard), rather than being hateful and trying to hurt. People can recognize sex from facial structure at 96% accuracy, remember? I want a shared cultural understanding that the _correct_ way to ameliorate the genuine sadness of people not being perceived the way they prefer is through things like _better and cheaper facial feminization surgery_, not _emotionally blackmailing people out of their ability to report what they see_.

In a world where surgery is expensive, but people desperately want to change sex, there's an incentive gradient in the direction of re-engineering the culture to bind our shared concept of "gender" onto things like [ornamental clothing](http://thetranswidow.com/2021/02/18/womens-clothing-is-always-drag-even-on-women/) that are easier to change than secondary sex characteristics. But [_the utility function is not up for grabs._](https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality) I don't _want_ to relinquish my ability to notice what women's faces look like, even if that means noticing that mine isn't one of them, even if that seems vaguely disappointing due to an idiosyncrasy in my psychosexual development; I don't want people to have to _doublethink around their perceptions of me_.

If I sound angry, it's because I actually _do_ feel a lot of anger, but I wish I knew how to more reliably convey its target. Some trans people who see my writing tend to assume I'm self-hating, suffering from false consciousness, that my pious appeals to objectivity and reason are [just a facade](https://sinceriously.fyi/false-faces/) concealing my collaboration with a cissexist social order, that I'm in cowardly thrall to scapegoating instincts: "I'm one of the good, compliant ones—not one of those weird bad trans people who will demand their rights! _They're_ the witches, not me; burn them, not me!"

I have [no grounds to fault anyone for not taking my self-report as unquestionable](/2016/Sep/psychology-is-about-invalidating-peoples-identities/)—the urge to scapegoat and submit to the dominant player is definitely a thing—but I really think this is reading me wrong? I'm not at war with trans _people_: open, creative people who are just like me (I want to believe that even the natal females are "just like me" in some relevant abstract sense), but who read different books in a different order. I'm at war with [an _ideology_ that is adapted to appeal to people just like me](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) and commit us to remaking our lives around a set of philosophical and empirical claims that I think are _false_. Maybe that's not particularly reassuring, if people tend to identify with their ideology? (As I used to—as I _still_ do, even if my [revised ideology is much more meta](http://zackmdavis.net/blog/2017/03/dreaming-of-political-bayescraft/).)
When the prototypical Christian says "Hate the sin, love the sinner", does anyone actually buy it? But what else can I do? We're living in the midst of a pivotal ideological transition. (Is it still the midst, or am I too late?)

Autogynephilia, as a phenomenon, is _absurdly common_ relative to the amount of cultural awareness of it _as_ a phenomenon. ([An analogy someone made on /r/GenderCriticalGuys just before it got banned](https://web.archive.org/web/20200705203105if_/https://reddit.com/r/GenderCriticalGuys/comments/hhcs34/autogynephilic_male_here_big_rant_about_denial_of/): imagine living in a Society where people _were_ gay at the same rates as in our own, but the _concept_ of homosexuality didn't exist—and was [actively suppressed whenever someone tried to point it out](/2017/Jan/if-the-gay-community-were-like-the-trans-community/).) Surveys of college students found that 13% (Table 3 in [Person _et al._](/papers/person_et_al-gender_differences_in_sexual_behaviors.pdf)) or 5.6% (Table 5 in the replication [Hsu _et al._](/papers/hsu_et_al-gender_differences_in_sexual_fantasy.pdf)) of males have fantasized about being the opposite sex in the last 3 months.

What happens when every sensitive bookish male who thinks [it might be cool to be a woman](https://xkcd.com/535/) gets subjected to an aggressive recruitment campaign insisting that the scintillating thought is _literally true_, simply because he thought it? (Not just that it could _become_ true _in a sense_, depending on the success of medical and social interventions, and depending on what sex/gender concept definition makes sense to use in a given context.) What kind of Society is that to live in?

[I have seen the destiny of my neurotype, and am putting forth a convulsive effort to wrench it off its path. My weapon is clear writing.](https://www.lesswrong.com/posts/i8q4vXestDkGTFwsc/human-evil-and-muddled-thinking)

Maybe the rest of my robot cult (including the founders and leaders) have given up on trying to tell the truth, but _I_ haven't. If I just keep blogging careful explanations of my thinking, eventually it might make some sort of impact—a small corrective tug on the madness of the _Zeitgeist_. It worked once, right?

(Picture me playing Hermione Granger in a post-Singularity [holonovel](https://memory-alpha.fandom.com/wiki/Holo-novel_program) adaptation of _Harry Potter and the Methods of Rationality_ (Emma Watson having charged me [the standard licensing fee](/2019/Dec/comp/) to use a copy of her body for the occasion): "[We can do anything if we](https://www.hpmor.com/chapter/30) exert arbitrarily large amounts of [interpretive labor](https://acesounderglass.com/2015/06/09/interpretive-labor/)!")

> An extreme case in point of "handwringing about the Overton Window in fact constituted the Overton Window's implementation"

OK, now apply that to your Kolmogorov cowardice

https://twitter.com/ESYudkowsky/status/1373004525481598978

https://www.lesswrong.com/posts/ASpGaS3HGEQCbJbjS/eliezer-s-sequences-and-mainstream-academia?commentId=6GD86zE5ucqigErXX

> The actual real-world consequences of a post like this when people actually read it are what bothers me, and it does feel frustrating because those consequences seem very predictable (!!)

http://www.hpmor.com/chapter/47

https://www.hpmor.com/chapter/97

> one technique was to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited.