My first clue that I wasn't living in that world came from—Eliezer Yudkowsky. (Well, not my first _clue_. In retrospect, there were lots of _clues_. My first wake-up call.) In [a 26 March 2016 Facebook post](https://www.facebook.com/yudkowsky/posts/10154078468809228), he wrote—

> I'm not sure if the following generalization extends to all genetic backgrounds and childhood nutritional backgrounds. There are various ongoing arguments about estrogenlike chemicals in the environment, and those may not be present in every country ...
>
> Still, for people roughly similar to the Bay Area / European mix, I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women.

(***!?!?!?!?***)

> A lot of them don't know it or wouldn't care, because they're female-minds-in-male-bodies but also cis-by-default (lots of women wouldn't be particularly disturbed if they had a male body; the ones we know as 'trans' are just the ones with unusually strong female gender identities). Or they don't know it because they haven't heard in detail what it feels like to be gender dysphoric, and haven't realized 'oh hey that's me'. See, e.g., and

(Reading _this_ post, I _did_ realize "oh hey that's me"—it's hard to believe that I'm not one of the "20% of the ones with penises" Yudkowsky is talking about here—but I wasn't sure how to reconcile that with the "are actually women" (***!?!?!?!?***) characterization, coming _specifically_ from the guy who taught me (in "Changing Emotions") how blatantly, ludicrously untrue and impossible that is.)

> But I'm kinda getting the impression that when you do normalize transgender generally and MtF particularly, like not "I support that in theory!" normalize but "Oh hey a few of my friends are transitioning and nothing bad happened to them", there's a _hell_ of a lot of people who come out as trans.
>
> If that starts to scale up, we might see a really, really interesting moral panic in 5-10 years or so. I mean, if you thought gay marriage was causing a moral panic, you just wait and see what comes next ...

Indeed—here we are five years later, and _I am panicking_. (As 2007–9 Sequences-era Yudkowsky [taught me](https://www.yudkowsky.net/other/fiction/the-sword-of-good), and 2016 Facebook-shitposting-era Yudkowsky seemed to ignore, the thing that makes a moral panic really interesting is how hard it is to know you're on the right side of it—and the importance of [panicking sideways](https://www.lesswrong.com/posts/erGipespbbzdG5zYb/the-third-alternative) [in policyspace](https://www.overcomingbias.com/2007/05/policy_tugowar.html) when the "maximize the number of trans people" and "minimize the number of trans people" coalitions are both wrong.)

At the time, this was merely _very confusing_. I left a careful comment in the Facebook thread (with the obligatory "speaking only for myself; I obviously know that I can't say anything about anyone else's experience" [disclaimer](https://www.overcomingbias.com/2008/06/against-disclai.html)), quietly puzzled at what Yudkowsky could _possibly_ be thinking ...

A month later, I moved out of my mom's house in [Walnut Creek](https://en.wikipedia.org/wiki/Walnut_Creek,_California) to go live with a new roommate in an apartment on the correct side of the [Caldecott tunnel](https://en.wikipedia.org/wiki/Caldecott_Tunnel), in [Berkeley](https://en.wikipedia.org/wiki/Berkeley,_California): closer to other people in the robot-cult scene and with a shorter train ride to my coding dayjob in San Francisco.
(I would later change my mind about which side of the tunnel is the correct one.) In Berkeley, I met a number of really interesting people who seemed quite similar to me along a lot of dimensions, but also very different along some other dimensions having to do with how they were currently living their life! (I see where the pattern-matching facilities in Yudkowsky's brain got that 20% figure from.) This prompted me to do a little bit more reading in some corners of the literature that I had certainly _heard of_, but hadn't already mastered and taken seriously in the previous twelve years of reading everything I could about sex and gender and transgender and feminism and evopsych. (Kay Brown's blog, [_On the Science of Changing Sex_](https://sillyolme.wordpress.com/), was especially helpful.) So, a striking thing about my series of increasingly frustrating private conversations and subsequent public Facebook meltdown (the stress from which soon landed me in psychiatric jail, but that's [another](/2017/Mar/fresh-princess/) [story](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/)) was the tendency for some threads of conversation to get _derailed_ on some variation of, "Well, the word _woman_ doesn't necessarily mean that," often with a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), a 2014 post by Scott Alexander, the _second_ most prominent writer in our robot cult. So, this _really_ wasn't what I was trying to talk about; _I_ thought I was trying to talk about autogynephilia as an _empirical_ theory in psychology, the truth or falsity of which obviously cannot be altered by changing the meanings of words. Psychology is a complicated empirical science: no matter how "obvious" I might think something is, I have to admit that I could be wrong—not just as a formal profession of modesty, but _actually_ wrong in the real world. But this "I can define the word _woman_ any way I want" mind game? _That_ part was _absolutely_ clear-cut. That part of the argument, I knew I could win. [We had a whole Sequence about this](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) back in 'aught-eight, in which Yudkowsky pounded home this _exact_ point _over and over and over again_, that word and category definitions are _not_ arbitrary, because there are criteria that make some definitions _perform better_ than others as "cognitive technology"— > ["It is a common misconception that you can define a word any way you like. [...] 
If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)

> ["So that's another reason you can't 'define a word any way you like': You can't directly program concepts into someone else's brain."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)

> ["When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)

> ["There's an idea, which you may have noticed I hate, that 'you can define a word any way you like'."](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)

> ["And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying 'I can define a word any way I like'."](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression)

> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences)

> ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations)

> ["And this suggests another—yes, yet another—reason to be suspicious of the claim that 'you can define a word any way you like'. When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power."](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words)

> ["I say all this, because the idea that 'You can X any way you like' is a huge obstacle to learning how to X wisely. 'It's a free country; I have a right to my own opinion' obstructs the art of finding truth. 'I can define a word any way I like' obstructs the art of carving reality at its joints. And even the sensible-sounding 'The labels we attach to words are arbitrary' obstructs awareness of compactness."](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes)

> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace)

[TODO: contrast "... Not Man for the Categories" to "Against Lie Inflation"; when the topic at hand is how to define "lying", Scott Alexander has written exhaustively about the dangers of strategic equivocation ("Worst Argument", "Brick in the Motte"); insofar as I can get a _coherent_ position out of the conjunction of "...
for the Categories" and Scott's other work, it's that he must think strategic equivocation is OK if it's for being nice to people https://slatestarcodex.com/2019/07/16/against-lie-inflation/ ] So, because I trusted people in my robot cult to be dealing in good faith rather than fucking with me because of their political incentives, I took the bait. I ended up spending three years of my life re-explaining the relevant philosophy-of-language issues in exhaustive, _exhaustive_ detail. At first I did this in the object-level context of gender on this blog, in ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), and the ["Reply on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/). And that would have been the end of the philosophy-of-language track specifically ... Later, after [Eliezer Yudkowsky joined in the mind games on Twitter in November 2018](https://twitter.com/ESYudkowsky/status/1067183500216811521) [(archived)](https://archive.is/ChqYX), I _flipped the fuck out_, and ended up doing more [stictly abstract philosophy-of-language work](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [on](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests) [the](https://www.lesswrong.com/posts/fmA2GJwZzYtkrAKYJ/algorithms-of-deception) [robot](https://www.lesswrong.com/posts/4hLcbXaqudM9wSeor/philosophy-in-the-darkest-timeline-basics-of-the-evolution)-[cult](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist) [blog](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception). An important thing to appreciate is that the philosophical point I was trying to make has _absolutely nothing to do with gender_. In 2008, Yudkowsky had explained that _for all_ nouns N, you can't define _N_ any way you want, because _useful_ definitions need to "carve reality at the joints." It [_follows logically_](https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization) that, in particular, if _N_ := "woman", you can't define the word _woman_ any way you want. Maybe trans women _are_ women! But if so—that is, if you want people to agree to that word usage—you need to be able to _argue_ for why that usage makes sense on the empirical merits; you can't just _define_ it to be true. And this is a _general_ principle of how language works, not something I made up on the spot in order to attack trans people. In 2008, this very general philosophy of language lesson was _not politically controversial_. If, in 2018–present, it _is_ politically controversial (specifically because of the fear that someone will try to apply it with _N_ := "woman"), that's a _problem_ for our whole systematically-correct-reasoning project! What counts as good philosophy—or even good philosophy _pedagogy_—shouldn't depend on the current year! There is a _sense in which_ one might say that you "can" define a word any way you want. That is: words don't have intrinsic ontologically-basic meanings. We can imagine an alternative world where people spoke a language that was _like_ the English of our world, except that they use the word "tree" to refer to members of the empirical entity-cluster that we call "dogs" and _vice versa_, and it's hard to think of a meaningful sense in which one convention is "right" and the other is "wrong". 
But there's also an important _sense in which_ we want to say that you "can't" define a word any way you want. That is: some ways of using words work better for transmitting information from one place to another. It would be harder to explain your observations from a trip to the local park in a language that used the word "tree" to refer to members of _either_ of the empirical entity-clusters that the English of our world calls "dogs" and "trees", because grouping together things that aren't relevantly similar like that makes it harder to describe differences between the wagging-animal-trees and the leafy-plant-trees. If you want to teach people about the philosophy of language, you should want to convey _both_ of these lessons, against naïve essentialism, _and_ against naïve anti-essentialism. If the people who are widely respected and trusted [(almost worshipped)](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) as the leaders of the systematically-correct-reasoning community, [_selectively_](https://www.lesswrong.com/posts/AdYdLP2sRqPMoe8fb/knowing-about-biases-can-hurt-people) teach _only_ the words-don't-have-intrinsic-ontologically-basic-meanings part when the topic at hand happens to be trans issues (because talking about the carve-reality-at-the-joints part would be [politically suicidal](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting)), then people who trust the leaders are likely to get the wrong idea about how the philosophy of language works—even if [the selective argumentation isn't _conscious_ or deliberative](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie) and [even if every individual sentence they say permits a true interpretation](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). (As it is written of the fourth virtue of evenness, ["If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider."](https://www.yudkowsky.net/rational/virtues)) _Was_ it a "political" act for me to write about the cognitive function of categorization on the robot-cult blog with non-gender examples, when gender was secretly ("secretly") my _motivating_ example? In some sense, yes, but the thing you have to realize is— _Everyone else shot first_. The timestamps back me up here: my ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (February 2018) was a _response to_ Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (November 2014). My philosophy-of-language work on the robot-cult blog (April 2019–January 2021) was (stealthily) _in response to_ Yudkowsky's November 2018 Twitter thread. When I started trying to talk about autogynephilia with all my robot cult friends in 2016, I _did not expect_ to get dragged into a multi-year philosophy-of-language crusade! That was just _one branch_ of the argument-tree that, once begun, I thought should be easy to _definitively settle in public_ (within our robot cult, whatever the _general_ public thinks). I guess by now the branch is as close to settled as it's going to get? Alexander ended up [adding an edit note to the end of "... 
Not Man for the Categories" in December 2019](https://archive.is/1a4zV#selection-805.0-817.1), and Yudkowsky would go on to clarify his position on the philosophy of language in Facebook posts of [September 2020](https://www.facebook.com/yudkowsky/posts/10158853851009228) and [February 2021](https://www.facebook.com/yudkowsky/posts/10159421750419228). So, that's nice.

[TODO: although I think even with the note, in practice, people are going to keep citing "... Not Man for the Categories" in a way that doesn't understand how the note undermines the main point]

But I will confess to being quite disappointed that the public argument-tree evaluation didn't get much further, much faster? The thing you have to understand about this whole debate is—

_I need the correct answer in order to decide whether or not to cut my dick off_.

As I've said, I _currently_ believe that cutting my dick off would be a _bad_ idea. But that's a cost–benefit judgement call based on many _contingent, empirical_ beliefs about the world. I'm obviously in the general _reference class_ of males who are getting their dicks cut off these days, and a lot of them seem to be pretty happy about it! I would be much more likely to go through with transitioning if I believed different things about the world—if I thought my beautiful pure sacred self-identity thing were a brain-intersex condition, or if I still believed in my teenage psychological-sex-differences denialism (such that there would be _axiomatically_ no worries about fitting with "other" women after transitioning), or if I were more optimistic about the degree to which HRT and surgeries approximate an actual sex change.

In that November 2018 Twitter thread, [Yudkowsky wrote](https://archive.is/y5V9i):

> _Even if_ somebody went around saying, "I demand you call me 'she' and furthermore I claim to have two X chromosomes!", which none of my trans colleagues have ever said to me by the way, it still isn't a question-of-empirical-fact whether she should be called "she". It's an act.

This seems to suggest that gender pronouns in the English language as currently spoken don't have effective truth conditions. I think this is false _as a matter of cognitive science_. If someone told you, "Hey, you should come meet my friend at the mall, she is really cool and I think you'll like her," and then the friend turned out to look like me (as I am now), _you would be surprised_. (Even if people in Berkeley would socially punish you for _admitting_ that you were surprised.) The "she ... her" pronouns would prompt your brain to _predict_ that the friend would appear to be female, and that prediction would be _falsified_ by someone who looked like me (as I am now).

Pretending that the social-norms dispute is about chromosomes was a _bullshit_ [weakmanning](https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/) move on the part of Yudkowsky, [who had once written that](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence) "[t]o argue against an idea honestly, you should argue against the best arguments of the strongest advocates[;] [a]rguing against weaker advocates proves _nothing_, because even the strongest idea will attract weak advocates." Thanks to the skills I learned from Yudkowsky's _earlier_ writing, I wasn't dumb enough to fall for it, but we can imagine someone otherwise similar to me who was, who might have thereby been misled into making worse life decisions.
[TODO: ↑ soften tone, be more precise, including about "dumb enough to fall for it"] If this "rationality" stuff is useful for _anything at all_, you would _expect_ it to be useful for _practical life decisions_ like _whether or not I should cut my dick off_. In order to get the _right answer_ to that policy question (whatever the right answer turns out to be), you need to _at minimum_ be able to get the _right answer_ on related fact-questions like "Is late-onset gender dysphoria in males an intersex condition?" (answer: no) and related philosophy-questions like "Can we arbitrarily redefine words such as 'woman' without adverse effects on our cognition?" (answer: no). At the cost of _wasting three years of my life_, we _did_ manage to get the philosophy question mostly right! Again, that's nice. But compared to the [Sequences-era dreams of changing the world](https://www.lesswrong.com/posts/YdcF6WbBmJhaaDqoD/the-craft-and-the-community), it's too little, too slow, too late. If our public discourse is going to be this aggressively optimized for _tricking me into cutting my dick off_ (independently of the empirical cost–benefit trade-off determining whether or not I should cut my dick off), that kills the whole project for me. I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here? Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?" But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_. If you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. Obviously, the safety of the world does not _directly_ depend on being able to think clearly about trans issues. Similarly, the safety of a coal mine for humans does not _directly_ depend on [whether it's safe for canaries](https://en.wiktionary.org/wiki/canary_in_a_coal_mine): the dead canaries are just _evidence about_ properties of the mine relevant to human health. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where "the community" _qua_ community doesn't exist at all. In ["The Ideology Is Not the Movement"](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/) (April 2016), Alexander describes how the content of subcultures typically departs from the ideological "rallying flag" that they formed around. 
[Sunni and Shia Islam](https://en.wikipedia.org/wiki/Shia%E2%80%93Sunni_relations) originally, ostensibly diverged on the question of who should rightfully succeed Muhammad as caliph, but modern-day Sunni and Shia who hate each other's guts aren't actually re-litigating a succession dispute from the 7th century C.E. Rather, pre-existing divergent social-group tendencies crystalized into distinct tribes by latching on to the succession dispute as a [simple membership test](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests). Alexander jokingly identifies the identifying feature of our robot cult as being the belief that "Eliezer Yudkowsky is the rightful caliph": the Sequences were a rallying flag that brought together a lot of like-minded people to form a subculture with its own ethos and norms—among which Alexander includes "don't misgender trans people"—but the subculture emerged as its own entity that isn't necessarily _about_ anything outside itself. No one seemed to notice at the time, but this characterization of our movement [is actually a _declaration of failure_](https://sinceriously.fyi/cached-answers/#comment-794). There's a word, "rationalist", that I've been trying to avoid in this post, because it's the subject of so much strategic equivocation, where the motte is "anyone who studies the ideal of systematically correct reasoning, general methods of thought that result in true beliefs and successful plans", and the bailey is "members of our social scene centered around Eliezer Yudkowsky and Scott Alexander". (Since I don't think we deserve the "rationalist" brand name, I had to choose something else to refer to [the social scene](https://srconstantin.github.io/2017/08/08/the-craft-is-not-the-community.html). Hence, "robot cult.") What I would have _hoped_ for from a systematically correct reasoning community worthy of the brand name is one goddamned place in the whole goddamned world where _good arguments_ would propagate through the population no matter where they arose, "guided by the beauty of our weapons" ([following Scott Alexander](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/) [following Leonard Cohen](https://genius.com/1576578)). Instead, I think what actually happens is that people like Yudkowsky and Alexander rise to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else sort-of absorbs most of their worldview (plus noise and conformity with the local environment)—with the result that if Yudkowsky and Alexander _aren't interested in getting the right answer_ (in public)—because getting the right answer in public would be politically suicidal—then there's no way for anyone who didn't [win the talent lottery](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) to fix the public understanding by making better arguments. It makes sense for public figures to not want to commit political suicide! Even so, it's a _problem_ if public figures whose brand is premised on the ideal of _systematically correct reasoning_, end up drawing attention and resources into a subculture that's optimized for tricking men into cutting their dick off on false pretenses. 
(Although note that Alexander has [specifically disclaimed aspirations or pretensions to being a "rationalist" authority figure](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/); that fate befell him without his consent because he's just too good and prolific of a writer compared to everyone else.)

I'm not optimistic about the problem being fixable, either. Our robot cult _already_ gets a lot of shit from progressive-minded people for being "right-wing"—not because we are in any _useful_, non-gerrymandered sense, but because [attempts to achieve the map that reflects the territory are going to run afoul of ideological taboos for almost any ideology](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting).

Because of the particular historical moment in which we live, we end up facing pressure from progressives, because—whatever our _object-level_ beliefs about (say) [sex, race, and class differences](/2020/Apr/book-review-human-diversity/)—and however much many of us would prefer not to talk about them—on the _meta_ level, our creed requires us to admit _it's an empirical question_, not a moral one—and that [empirical questions have no privileged reason to admit convenient answers](https://www.lesswrong.com/posts/sYgv4eYH82JEsTD34/beyond-the-reach-of-god).

I view this conflict as entirely incidental, something that [would happen in some form in any place and time](https://www.lesswrong.com/posts/cKrgy7hLdszkse2pq/archimedes-s-chronophone), rather than having to do with American politics or "the left" in particular. In a Christian theocracy, our analogues would get in trouble for beliefs about evolution; in the old Soviet Union, our analogues would get in trouble for [thinking about market economics](https://slatestarcodex.com/2014/09/24/book-review-red-plenty/) (as a [positive technical discipline](https://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics#Proof_of_the_first_fundamental_theorem) adjacent to game theory, not yoked to a particular normative agenda).

Incidental or not, the conflict is real, and everyone smart knows it—even if it's not easy to _prove_ that everyone smart knows it, because everyone smart is very careful what they say in public. (I am not smart.) Scott Aaronson wrote of [the Kolmogorov Option](https://www.scottaaronson.com/blog/?p=3376) (which Alexander aptly renamed [Kolmogorov complicity](https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/)): serve the cause of Truth by cultivating a bubble that focuses on truths that won't get you in trouble with the local political authorities. The strategy is named after the Soviet mathematician Andrey Kolmogorov, who _knew better than to pick fights he couldn't win_.

Because of the conflict, and because all the prominent high-status people are running a Kolmogorov Option strategy, and because we happen to have a _wildly_ disproportionate number of _people like me_ around, I think being "pro-trans" ended up being part of the community's "shield" against external political pressure, of the sort that perked up after [the February 2021 _New York Times_ hit piece about Alexander's blog](https://archive.is/0Ghdl). (The _magnitude_ of heat brought on by the recent _Times_ piece and its aftermath was new, but the underlying dynamics had been present for years.)
Jacob Falkovich notes, ["The two demographics most over-represented in the SlateStarCodex readership according to the surveys are transgender people and Ph.D. holders."](https://twitter.com/yashkaf/status/1275524303430262790) [Aaronson notes (in commentary on the _Times_ article)](https://www.scottaaronson.com/blog/?p=5310) "the rationalist community's legendary openness to alternative gender identities and sexualities" as something that would have "complicated the picture" of our portrayal as anti-feminist.

Even the _haters_ grudgingly give Alexander credit for "... Not Man for the Categories": ["I strongly disagree that one good article about accepting transness means you get to walk away from writing that is somewhat white supremacist and quite fascist without at least awknowledging you were wrong."](https://archive.is/SlJo1)

Given these political realities, you'd think that I _should_ be sympathetic to the Kolmogorov Option argument, which makes a lot of sense. _Of course_ all the high-status people with a public-facing mission (like building a movement to prevent the coming robot apocalypse) are going to be motivatedly dumb about trans stuff in public: look at all the damage [the _other_ Harry Potter author did to her legacy](https://en.wikipedia.org/wiki/Politics_of_J._K._Rowling#Transgender_people).

And, historically, it would have been harder for the robot cult to recruit _me_ (or those like me) back in the 'aughts, if they had been less politically correct. Recall that I was already somewhat turned off, then, by what I thought of as _sexism_; I stayed because the philosophy-of-science blogging was _way too good_. But what that means on the margin is that someone otherwise like me except more orthodox or less philosophical _would_ have bounced. If [Cthulhu has swum left](https://www.unqualified-reservations.org/2009/01/gentle-introduction-to-unqualified/) over the intervening thirteen years, then maintaining the same map-revealing/not-alienating-orthodox-recruits tradeoff _relative_ to the general population necessitates relinquishing parts of the shared map that have fallen out of general favor.

Ultimately, if the people with influence over the trajectory of the systematically correct reasoning "community" aren't interested in getting the right answers in public, then I think we need to give up on the idea of there _being_ a "community", which, you know, might have been a dumb idea to begin with. No one owns _reasoning itself_. Yudkowsky had written in March 2009 that rationality is the ["common interest of many causes"](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes): that proponents of causes-that-benefit-from-better-reasoning like atheism or marijuana legalization or existential-risk-reduction might perceive a shared interest in cooperating to [raise the sanity waterline](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline). But to do that, they need to not try to capture all the value they create: some of the resources you invest in teaching rationality are going to flow to someone else's cause, and you need to be okay with that.

But Alexander's ["Kolmogorov Complicity"](https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) (October 2017) seems to suggest a starkly different moral, that "rationalist"-favored causes might not _want_ to associate with others that have worse optics.
Atheists and marijuana legalization proponents and existential-risk-reducers probably don't want any of the value they create to flow to neoreactionaries and race realists and autogynephilia truthers, if video of the flow will be used to drag their own names through the mud. [_My_ Something to Protect](/2019/Jul/the-source-of-our-power/) requires me to take the [Leeroy Jenkins](https://en.wikipedia.org/wiki/Leeroy_Jenkins) Option. (As typified by Justin Murphy: ["Say whatever you believe to be true, in uncalculating fashion, in whatever language you really think and speak with, to everyone who will listen."](https://otherlife.co/respectability-is-not-worth-it-reply-to-slatestarcodex/)) I'm eager to cooperate with people facing different constraints who are stuck with a Kolmogorov Option strategy as long as they don't _fuck with me_. But I construe encouragement of the conflation of "rationality" as a "community" and the _subject matter_ of systematically correct reasoning, as a form of fucking with me: it's a _problem_ if all our beautiful propaganda about the methods of seeking Truth, doubles as propaganda for joining a robot cult whose culture is heavily optimized for tricking men like me into cutting their dicks off. Someone asked me: "If we randomized half the people at [OpenAI](https://openai.com/) to use trans pronouns one way, and the other half to use it the other way, do you think they would end up with significantly different productivity?" But the thing I'm objecting to is a lot more fundamental than the specific choice of pronoun convention, which obviously isn't going to be uniquely determined. Turkish doesn't have gender pronouns, and that's fine. Naval ships traditionally take feminine pronouns in English, and it doesn't confuse anyone into thinking boats have a womb. [Many other languages are much more gendered than English](https://en.wikipedia.org/wiki/Grammatical_gender#Distribution_of_gender_in_the_world's_languages) (where pretty much only third-person singular pronouns are at issue). The conventions used in one's native language probably _do_ [color one's thinking to some extent](/2020/Dec/crossing-the-line/)—but when it comes to that, I have no reason to expect the overall design of English grammar and vocabulary "got it right" where Spanish or Arabic "got it wrong." What matters isn't the specific object-level choice of pronoun or bathroom conventions; what matters is having a culture where people _viscerally care_ about minimizing the expected squared error of our probabilistic predictions, even at the expense of people's feelings—[_especially_ at the expense of people's feelings](http://zackmdavis.net/blog/2016/09/bayesomasochism/). I think looking at [our standard punching bag of theism](https://www.lesswrong.com/posts/dLL6yzZ3WKn8KaSC3/the-uniquely-awful-example-of-theism) is a very fair comparison. Religious people aren't _stupid_. You can prove theorems about the properties of [Q-learning](https://en.wikipedia.org/wiki/Q-learning) or [Kalman filters](https://en.wikipedia.org/wiki/Kalman_filter) at a world-class level without encountering anything that forces you to question whether Jesus Christ died for our sins. 
But [beyond technical mastery of one's narrow specialty](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory), there's going to be some competence threshold in ["seeing the correspondence of mathematical structures to What Happens in the Real World"](https://www.lesswrong.com/posts/sizjfDgCgAsuLJQmm/reply-to-holden-on-tool-ai) that _forces_ correct conclusions. I actually _don't_ think you can be a believing Christian and invent [the concern about consequentialists embedded in the Solomonoff prior](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/). But the _same_ general parsimony-skill that rejects belief in an epiphenomenal ["God of the gaps"](https://en.wikipedia.org/wiki/God_of_the_gaps) that is verbally asserted to exist but will never face the threat of being empirically falsified, _also_ rejects belief in an epiphenomenal "gender of the gaps" that is verbally asserted to exist but will never face the threat of being empirically falsified.

In a world where sexual dimorphism didn't exist, where everyone was a hermaphrodite, "gender" wouldn't exist, either.

In a world where we _actually had_ magical perfect sex-change technology of the kind described in "Changing Emotions", people who wanted to change sex would do so, and everyone else would use the corresponding language (pronouns and more), _not_ as a courtesy, _not_ to maximize social welfare, but because it _straightforwardly described reality_.

In a world where we don't _have_ magical perfect sex-change technology, but we _do_ have hormone replacement therapy and various surgical methods, you actually end up with _four_ clusters: females (F), males (M), masculinized females a.k.a. trans men (FtM), and feminized males a.k.a. trans women (MtF). I _don't_ have a "clean" philosophical answer as to in what contexts one should prefer to use a {F, MtF}/{M, FtM} category system (treating trans people as their social gender) rather than a {F, FtM}/{M, MtF} system (considering trans people as their [developmental sex](/2019/Sep/terminology-proposal-developmental-sex/)), because that's a complicated semi-empirical, semi-value question about which aspects of reality are most relevant to what you're trying to think about in that context. But I do need _the language with which to write this paragraph_, which is about _modeling reality_, and not about marginalization or respect.

Something I have trouble reliably communicating about what I'm trying to do with this blog is that "I don't do policy." Almost everything I write is _at least_ one meta level up from any actual decisions. I'm _not_ trying to tell other people in detail how they should live their lives, because obviously I'm not smart enough to do that and get the right answer. I'm _not_ telling anyone to detransition. I'm _not_ trying to set government policy about locker rooms or medical treatments. I'm trying to _get the theory right_. My main victory condition is getting the two-type taxonomy (or whatever more precise theory supplants it) into the _standard_ sex ed textbooks. If you understand the nature of the underlying psychological condition _first_, then people can make a sensible decision about what to _do_ about it. Accurate beliefs should inform policy, rather than policy determining what beliefs are politically acceptable. It worked once, right?
(Picture me playing Hermione Granger in a post-Singularity [holonovel](https://memory-alpha.fandom.com/wiki/Holo-novel_program) adaptation of _Harry Potter and the Methods of Rationality_ (Emma Watson having charged me [the standard licensing fee](/2019/Dec/comp/) to use a copy of her body for the occasion): "[We can do anything if we](https://www.hpmor.com/chapter/30) exert arbitrarily large amounts of [interpretive labor](https://acesounderglass.com/2015/06/09/interpretive-labor/)!")

> An extreme case in point of "handwringing about the Overton Window in fact constituted the Overton Window's implementation"

OK, now apply that to your Kolmogorov cowardice https://twitter.com/ESYudkowsky/status/1373004525481598978

The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2021 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialism that counts unpopularity or hurt feelings as costs), also [can't get heliocentrism right in 1633](https://en.wikipedia.org/wiki/Galileo_affair) [_for the same reason_](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to)—and I really doubt it can get AI alignment theory right in 2041.

Or at least—even if there are things we can't talk about in public for consequentialist reasons and there's nothing to be done about it, you would hope that the censorship wouldn't distort our beliefs about the things we _can_ talk about—like, say, the role of Bayesian reasoning in the philosophy of language. Yudkowsky had written about the [dark side epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) of [contagious lies](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies): trying to protect a false belief doesn't just mean being wrong about that one thing, it also gives you, on the object level, an incentive to be wrong about anything that would _imply_ the falsity of the protected belief—and, on the meta level, an incentive to be wrong _about epistemology itself_, about how "implying" and "falsity" work.

https://www.lesswrong.com/posts/ASpGaS3HGEQCbJbjS/eliezer-s-sequences-and-mainstream-academia?commentId=6GD86zE5ucqigErXX

> The actual real-world consequences of a post like this when people actually read it are what bothers me, and it does feel frustrating because those consequences seem very predictable (!!)

http://www.hpmor.com/chapter/47 https://www.hpmor.com/chapter/97

> one technique was to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited.

> At least, I have a MASSIVE home territory advantage because I can appeal to Eliezer's writings from 10 years ago, and ppl can't say "Eliezer who? He's probably a bad man"

> Makes sense... just don't be shocked if the next frontier is grudging concessions that get compartmentalized

> Stopping reading your Tweets is the correct move for them IF you construe them as only optimizing for their personal hedonics https://twitter.com/zackmdavis/status/1224433237679722500

> I aspire to make sure my departures from perfection aren't noticeable to others, so this tweet is very validating.
https://twitter.com/ESYudkowsky/status/1384671335146692608

"assuming that it was a 'he'"—people treating pronouns as synonymous with sex https://www.youtube.com/watch?v=mxZBrbVqZnU

I realize it wasn't personal—no one _consciously_ thinking "I'm going to trick autogynephilic men into cutting their dicks off", but the most recent pronoun update https://www.facebook.com/yudkowsky/posts/10159421750419228

> I would not know how to write a different viewpoint as a sympathetic character. [...]
>
> I do not know what it feels like from the inside to feel like a pronoun is attached to something in your head much more firmly than "doesn't look like an Oliver" is attached to something in your head.

like the time I snuck a copy of _Men Trapped in Men's Bodies: Narratives of Autogynephilic Transsexualism_ into the [MIRI](https://intelligence.org/) office library. (It seemed like something Harry Potter-Evans-Verres would do—and ominously, I noticed, not like something Hermione Granger would do.)

* the moment in October 2016 when I switched sides http://zackmdavis.net/blog/2016/10/late-onset/ http://zackmdavis.net/blog/2017/03/brand-rust/

https://www.lesswrong.com/posts/jNAAZ9XNyt82CXosr/mirrors-and-paintings

> The absolute inadequacy of every single institution in the civilization of magical Britain is what happened! You cannot comprehend it, boy! I cannot comprehend it! It has to be seen and even then it cannot be believed! http://www.hpmor.com/chapter/108

EGS??

(If the world were smaller, you'd never give different people the same name; if our memories were larger, we'd give everyone a UUID.)

* papal infallibility / Eliezer Yudkowsky facts https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts?commentId=Aq9eWJmK6Liivn8ND Never go in against Eliezer Yudkowsky when anything is on the line. https://en.wikipedia.org/wiki/Chuck_Norris_facts

how they would actually think about the problem in dath ilan https://www.reddit.com/r/TheMotte/comments/myr3n7/culture_war_roundup_for_the_week_of_april_26_2021/gw0nhqv/?context=3

> At some point you realize that your free bazaar of ideas has produced a core (or multiple cores). It is a chamber: semi-permeable, still receptive to external ideas and open to critique, but increasingly more connected on the inside.

https://arbital.greaterwrong.com/p/domain_distance?l=7vk

I'm writing to you because I'm afraid that marketing is a more powerful force than argument. Rather than good arguments propagating through the population of so-called "rationalists" no matter where they arise, what actually happens is that people like Eliezer and you rise to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else sort-of absorbs most of their worldview (plus noise and [conformity with the local environment](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/)). So for people who _didn't_ [win the talent lottery](http://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) but think they see a flaw in the _Zeitgeist_, the winning move is "persuade Scott Alexander".

https://web.archive.org/web/20070615130139/http://singinst.org/upload/CFAI.html#foot-16

> 16: I flip a coin to determine whether a given human is male or female.
https://www.facebook.com/yudkowsky/posts/10159611207744228?comment_id=10159611208509228&reply_comment_id=10159613820954228

> In the circles I run in, being poly isn't very political, just a sexual orientation like any other—it's normalized the way that LGBT is normalized in saner circles, not political the way that LGBT is political in crazier circles.

https://archive.is/7Wolo

> the massive correlation between exposure to Yudkowsky's writings and being a trans woman (can't bother to do the calculations but the connection is absurdly strong)

Namespace's point about the two EYs [stonewalling](https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters)

The level above "Many-worlds is obviously correct, stop being stupid" is "Racial IQ differences are obviously real; stop being stupid"

Anyway, four years later, it turns out that this whole "rationality" subculture is completely fake. The thing that convinced me of this was not _even_ the late-onset-gender-dysphoria-in-males-is-not-an-intersex-condition thesis that I was originally trying to talk about. Humans are _really complicated_: no matter how "obvious" something in psychology or social science seems to me, I can't write someone off entirely simply for disagreeing, because the whole domain is so complex that I always have to acknowledge that, ultimately, I could just be wrong. But in the _process_ of trying to _talk about_ this late-onset-gender-dysphoria-in-males-is-not-an-intersex-condition thesis, I noticed that my conversations kept getting _derailed_ on some variation of "The word _woman_ doesn't necessarily mean that." _That_ part of the debate, I knew I could win.

what the math actually means in the real world from "Reply to Holden"

I guess I feel pretty naïve now, but—I _actually believed our own propaganda_. I _actually thought_ we were doing something new and special of historical and possibly even _cosmological_ significance.

I got a pingback to "Optimized Propaganda" from an "EDIT 5/21/2021" on https://www.lesswrong.com/posts/qKvn7rxP2mzJbKfcA/persuasion-tools-ai-takeover-without-agi-or-agency after Scott Alexander linked it—evidence for Scott having Power to shape people's attention

https://slatestarcodex.com/2020/02/10/autogenderphilia-is-common-and-not-especially-related-to-transgender/

"Rationalism starts with the belief that arguments aren't soldiers, and ends with the belief that soldiers are arguments."

The Eliezer Yudkowsky I remember wrote about [how facts are tightly-woven together in the Great Web of Causality](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies), such that [people who are trying to believe something false have an incentive to invent and spread fake epistemology lessons](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology), and about the [high competence threshold that _forces_ correct conclusions](http://sl4.org/archive/0602/13903.html).

A culture where there are huge catastrophic consequences for [questioning religion](https://www.lesswrong.com/posts/u6JzcFtPGiznFgDxP/excluding-the-supernatural), is a culture where it's harder to train alignment researchers that genuinely understand Occam's razor on a _deep_ level, when [the intelligent social web](https://www.lesswrong.com/posts/AqbWna2S85pFTsHH4/the-intelligent-social-web) around them will do anything to prevent them from applying the parsimony skill to the God hypothesis.
A culture where there are huge catastrophic consequences for questioning gender identity, is a culture where it's harder to train alignment researchers that genuinely understand the hidden-Bayesian-structure-of-language-and-cognition on a _deep_ level, when the social web around them will do anything to prevent them from [invalidating someone's identity](http://unremediatedgender.space/2016/Sep/psychology-is-about-invalidating-peoples-identities/).

> First, it is not enough to learn something, and tell the world about it, to get the world to believe it. Not even if you can offer clear and solid evidence, and explain it so well that a child could understand. You need to instead convince each person in your audience that the other people who they see as their key audiences will soon be willing to endorse what you have learned.

https://www.overcomingbias.com/2020/12/social-proof-but-of-what.html

twenty-one month Category War is as long as it took to write the Sequences https://www.lesswrong.com/posts/9jF4zbZqz6DydJ5En/the-end-of-sequences

I'm worried about the failure mode where bright young minds [lured in](http://benjaminrosshoffman.com/construction-beacons/) by the beautiful propaganda about _systematically correct reasoning_, are instead recruited into what is, effectively, the Eliezer-Yudkowsky-and-Scott-Alexander fan club.

> I'm not trying to get Eliezer or "the community" to take a public stance on gender politics; I'm trying to get us to take a stance in favor of the kind of epistemology that we were doing in 2008. It turns out that epistemology has implications for gender politics which are unsafe, but that's more inferential steps, and ... I guess I just don't expect the sort of people who would punish good epistemology to follow the inferential steps?

Maybe I'm living in the should-universe a bit here, but I don't think it "should" be hard for Eliezer to publicly say, "Yep, categories aren't arbitrary because you need them to carve reality at the joints in order to make probabilistic inferences, just like I said in 2008; this is obvious."

Scott got a lot of pushback just for including the blog that I showed him in a links post (Times have changed! BBL is locally quasi-mainstream after Ozy engaged)

It's weird that he thinks telling the truth is politically impossible, because the specific truths I'm focused on are things he _already said_, that anyone could just look up. I guess the point is that the egregore doesn't have the logical or reading comprehension for that?—or rather (a reader points out) the egregore has no reason to care about the past; if you get tagged as an enemy, your past statements will get dug up as evidence of foul present intent, but if you're doing a good enough job of playing the part today, no one cares what you said in 2009

Somni gets it! https://somnilogical.tumblr.com/post/189782657699/legally-blind

E.Y. thinks postrats are emitting "epistemic smog", but the fact that Eigenrobot can retweet my Murray review makes me respect him more than E.Y. https://twitter.com/eigenrobot/status/1397383979720839175

The robot cult is "only" "trying" to trick me into cutting my dick off in the sense that a paperclip maximizer is trying to kill us: an instrumental rather than a terminal value.

> the problem with taqiyya is that your sons will believe you https://twitter.com/extradeadjcb/status/1397618177991921667

> I've informed a number of male college students that they have large, clearly detectable body odors.
> In every single case so far, they say nobody has ever told them that before. https://www.greaterwrong.com/posts/kLR5H4pbaBjzZxLv6/polyhacking/comment/rYKwptdgLgD2dBnHY

It would have been better if someone without a dog in the object-level fight could have loudly but disinterestedly said, "What? I don't have a dog in the object-level fight, but we had a whole Sequence about this", but people mostly don't talk if they don't have a dog. But if someone without a dog spoke, then they'd get pattern-matched as a partisan; it _had_ to be me

As far as I can tell, Professor, I'm just doing what _you_ taught me—carve reality at the joints, speak the truth, even if your voice trembles, make an extraordinary effort when you've got Something to Protect.

"Beliefs about the self aren't special" is part of the whole AI reflectivity thing, too!!

> decision-theoretically, it's also not their fault. They were all following a strategy that was perfectly reasonable until they ran into someone with an anomalously high insistence that words should mean things

Sure: everyone in a conflict thinks they're acting defensively against aggressors infringing on their rights, because in the cases where everyone agrees what the "actual" property rights are, there's no conflict.

typographic attack: https://openai.com/blog/multimodal-neurons/ https://distill.pub/2021/multimodal-neurons/

> These neurons detect gender^10
>
> Footnote: By this, we mean both that it responds to people presenting as this gender, as well as that it responds to concepts associated with that gender.

https://www.jefftk.com/p/an-update-on-gendered-pronouns

> Still think this was a perfectly fine tweet btw. Some people afaict were doing the literal ontologically confused thing; seemed like a simple thing to make progress on. Some people wanted to read it as a coded statement despite all my attempts to narrow it, but what can you do. https://twitter.com/ESYudkowsky/status/1356535300986523648

If you were actually HONESTLY trying to narrow it, you would have said, "By the way, this is just about pronouns, I'm not taking a position on whether trans women are women"

https://www.gingersoftware.com/content/grammar-rules/adjectives/order-of-adjectives/

https://www.unqualified-reservations.org/2008/01/how-to-actually-defeat-us-government/

> propagate a credible alternate reality that outcompetes the official information network.

https://www.unqualified-reservations.org/2007/12/explanation-of-democratic-centrism/

the second generation doesn't "get the joke"; young people don't understand physical strength differences anymore

voiceless palato-alveolar fricative, or for words with two letters rather than three. https://en.wikipedia.org/wiki/Voiceless_postalveolar_fricative#Voiceless_palato-alveolar_fricative

https://medium.com/@barrakerr/pronouns-are-rohypnol-dbcd1cb9c2d9

I just thought of an interesting argument that almost no one else would (because it requires both prog-sight and NRx-sight)

You know the "signaling hazard" (pace Jim) argument against public tolerance of male homosexuality (tolerating gays interferes with normal men expressing affection for each other without being seen as gay, which is bad for unit cohesion, &c.). Until recently, I hadn't thought much of it (because of my prog upbringing)—why do you care if someone isn't sure you're straight?
but recent events have made me more sympathetic to its empirical reality—if human nature is such that 140+ IQ ppl actually can't publicly clear up a trivial philosophy-of-language dispute because of the fear of appearing transphobic—well, that's really dumb, but it's the SAME KIND of dumb as "can't express male friendship because of the fear of appearing gay"

which suggests a "signaling hazard" argument in favor of political correctness (!!)—we can't tolerate racism, or else Good people would have to incur more costs to signal antiracism (same structure as "we can't tolerate gays, or else normal guys have to incur more costs to signal not-gayness")

that's the thing; I read as lefty because I am morally lefty (in contrast to Real Men Who Lift &c.); it's just that I had the "bad luck" of reading everything I could about race and IQ after the James Watson affair in 'aught-seven, and all my leftness is filtered through ten years of living with inconvenient hypotheses

[TODO: reorganize to position the question first]

[It's been occasionally argued that](https://archive.is/ChqYX) there aren't legitimate grounds to object to using trans people's preferred pronouns, because pronouns aren't facts and don't have truth conditions. Note, this is substantially _stronger_ than the mere claim that you _should_ use preferred pronouns; the claim is that no linguistic expressive power is being sacrificed by doing so. (Whereas in contrast, one might accede to the requested usage out of some combination of politeness, social coercion, and apprehension of [the Schelling point of standard usage](/2019/Oct/self-identity-is-a-schelling-point/), while privately lamenting that it feels analogous to lying.)

I think the claim that pronouns don't have truth conditions is _false as a matter of cognitive science_. Humans are _pretty good_ at visually identifying the sex of other humans by integrating cues from various secondary sex characteristics—it's the kind of computer-vision capability that would have been useful in our environment of evolutionary adaptedness. If it _didn't_ work so reliably, we wouldn't have ended up with languages like English where identifying a person's sex is baked into the grammar. And _because_ we ended up with (many) languages that have it baked into the grammar, _departing_ from that conventional usage has cognitive consequences: if someone told you, "Come meet my friend at the mall; she's really cool and you'll like her" and then the friend turned out to be obviously male, you would be _surprised_. The fact that the "she ... her" language [constrained your anticipations](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences) so much would seem to immediately falsify the "no truth conditions" claim.

[From a certain first-principles perspective](https://www.facebook.com/yudkowsky/posts/10159421750419228), this is _terrible language design_. The grammatical function of pronouns is to have a brief way to refer back to entities already mentioned: it's more user-friendly to be able to say "Katherine put her book on its shelf" rather than "Katherine put Katherine's book on the book's shelf". But then why couple that grammatical function to sex-category membership? You shouldn't _need_ to take a stance on someone's reproductive capabilities to talk about them putting a book on the shelf.
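(To put a number on the "constrained your anticipations" point above, before turning to design alternatives: a minimal Bayesian sketch. The conditional probabilities are assumed for illustration rather than measured; they stand in for how often a present-day English speaker uses _she_ for a referent of each sex, and the qualitative conclusion only requires the two likelihoods to differ.)

```python
import math

# Assumed, illustrative numbers (not measurements): how often a speaker of
# present-day conventional English uses "she" for a referent of each sex.
p_female = 0.5                 # prior on the friend's sex, before any pronoun
p_she_given_female = 0.99
p_she_given_male = 0.02

# Posterior after hearing "come meet my friend ... she's really cool":
p_she = p_she_given_female * p_female + p_she_given_male * (1 - p_female)
posterior_female = p_she_given_female * p_female / p_she
print(f"P(friend appears female | 'she') ≈ {posterior_female:.3f}")  # ≈ 0.980

# Surprisal (in bits) of then meeting someone who appears male:
print(f"surprisal ≈ {-math.log2(1 - posterior_female):.1f} bits")    # ≈ 5.7 bits
```

Under the "no truth conditions" view, hearing "she" shouldn't move this distribution at all; the several bits of surprise at the mall are the sense in which the pronoun was making a falsifiable prediction.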
If you wanted more pronoun-classes to reduce the probability of collisions (where universal [Spivak _ey_](https://en.wikipedia.org/wiki/Spivak_pronoun) or singular _they_ would result in more frequent need to repeat names where a pronoun would be ambiguous), you could devise some other system that doesn't bake sex into the language, like using initials to form pronouns (Katherine put ker book on its shelf?), or an oral or written analogue of [spatial referencing in American Sign Language](https://www.handspeak.com/learn/index.php?id=27) (where a signer associates a name or description with a direction in space, and points in that direction for subsequent references).

(One might speculate that "more classes to reduce collisions" _is_ part of the historical explanation for grammatical gender, in conjunction with the fact that sex is binary and easy to observe. No other salient objective feature quite does the same job: age is continuous rather than categorical; race is also largely continuous [(clinal)](https://en.wikipedia.org/wiki/Cline_(biology)) and historically didn't typically vary within a tribal/community context.)

Taking it as a given that English speakers are stuck with gendered third-person singular pronouns, there's still room to debate exactly what _she_ and _he_ map to in cases where a person's "gender" is ambiguous or disputed. (Which comes up more often these days than in the environment where the language evolved.)

[TODO: lit search or ask linguistics.stackexchange for literature on what gender/plural/case/&c. distinctions are for? Is it just the collision/ambiguity reduction, or is there something else? Oh, or Anna T./Elena might know]

https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters

> anything that people are motivated to argue about is not arbitrary. It is being controlled by invisible criteria of evaluation, it has connotations with consequences

If Scott Alexander's "The Categories Were Made For Man ..." had never been published, would we still be talking about dolphins and trees in the same way?

Nate on dolphins (June 2021)—a dogwhistle?? https://twitter.com/So8res/status/1401670792409014273

Yudkowsky retweeted Nate on dolphins— https://archive.is/Ecsca

my rationalist community has people asking a lot of questions already answered by my community's name

cite to "Not Especially Related to Transgender" https://twitter.com/fortenforge/status/1402057829142302721