+https://nostalgebraist.tumblr.com/post/686455476984119296/eliezer-yudkowsky-seems-really-depressed-these
+
+> So now my definitely-not-Kelthorkarni have weird mental inhibitions against actually listening to me, even when I clearly do know much better than they do. In retrospect I think I was guarding against entirely the wrong failure modes. The problem is not that they're too conformist, it's that they don't understand how to be defiant without diving heedlessly into the seas of entropy. It's plausible I should've just gone full Kelthorkarnen
+https://www.glowfic.com/replies/1614129#reply-1614129
+
+I was pleading to him in his capacity as rationality leader, not AGI alignment leader; I know I have no business talking about the latter
+
+(As an aside, it's actually kind of _hilarious_ how far Yudkowsky's "rationalist" movement has succeeded at winning status and mindshare in a Society whose [_de facto_ state religion](https://slatestarcodex.com/2019/07/08/gay-rites-are-civil-rites/) is [founded on eliminating "discrimination."](https://richardhanania.substack.com/p/woke-institutions-is-just-civil-rights) Did—did anyone besides me "get the joke"? I would have expected _Yudkowsky_ to get the joke, but I guess not??)
+
+[TODO: misrepresentation of the Light: Dath ilan has a concept of "the Light"—the vector in policyspace perpendicular outwards from the Pareto curve, in which everyone's interests coincide.]
+
+"You're allowed to talk to me," he said at the Independence Day party
+
+MIRI made a point of prosecuting Tyler Altman rather than participating (even if it was embarrassing to be embezzled from) because of game theory, but it sees folding to social-justice bullies as inevitable
+
+re Yudkowsky not understanding the "That's So Gender" sense, I suspect this is better modeled as a nearest-unblocked-strategy alignment problem, rather than a capabilities problem ("doesn't comprehend"). Author has a Vision of a Reality; that Reality conflicts with the ideology of the readership, who complain; Author issues a patch that addresses the surface of the complaint without acknowledging the conflict of Visions, because describing the conflict in too much detail would be construed as aggression
+
+------
+
+Psychology is a complicated empirical science: no matter how "obvious" I might think something is, I have to admit that I could be wrong—[not just as an obligatory profession of humility, but _actually_ wrong in the real world](https://www.lesswrong.com/posts/GrDqnMjhqoxiqpQPw/the-proper-use-of-humility). If my fellow rationalists weren't sold on the autogynephilia and transgender thing, I might be a bit disappointed, but it's definitely not grounds to denounce the entire community as a failure or a fraud.
+
+A striking pattern from my attempts to argue with people about the two-type taxonomy was the tendency for the conversation to get derailed on some variation of "Well, the word _woman_ doesn't necessarily mean that," often with a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), a 2014 post by Scott Alexander arguing that because categories exist in our model of the world rather than the world itself, there's nothing wrong with simply _defining_ trans people to be their preferred gender, in order to alleviate their dysphoria.
+
+[TODO:
+Email to Scott at 3:30 a.m.
+> In the last hour of the world before this is over, as the nanobots start consuming my flesh, I try to distract myself from the pain by reflecting on what single blog post is most responsible for the end of the world. And the answer is obvious: "The Categories Were Made for Man, Not Man for the Categories." That thing is a fucking Absolute Denial Macro!
+]
+
+This ... really wasn't what I was trying to talk about. _I_ thought I was trying to talk about autogynephilia as an _empirical_ theory of psychology, the truth or falsity of which obviously cannot be altered by changing the meanings of words.
+
+But this "I can define the word _woman_ any way I want" mind game? _That_ part was _absolutely_ clear-cut. That part of the argument, I knew I could win.
+
+To be clear, it's _true_ that categories exist in our model of the world, rather than the world itself—the "map", not the "territory"—and it's true that trans women might be women _with respect to_ some genuinely useful definition of the word "woman." However, the Scott Alexander piece that people kept linking to me goes further, claiming that we can redefine gender categories _in order to make trans people feel better_:
+
+> I ought to accept an unexpected man or two deep inside the conceptual boundaries of what would normally be considered female if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.
+
+But this is just wrong. Categories exist in our model of the world _in order to_ capture empirical regularities in the world itself: the map is supposed to _reflect_ the territory, and there _are_ "rules of rationality" governing what kinds of word and category usages correspond to correct probabilistic inferences. [We had a whole Sequence about this](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) back in 'aught-eight. Alexander cites [a post](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside) from that Sequence in support of the (true) point about how categories are "in the map" ... but if you actually read the Sequence, another point that Yudkowsky pounds home _over and over and over again_ is that word and category definitions are nevertheless _not_ arbitrary, because there are criteria that make some definitions _perform better_ than others as "cognitive technology"—
+
+Importantly, this is a very general point about how language itself works _that has nothing to do with gender_. No matter what you believe about politically controversial empirical questions, intellectually honest people should be able to agree that "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if [positive consequence]" is not correct philosophy, _independently of the particular values of X and Y_.
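The "performs better as cognitive technology" criterion can be made concrete with a toy simulation in the spirit of the Sequence's bleggs-and-rubes example. (This is a minimal sketch; all the names and probabilities here are made up for illustration, not anything from the Sequence itself.) A category boundary drawn around a real statistical cluster lets you predict unobserved features of a thing from its category membership; a boundary drawn arbitrarily, with no regard for the territory, predicts nothing beyond the base rate:

```python
import random

random.seed(0)

# Toy world: two kinds of object, each with a correlated observable
# feature (color) and a hidden feature (vanadium content).
def make_object():
    kind = random.choice(["blegg", "rube"])
    if kind == "blegg":
        color = "blue" if random.random() < 0.95 else "red"
        vanadium = random.random() < 0.95
    else:
        color = "red" if random.random() < 0.95 else "blue"
        vanadium = random.random() < 0.05
    return color, vanadium

objects = [make_object() for _ in range(10_000)]

def accuracy(category_of, predict):
    """Fraction of objects whose hidden feature we guess correctly,
    predicting from category membership alone."""
    return sum(predict(category_of(obj)) == obj[1] for obj in objects) / len(objects)

# A category boundary that tracks the color cluster supports inference:
# "it's in the blue category, so it probably contains vanadium."
cluster_acc = accuracy(lambda obj: obj[0], lambda cat: cat == "blue")

# An arbitrary boundary (a coin flip per object) ignores the territory,
# so it can't beat the ~50% base rate of the hidden feature.
arbitrary_acc = accuracy(lambda obj: random.random() < 0.5, lambda cat: cat)

print(f"cluster-tracking category: {cluster_acc:.0%} accurate")  # close to 90%
print(f"arbitrary category:        {arbitrary_acc:.0%} accurate")  # close to 50%
```

Nothing in the sketch depends on the example being about gender; the point is just that _where you draw the boundary_ changes how well the category supports inference about things you haven't observed yet.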