Humans are _pretty good_ at noticing each other's sex. In one study, subjects were able to discriminate between photographs of female and male faces (hair covered, males clean-shaven) with 96% accuracy.[^face] This is despite there being no _single_ facial feature that cleanly distinguishes female and male faces.

[^face]: Vicki Bruce, A. Mike Burton, _et al._, "Sex discrimination: how do we tell the difference between male and female faces?"

----

Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?"

But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_.

If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where the community doesn't exist at all—

-----

I'm avoiding naming anyone in this post even when linking to their public writings, in order to try to keep the _rhetorical emphasis_ on "true tale of personal heartbreak, coupled with sober analysis of the sociopolitical factors leading thereto" even while I'm ... expressing disappointment with people's performance. This isn't supposed to be a character/reputational attack on my friends and (former??) heroes—at least, not more than it needs to be. I just _need to tell the story_.

I'd almost rather we all pretend this narrative was written in a ["nearby" Everett branch](https://www.lesswrong.com/posts/9cgBF6BQ2TRB3Hy4E/and-the-winner-is-many-worlds) whose history diverged from ours maybe forty-five years ago—a world almost exactly like our own as far as the macro-scale institutional and ideological forces at play, but with different individual people filling out the relevant birth cohorts. _My_ specific identity doesn't matter; the specific identities of any individuals I mention while telling my story don't matter. What matters is the _structure_: I'm just a sample from the _distribution_ of what happens when an American upper-middle-class high-Openness high-Neuroticism late-1980s-birth-cohort IQ-130 78%-Ashkenazi obligate-autogynephilic boy falls in with this kind of robot cult in this kind of world.

----

_Literally_ all I'm asking for is for the branded systematically-correct-reasoning community to be able to perform _modus ponens_—

 (1) For all nouns _N_, you can't define _N_ any way you want without cognitive consequences [(for at least 37 reasons)](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).
 (2) "Woman" is a noun.
 (3) _Therefore_, you can't define "woman" any way you want without cognitive consequences.

Note, **(3) is _entirely compatible_ with trans women being women**. The point is that if you want to claim that trans women are women, you need some sort of _argument_ for why that categorization makes sense in the context you want to use the word—why that map usefully reflects some relevant aspect of the territory. If you want to _argue_ that hormone replacement therapy constitutes an effective sex change, or that trans is a brain-intersex condition and the brain is the true referent of "gender", or that [coordination constraints on _shared_ categories](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests) [support the self-identification criterion](/2019/Oct/self-identity-is-a-schelling-point/), that's fine, because those are _arguments_ that someone who initially disagreed with your categorization could _engage with on the merits_. In contrast, "I can define a word any way I want" can't be engaged with in the same way because it's a denial of the possibility of merits.

------

Here's what I think is going on. _After it's been pointed out_, all the actually-smart people can see that "Useful categories need to 'carve reality at the joints', and there's no reason for gender to magically be an exception to this _general_ law of cognition" is a better argument than "I can define the word 'woman' any way I want." No one is going to newly voice the Stupid Argument now that it's _known_ that I'm hanging around ready to pounce on it.

But the people who have _already_ voiced the Stupid Argument can't afford to reverse themselves, even if they're the sort of _unusually_ epistemically virtuous person who publicly changes their mind on other topics. It's too politically expensive to say, "Oops, that _specific argument_ for why I support transgender people was wrong for trivial technical reasons, but I still support transgender people because ..." because political costs are imposed by a mob that isn't smart enough to understand the concept of "bad argument for a conclusion that could still be true for other reasons." So I can't be allowed to win the debate in public.

The game theorist Thomas Schelling once wrote about the use of clever excuses to help one's negotiating counterparty release themselves from a prior commitment: "One must seek [...] a rationalization by which to deny oneself too great a reward from the opponent's concession, otherwise the concession will not be made."[^schelling]

[^schelling]: _Strategy of Conflict_, Ch. 2, "An Essay on Bargaining"

This is sort of what I was trying to do when soliciting—begging for—engagement-or-endorsement of "Where to Draw the Boundaries?" I thought that it ought to be politically feasible to _just_ get public consensus from Very Important People on the _general_ philosophy-of-language issue, stripped of the politicized context that inspired my interest in it, and complete with math and examples about dolphins and job titles. That _should_ be completely safe. If some would-be troublemaker says, "Hey, doesn't this contradict what you said about trans people earlier?", stonewall them. (Stonewall _them_ and not _me_!) Thus, the public record about philosophy is corrected without the VIPs having to suffer a social-justice scandal. Everyone wins, right?

But I guess that's not how politics works. Somehow, the mob-punishment mechanisms that aren't smart enough to understand the concept of "bad argument for a true conclusion" _are_ smart enough to connect the dots between my broader agenda and my (correct) abstract philosophy argument, such that VIPs don't think they can endorse my _correct_ philosophy argument without it being _construed as_ an endorsement of me and my detailed heresies, even though (a) that's _retarded_ (it's possible to agree with someone about a particular philosophy argument, while disagreeing with them about how the philosophy argument applies to a particular object-level case), and (b) I would have _hoped_ that explaining the abstract philosophy problem in the context of dolphins would provide enough plausible deniability to defend against _retarded people_ who want to make everything about politics.

The situation I'm describing is already pretty fucked, but it would be just barely tolerable if the actually-smart people were good enough at coordinating to _privately_ settle philosophy arguments. If someone says to me, "You're right, but I can't admit this in public because it would be too politically expensive for me," I can't say I'm not _disappointed_, but I can respect that they labor under constraints

[people can't trust me to stably keep secrets]

The Stupid Argument isn't just a philosophy mistake—it's a _socially load-bearing_ philosophy mistake.

And _that_ is intolerable. Once you have a single socially load-bearing philosophy mistake, you don't have a systematically-correct-reasoning community anymore. What you have is a _cult_. If you _notice_ that your alleged systematically-correct-reasoning community has a load-bearing philosophy mistake, and you _go on_ acting as if it were a systematically-correct-reasoning community, then you are committing _fraud_. (Morally speaking. I don't mean a sense of the word "fraud" that could be upheld in a court of law.)

----

[trade arrangements: if that's the world we live in, fine]

------

[happy price, symmetry-breaking]

As I've observed, being famous must _suck_.

-----

https://slatestarcodex.com/2019/02/22/rip-culture-war-thread/

The Popular Author

"People started threatening to use my bad reputation to discredit the communities I was in and the causes I cared about most."

[lightning post assumes invincibility]

The Popular Author definitely isn't trying to be a cult leader. He just

----

The "national borders" metaphor is particularly galling if—[unlike](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) [the](https://slatestarcodex.com/2013/06/30/the-lottery-of-fascinations/) Popular Author—you _actually know the math_.

If I have a "blegg" concept for blue egg-shaped objects—uh, this is [our](https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-queries) [standard](https://www.lesswrong.com/posts/yFDKvfN6D87Tf5J9f/neural-categories) [example](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside), just [roll with it](http://unremediatedgender.space/2018/Feb/blegg-mode/)—what that _means_ is that (at some appropriate level of abstraction) there's a little [Bayesian network](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models) in my head with "blueness" and "eggness" observation nodes hooked up to a central "blegg" category-membership node, such that if I see a black-and-white photograph of an egg-shaped object, I can use the observation of its shape to update my beliefs about its blegg-category-membership, and then use my beliefs about category-membership to update my beliefs about its blueness. This cognitive algorithm is useful if we live in a world where objects have the appropriate statistical structure—if the joint distribution P(blegg, blueness, eggness) approximately factorizes as P(blegg)·P(blueness|blegg)·P(eggness|blegg).
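
The shape-to-color inference can be made concrete with a toy discrete model. This is a minimal sketch, not anyone's canonical formalization, and every probability number in it is invented purely for illustration:

```python
# Toy version of the "blegg" network: a latent category node ("blegg")
# with two observation nodes ("blueness", "eggness") that are
# conditionally independent given category membership.
# All probability values are made up for illustration.

p_blegg = 0.5                               # prior P(blegg)
p_blue_given = {True: 0.95, False: 0.05}    # P(blue | blegg), P(blue | not-blegg)
p_egg_given = {True: 0.90, False: 0.10}     # P(egg-shaped | blegg), P(egg-shaped | not-blegg)

def posterior_blegg(egg_shaped):
    """P(blegg | observed eggness), by Bayes' theorem."""
    def likelihood(is_blegg):
        p = p_egg_given[is_blegg]
        return p if egg_shaped else 1 - p
    joint_blegg = p_blegg * likelihood(True)
    joint_not = (1 - p_blegg) * likelihood(False)
    return joint_blegg / (joint_blegg + joint_not)

def predicted_blueness(egg_shaped):
    """P(blue | observed eggness): the shape seen in a black-and-white
    photograph updates category membership, which updates blueness."""
    p = posterior_blegg(egg_shaped)
    return p * p_blue_given[True] + (1 - p) * p_blue_given[False]

# Observing the egg shape raises P(blue) from the baseline 0.5 to about
# 0.86; observing a non-egg shape lowers it to about 0.14.
```

The update from shape to color only works because the factorization P(blegg)·P(blueness|blegg)·P(eggness|blegg) approximately holds: the category node is doing real inferential work.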

"Category boundaries" are just a _visual metaphor_ for the math: the set of things I'll classify as a blegg with probability greater than _p_ is conveniently _visualized_ as an area with a boundary in blueness–eggness space. If you _don't understand_ the relevant math and philosophy—or are pretending not to understand only and exactly when it's politically convenient—you might think you can redraw the boundary any way you want. But you can't, because the "boundary" visualization is _derived from_ a statistical model which corresponds to _empirically testable predictions about the real world_.
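
To see how the boundary is derived rather than freely chosen, here's a continuous sketch. The Gaussian observation models and all parameter values are hypothetical, chosen only to make the geometry visible:

```python
import math

# Continuous toy model: blueness and eggness are real-valued features,
# modeled as Gaussian given category membership. The decision "boundary"
# {x : P(blegg | x) > threshold} falls out of these distributions; it is
# not a free parameter you can redraw independently of the model.

p_blegg = 0.5
# (mean, std) of each feature conditional on blegg / not-blegg; made-up values.
params = {
    True:  {"blueness": (0.8, 0.1), "eggness": (0.8, 0.1)},
    False: {"blueness": (0.2, 0.1), "eggness": (0.2, 0.1)},
}

def gaussian_pdf(x, mean, std):
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def posterior(blueness, eggness):
    """P(blegg | blueness, eggness) under the factorized model."""
    def joint(is_blegg):
        prior = p_blegg if is_blegg else 1 - p_blegg
        mb, sb = params[is_blegg]["blueness"]
        me, se = params[is_blegg]["eggness"]
        return prior * gaussian_pdf(blueness, mb, sb) * gaussian_pdf(eggness, me, se)
    jt, jf = joint(True), joint(False)
    return jt / (jt + jf)

def classified_as_blegg(blueness, eggness, threshold=0.5):
    return posterior(blueness, eggness) > threshold

# With these symmetric parameters, the p = 0.5 boundary works out to the
# line blueness + eggness = 1; it moves only if the underlying
# distributions (the territory's statistics) move.
```

Redrawing the boundary without changing the distributions just means the classifier's outputs no longer match the posterior probabilities the model assigns—i.e., worse predictions.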

Fucking with category boundaries corresponds to fucking with the model, which corresponds to fucking with your ability to interpret sensory data. The only two reasons you could _possibly_ want to do this would be to wirehead yourself (corrupt your map to make the territory look nicer than it really is, making yourself _feel_ happier at the cost of sabotaging your ability to navigate the real world) or as information warfare (corrupt shared maps to sabotage other agents' ability to navigate the real world, in a way such that you benefit from their confusion).

-----

[psychological unity of humankind and sex]
https://www.lesswrong.com/posts/Cyj6wQLW6SeF6aGLy/the-psychological-unity-of-humankind

----

[ppl don't click links—quick case for AGP—80% is not 100, but]

-----

["delusional perverts", no one understands me]