https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy
https://www.lesswrong.com/posts/vhp2sW6iBhNJwqcwP/blood-is-thicker-than-water


]

[TODO: sentences about studies showing that HRT doesn't erase male advantage
https://twitter.com/FondOfBeetles/status/1368176581965930501
https://link.springer.com/article/10.1007/s40279-020-01389-3
https://bjsm.bmj.com/content/55/15/865
]

[TODO sentences about Lia Thomas and Cece Telfer https://twitter.com/FondOfBeetles/status/1466044767561830405 (Thomas and Telfer's feats occurred after Yudkowsky's 2018 Tweets, but this kind of thing was easily predictable to anyone familiar with sex differences—cite South Park)
https://www.dailymail.co.uk/news/article-10445679/Lia-Thomas-UPenn-teammate-says-trans-swimmer-doesnt-cover-genitals-locker-room.html
https://twitter.com/sharrond62/status/1495802345380356103 Lia Thomas event coverage
https://www.realityslaststand.com/p/weekly-recap-lia-thomas-birth-certificates Zippy inv. cluster graph!
https://www.swimmingworldmagazine.com/news/a-look-at-the-numbers-and-times-no-denying-the-advantages-of-lia-thomas/
]



Writing out this criticism now, the situation doesn't feel _confusing_, anymore. Yudkowsky was very obviously being intellectually dishonest in response to very obvious political incentives. That's a thing that public intellectuals do. And, again, I agree that the distinction between facts and policy decisions _is_ a valid one, even if I thought it was being selectively invoked here as an [isolated demand for rigor](http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) because of the political context. Coming from _anyone else in the world_, I would have considered the thread fine—a solidly above-average performance, really. I wouldn't have felt confused or betrayed at all. Coming from Eliezer Yudkowsky, it was—confusing.

Because of my hero worship, "he's being intellectually dishonest in response to very obvious political incentives" wasn't in my hypothesis space; I _had_ to assume the thread was an "honest mistake" in his rationality lessons, rather than (what it actually was, what it _obviously_ actually was) hostile political action.









, then you're knowably, predictably making your _readers_ that much stupider.

, which has negative consequences for your "advancing the art of human rationality" project, even if you haven't said anything false—particularly because people look up to you as the one who taught them to aspire to a _[higher](https://www.lesswrong.com/posts/DoLQN5ryZ9XkZjq5h/tsuyoku-naritai-i-want-to-become-stronger) [standard](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible)_ [than](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) [merely not-lying](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly).






Similarly with categories in general, and sex (or "gender") categorization in particular. It's true that the same word can be used in many ways depending on context. But you're _not done_ dissolving the question just by making that observation. And the one who triumphantly shouts in the public square,




An illustrative example: like many gender-dysphoric males, I cosplay female characters at conventions sometimes. And, unfortunately, like many gender-dysphoric males, I'm *not very good at it*. I think someone looking at [my cosplay photos](https://www.facebook.com/zmdavis/media_set?set=a.10155131901020199&type=3) and trying to describe their content in clear language—not trying to be nice to anyone or make a point, but just trying to use language as a map that reflects the territory—would say something like, "This is a photo of a man and he's wearing a dress." The word *man* in that sentence is expressing *cognitive work*: it's a summary of the [lawful cause-and-effect evidential entanglement](https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence) whereby the photons reflecting off the photograph are correlated with photons reflecting off my body at the time the photo was taken, which are correlated with my externally-observable secondary sex characteristics (facial structure, beard shadow, *&c.*), from which evidence an agent using an [efficient naïve-Bayes-like model](http://lesswrong.com/lw/o8/conditional_independence_and_naive_bayes/) can assign me to its "man" category and thereby make probabilistic predictions about some of my traits that aren't directly observable from the photo, and achieve a better [score on those predictions](http://yudkowsky.net/rational/technical/) than if the agent had assigned me to its "woman" category, where by "traits" I mean not *just* chromosomes ([as you suggested on Twitter](https://twitter.com/ESYudkowsky/status/1067291243728650243)), but the *conjunction* of chromosomes *and* reproductive organs _and_ muscle mass (sex difference effect size of [Cohen's *d*](https://en.wikipedia.org/wiki/Effect_size#Cohen's_d)≈2.6) *and* Big Five Agreeableness (*d*≈0.5) *and* Big Five Neuroticism (*d*≈0.4) *and* short-term memory (*d*≈0.2, favoring women) *and* white-to-gray-matter ratios in the brain *and* probable socialization history *and* [lots of other things](https://en.wikipedia.org/wiki/Sex_differences_in_human_physiology)—including differences we might not necessarily currently know about, but have prior reasons to suspect exist: no one _knew_ about sex chromosomes before 1905, but given all the other systematic differences between women and men, it would have been a reasonable guess (that turned out to be correct!) to suspect the existence of some sort of molecular mechanism of sex determination.
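(The naïve-Bayes inference I'm describing can be made concrete with a toy calculation. This is purely illustrative, not a claim about real psychometric data: I'm assuming each trait is unit-variance Gaussian within each sex, with the class means separated by the Cohen's *d* values quoted above, and I'm assuming conditional independence between traits, which is exactly the naïve-Bayes simplification.)

```python
import math

# Toy model: each trait is a unit-variance Gaussian within each sex,
# with class means separated by the Cohen's d values quoted above.
# Sign convention: positive d = male mean higher; negative = female
# mean higher (short-term memory favors women per the text; the
# Agreeableness/Neuroticism directions follow the usual findings).
EFFECT_SIZES = {
    "muscle_mass": 2.6,
    "agreeableness": -0.5,
    "neuroticism": -0.4,
    "short_term_memory": -0.2,
}

def log_odds_male(observations, prior_odds=1.0):
    """Naive-Bayes log-odds of 'man' vs. 'woman', given standardized
    trait observations (z-scores on the pooled male+female scale)."""
    log_odds = math.log(prior_odds)
    for trait, z in observations.items():
        d = EFFECT_SIZES[trait]
        mu_m, mu_f = d / 2.0, -d / 2.0
        # Log-likelihood ratio of two unit-variance Gaussians:
        log_odds += ((z - mu_f) ** 2 - (z - mu_m) ** 2) / 2.0
    return log_odds

# One strongly sexed trait (muscle mass, d≈2.6) moves the posterior
# far more than a weakly sexed one (Agreeableness, d≈0.5):
print(log_odds_male({"muscle_mass": 1.3}))      # ≈ 3.38 nats toward "man"
print(log_odds_male({"agreeableness": -0.25}))  # ≈ 0.125 nats toward "man"
```

(Which is just the quantitative version of the point: the large-*d* traits do most of the cognitive work, but the small-*d* traits still contribute evidence, and the category summarizes the whole conjunction.)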

Making someone say "trans woman" instead of "man" in that sentence depending on my verbally self-reported self-identity may not be forcing them to *lie*. But it *is* forcing them to obfuscate the probabilistic inference they were trying to communicate with the original sentence (about modeling the person in the photograph as being sampled from the "men" [cluster in configuration space](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace)), and instead use language that suggests a different cluster-structure ("trans women", two words, are presumably a subcluster within the "women" cluster). This encoding might not confuse a well-designed AI into making any bad predictions, but [as you explained very clearly, it probably will confuse humans](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences):

> You can see this in terms of similarity clusters: once you draw a boundary around a group, the mind starts trying to harvest similarities from the group. And unfortunately the human pattern-detectors seem to operate in such overdrive that we see patterns whether they're there or not; a weakly negative correlation can be mistaken for a strong positive one with a bit of selective memory.

(I _want_ to confidently predict that everything I've just said is completely obvious to you, because I learned it all specifically from you! A 130 IQ _nobody_ like me shouldn't have to say _any_ of this to the _author_ of "A Human's Guide to Words"! But then I don't know how to reconcile that with your recent public statement about [not seeing "how there's scientific truth at stake"](https://twitter.com/ESYudkowsky/status/1067482047126495232). Hence this desperate and [_confused_](https://www.lesswrong.com/posts/5JDkW4MYXit2CquLs/your-strength-as-a-rationalist) email plea.)