+I wasn't ready to hear it then, but—I mean, probably not? So, for the _most_ part, all humans are extremely similar: [as Yudkowsky would soon write about](https://www.lesswrong.com/posts/Cyj6wQLW6SeF6aGLy/the-psychological-unity-of-humankind) [(following Leda Cosmides and John Tooby)](https://www.cep.ucsb.edu/papers/pfc92.pdf), complex functional adaptations have to be species-universal in order to not get scrambled during meiosis. As a toy example, if some organelle gets assembled from ten genes, those ten alleles _all_ have to be nearly universal in the population—if each only had a frequency of 0.9, then the probability of getting them all right would only be 0.9<sup>10</sup> ≈ 0.349. If allele H [epistatically](https://en.wikipedia.org/wiki/Epistasis) only confers a fitness advantage when allele G at some other locus is already present, then G has to already be well on its way to fixation in order for there to be appreciable selective pressure for H. Evolution, feeding on variation, uses it up. Complicated functionality that requires multiple genes working in concert can only accrete gradually as each individual piece reaches fixation in the entire population, resulting in an intricate species-universal _design_: just about everyone has 206 bones, two lungs, a liver, a visual cortex, _&c_.
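The toy arithmetic checks out in a couple of lines (a sketch; the 0.9 allele frequency and the ten-gene organelle are just the hypothetical numbers above):

```python
# If each of ten necessary alleles independently has frequency 0.9 in the
# population, the chance of inheriting all ten correctly is 0.9^10.
p_allele = 0.9
n_loci = 10
p_all_correct = p_allele ** n_loci
print(round(p_all_correct, 3))  # 0.349
```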
+
+In this way (contrary to the uninformed suspicions of those still faithful to the blank slate), evolutionary psychology actually turns out to be an impressively antiracist discipline: maybe individual humans can differ in small ways like personality, or [ancestry-groups in small ways](/2020/Apr/book-review-human-diversity/#ancestries) like skin color, but these are, and _have_ to be, "shallow" low-complexity variations on the same basic human design; new _complex_ functionality would require speciation.
+
+This luck does not extend to antisexism. If the genome were a computer program, it would have `if female { /* ... */ } else if male { /* ... */ }` conditional blocks, and inside those blocks, you can have complex sex-specific functionality. By default, selection pressures on one sex tend to drag the other along for the ride—men have nipples because there's no particular reason for them not to—but in those cases where it was advantageous in the environment of evolutionary adaptedness for females and males to do things _differently_, sexual dimorphism can evolve (slowly—[more than one and a half orders of magnitude slower than monomorphic adaptations](/papers/rogers-mukherjee-quantitative_genetics_of_sexual_dimorphism.pdf), in fact).
+
+The evolutionary theorist Robert Trivers wrote, "One can, in effect, treat the sexes as if they were different species, the opposite sex being a resource relevant to producing maximum surviving offspring" (!). There actually isn't one species-universal design—it's _two_ designs.
+
+If you're willing to admit to the possibility of psychological sex differences _at all_, you have to admit that sex differences in the parts of the mind that are _specifically about mating_ are going to be a prime candidate. (But by no means the only one—different means of reproduction have different implications for [life-history strategies](https://en.wikipedia.org/wiki/Life_history_theory) far beyond the act of mating itself.) Even if there's a lot of "shared code" in how love-and-attachment works in general, there are also going to be specific differences that were [optimized for](https://www.lesswrong.com/posts/8vpf46nLMDYPC6wA4/optimization-and-the-intelligence-explosion) facilitating males impregnating females. In that sense, the claim that "the love of a man for a woman, and the love of a woman for a man, have not been cognitively derived from each other" just seems commonsensically _true_.
+
+I guess if you _didn't_ grow up with a quasi-religious fervor for psychological sex differences denialism, this whole theoretical line of argument about evolutionary psychology doesn't seem world-shatteringly impactful?—maybe it just looks like supplementary Science Details brushed over some basic facts of human existence that everyone knows. But if you _have_ built your identity around [quasi-religious _denial_](/2020/Apr/peering-through-reverent-fingers/) of certain basic facts of human existence that everyone knows (if not everyone [knows that they know](https://www.lesswrong.com/posts/CqyJzDZWvGhhFJ7dY/belief-in-belief)), getting forced out of it by sufficient weight of Science Details [can be a pretty rough experience](https://www.greaterwrong.com/posts/XM9SwdBGn8ATf8kq3/c/comment/Zv5mrMThBkkjDAqv9).
+
+My hair-trigger antisexism was sort of lurking in the background of some of my comments while the Sequences were being published (though, again, it wasn't relevant to _most_ posts, which were just about cool math and science stuff that had no avenue whatsoever for being corrupted by gender politics). The term "social justice warrior" wasn't yet popular, but I definitely had a SJW-alike mindset (nurtured from my time lurking the feminist blogosphere) of being preoccupied with the badness and wrongness of people who are wrong and bad (_i.e._, sexist), rather than trying to maximize the accuracy of my probabilistic predictions.
+
+Another one of the little song-fragments I wrote in my head a few years earlier (which I mention for its being representative of my attitude at the time, rather than it being notable in itself), concerned an advice columnist, [Amy Alkon](http://www.advicegoddess.com/), syndicated in the _Contra Costa Times_ of my youth, who would sometimes give dating advice based on a pop-evopsych account of psychological sex differences—the usual fare about women seeking commitment and men seeking youth and beauty. My song went—
+
+> _I hope Amy Alkon dies tonight
+> So she can't give her bad advice
+> No love or value save for evolutionary psych_
+>
+> _I hope Amy Alkon dies tonight
+> Because the world's not girls and guys
+> Cave men and women fucking 'round the fire in the night_
+
+Looking back with the outlook later acquired from my robot cult, this is _abhorrent_. You don't _casually wish death_ on someone just because you disagree with their views on psychology! (Also, casually wishing death on a woman for her views does not seem particularly pro-feminist?!) Even if it wasn't in a spirit of personal malice (this was a song I sang to myself, not an actual threat directed to Amy Alkon's inbox), the sentiment just _isn't done_. But at the time, I _didn't notice there was anything wrong with my song_. I hadn't yet been socialized into the refined ethos of "False ideas should be argued with, but heed that we too may have ideas that are false".
+
+[TODO Vassar anecdote]
+
+Sex differences would come up a couple more times in one of the last Sequences, on ["Fun Theory"](https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence)—speculations on how life could be truly _good_ if the world were superintelligently optimized for human values, in contrast to the cruelty and tragedy of our precarious existence [in a world shaped only by blind evolutionary forces](https://www.lesswrong.com/posts/sYgv4eYH82JEsTD34/beyond-the-reach-of-god).
+
+According to Yudkowsky, one of the ways in which people's thinking about artificial intelligence usually goes wrong is [anthropomorphism](https://www.lesswrong.com/posts/RcZeZt8cPk48xxiQ8/anthropomorphic-optimism)—expecting arbitrary AIs to behave like humans, when really "AI" corresponds to [a much larger space of algorithms](https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general). As a social animal, predicting other humans is one of the things we've evolved to be good at, and the way that works is probably via "empathic inference": [I predict your behavior by imagining what _I_ would do in your situation](https://www.lesswrong.com/posts/Zkzzjg3h7hW5Z36hK/humans-in-funny-suits). Since all humans are very similar, [this appeal-to-black-box](https://www.lesswrong.com/posts/9fpWoXpNv83BAHJdc/the-comedy-of-behaviorism) works pretty well in our lives (though it won't work on AI). And from this empathy, evolution also coughed up the [moral miracle](https://www.lesswrong.com/posts/pGvyqAQw6yqTjpKf4/the-gift-we-give-to-tomorrow) of [_sympathy_, intrinsically caring about what others feel](https://www.lesswrong.com/posts/NLMo5FZWFFq652MNe/sympathetic-minds).
+
+In ["Interpersonal Entanglement"](https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement), Yudkowsky appeals to the complex moral value of sympathy as an argument against the desirability of nonsentient sex partners (_catgirls_ being the technical term). Being emotionally intertwined with another actual person is one of the things that makes life valuable, that would be lost if people just had their needs met by soulless catgirl holodeck characters.
+
+But there's a problem, Yudkowsky argues: women and men aren't designed to make each other optimally happy. If I may put a pseudo-mathy poetic gloss on it: the abstract game between the two human life-history strategies in the environment of evolutionary adaptedness had a conflicting-interests as well as a shared-interests component, and human psychology still bears the design signature of that game denominated in inclusive fitness, even though [no one cares about inclusive fitness](https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers). (Peter Watts: ["And God smiled, for Its commandment had put Sperm and Egg at war with each other, even unto the day they made themselves obsolete."](https://www.rifters.com/real/Blindsight.htm)) The scenario of Total Victory for the ♂ player in the conflicting-interests subgame is not [Nash](https://en.wikipedia.org/wiki/Nash_equilibrium). The design of the entity who _optimally_ satisfied what men want out of women would not be, and _could_ not be, within the design parameters of actual women.
+
+(And _vice versa_ and respectively, but in case you didn't notice, this blog post is all about male needs.)
+
+Yudkowsky dramatized the implications in a short story, ["Failed Utopia #4-2"](https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2), portraying an almost-[aligned](https://arbital.com/p/ai_alignment/) superintelligence constructing a happiness-maximizing utopia for humans—except that because of the mismatch in the sexes' desires, and because the AI is prohibited from editing people's minds, the happiness-maximizing solution (according to the story) turns out to be splitting up the human species by sex and giving women and men their own _separate_ utopias (on [Venus and Mars](https://en.wikipedia.org/wiki/Gender_symbol#Origins), ha ha), complete with artificially-synthesized romantic partners.
+
+Of course no one _wants_ that—our male protagonist doesn't _want_ to abandon his wife and daughter for some catgirl-adjacent (if conscious) hussy. But humans _do_ adapt to loss; if the separation were already accomplished by force, people would eventually move on, and post-separation life with companions superintelligently optimized _for you_ would ([_arguendo_](https://en.wikipedia.org/wiki/Arguendo)) be happier than life with your real friends and family, whose goals will sometimes come into conflict with yours because they weren't superintelligently designed _for you_.
+
+The alignment-theory morals are those of [unforeseen maxima](https://arbital.greaterwrong.com/p/unforeseen_maximum) and [edge instantiation](https://arbital.greaterwrong.com/p/edge_instantiation). An AI designed to maximize happiness would kill all humans and tile the galaxy with maximally-efficient happiness-brainware. If this sounds "crazy" to you, that's the problem with anthropomorphism I was telling you about: [don't imagine "AI" as an emotionally-repressed human](https://www.lesswrong.com/posts/zrGzan92SxP27LWP9/points-of-departure), just think about [a machine that calculates what actions would result in what outcomes](https://web.archive.org/web/20071013171416/http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/), and does the action that would result in the outcome that maximizes some function. It turns out that picking a function that doesn't kill everyone looks hard. Just tacking on the constraints that you can think of (like making the _existing_ humans happy without tampering with their minds) [will tend to produce similar "crazy" outcomes that you didn't think to exclude](https://arbital.greaterwrong.com/p/nearest_unblocked).
+
+At the time, [I expressed horror](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/PhiGnX7qKzzgn2aKb) at "Failed Utopia #4-2" in the comments section, because my quasi-religious psychological-sex-differences denialism required that I be horrified. But looking back a dozen years later—[or even four years later](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/D34jhYBcaoE7DEb8d)—my performative horror was missing the point.
+
+_The argument makes sense_. Of course, it's important to notice that you'd need an additional [handwave](https://tvtropes.org/pmwiki/pmwiki.php/Main/HandWave) to explain why the AI in the story doesn't give every _individual_ their separate utopia—if existing women and men aren't optimal partners for each other, so too are individual men not optimal same-sex friends for each other. A faithful antisexist (as I was) might insist that that should be the _only_ moral, as it implies the other [_a fortiori_](https://en.wikipedia.org/wiki/Argumentum_a_fortiori). But if you're trying to _learn about reality_ rather than protect your fixed quasi-religious beliefs, it should be _okay_ for one of the lessons to get a punchy sci-fi short story; it should be _okay_ to think about the hyperplane between two coarse clusters, even while it's simultaneously true that you could build a wall around every individual point, without deigning to acknowledge the existence of clusters.
+
+On my reading of the text, it is _significant_ that the AI-synthesized complements for men are given their own name, the _verthandi_ (presumably after [the Norse deity that determines men's fates](https://en.wikipedia.org/wiki/Ver%C3%B0andi)), rather than just being referred to as women. The _verthandi_ may _look like_ women, they may be _approximately_ psychologically human, but since the _detailed_ psychology of "superintelligently-engineered optimal romantic partner for a human male" is not going to come out of the distribution of actual human females, judicious exercise of the [tenth virtue of precision](http://yudkowsky.net/rational/virtues/) demands that a _different word_ be coined for this hypothetical science-fictional type of person. Calling the _verthandi_ "women" would be _worse writing_; it would _fail to communicate_ the impact of what has taken place in the story.
+
+Another post in this vein that had a huge impact on me was ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). As an illustration of how [the hope for radical human enhancement is fraught with](https://www.lesswrong.com/posts/EQkELCGiGQwvrrp3L/growing-up-is-hard) technical difficulties, Yudkowsky sketches a picture of just how difficult an actual male-to-female sex change would be.
+
+It would be hard to overstate how much of an impact this post had on me. I've previously linked it on [this](/2016/Nov/reply-to-ozy-on-two-type-mtf-taxonomy/#changing-emotions-link) [blog](/2017/Jan/the-line-in-the-sand-or-my-slippery-slope-anchoring-action-plan/#changing-emotions-link) [five](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/#changing-emotions-link) [different](/2018/Dec/untitled-metablogging-26-december-2018/#changing-emotions-link) [times](/2019/Aug/the-social-construction-of-reality-and-the-sheer-goddamned-pointlessness-of-reason/#changing-emotions-link). In June 2008, half a year before it was published, I encountered the [2004 Extropians mailing list post](http://lists.extropy.org/pipermail/extropy-chat/2004-September/008924.html) that the blog post had clearly been revised from. (The fact that I was trawling through old mailing list archives searching for Yudkowsky content that I hadn't already read, tells you something about what a fanboy I am—if, um, you hadn't already noticed.) I immediately wrote to a friend: "[...] I cannot adequately talk about my feelings. Am I shocked, liberated, relieved, scared, angry, amused?"
+
+The argument goes: it might be easy to _imagine_ changing sex and refer to the idea in a short English sentence, but the real physical world has implementation details, and the implementation details aren't filled in by the short English sentence. The human body, including the brain, is an enormously complex integrated organism; there's no [plug-and-play](https://en.wikipedia.org/wiki/Plug_and_play) architecture by which you can just swap your brain into a new body and have everything Just Work without re-mapping the connections in your motor cortex. And even that's not _really_ a sex change, as far as the whole integrated system is concerned—
+
+> Remapping the connections from the remapped somatic areas to the pleasure center will ... give you a vagina-shaped penis, more or less. That doesn't make you a woman. You'd still be attracted to girls, and no, that would not make you a lesbian; it would make you a normal, masculine man wearing a female body like a suit of clothing.
+>
+> [...]
+>
+> So to actually _become female_ ...
+>
+> We're talking about a _massive_ transformation here, billions of neurons and trillions of synapses rearranged. Not just form, but content—just like a male judo expert would need skills repatterned to become a female judo expert, so too, you know how to operate a male brain but not a female brain. You are the equivalent of a judo expert at one, but not the other. You have _cognitive_ reflexes, and consciously learned cognitive skills as well.
+>
+> [...]
+>
+> What happens when, as a woman, you think back to your memory of looking at Angelina Jolie photos as a man? How do you _empathize_ with your _past self_ of the opposite sex? Do you flee in horror from the person you were? Are all your life's memories distant and alien things? How can you _remember_, when your memory is a recorded activation pattern for neural circuits that no longer exist in their old forms? Do we rewrite all your memories, too?
+
+But, well ... I mean, um ...
+
+(I still really don't want to be blogging about this, but _somebody has to and no one else will_)
+
+From the standpoint of my secret erotic fantasy, "normal, masculine man wearing a female body like a suit of clothing" is actually a _great_ outcome—the _ideal_ outcome. Let me explain.
+
+The main plot of my secret erotic fantasy accommodates many frame stories, but I tend to prefer those that invoke the [literary genre of science](https://www.lesswrong.com/posts/4Bwr6s9dofvqPWakn/science-as-attire), and posit "technology" rather than "spells" or "potions" as the agent of transformation, even if it's all ultimately magic (where ["magic" is a term of art for anything you don't understand how to implement as a computer program](https://www.lesswrong.com/posts/kpRSCH7ALLcb6ucWM/say-not-complexity)).
+
+So imagine having something like [the transporter in _Star Trek_](https://memory-alpha.fandom.com/wiki/Transporter), but you re-materialize with the body of someone else, rather than your original body—a little booth I could walk in, dissolve in a tingly glowy special effect for a few seconds, and walk out looking like (say) [Nana Visitor (circa 1998)](https://memory-alpha.fandom.com/wiki/Kay_Eaton?file=Kay_Eaton.jpg). (In the folklore of [female-transformation erotica](/2016/Oct/exactly-what-it-says-on-the-tin/), this machine is often called the ["morphic adaptation unit"](https://www.cyoc.net/interactives/chapter_115321.html).)
+
+As "Changing Emotions" points out, this high-level description of a hypothetical fantasy technology leaves many details unspecified—not just the _how_, but the _what_. What would the indistinguishable-from-magical transformation booth do to my brain? [As a preference-revealing thought experiment](https://www.lesswrong.com/posts/DdEKcS6JcW7ordZqQ/not-taking-over-the-world), what would I _want_ it to do, if I can't change [the basic nature of reality](https://www.lesswrong.com/posts/tPqQdLCuxanjhoaNs/reductionism), but if engineering practicalities weren't a constraint? (That is, I'm allowed to posit any atom-configuration without having to worry about how you would get all the atoms in the right place, but I'm not allowed to posit tethering my immortal soul to a new body, because [souls](https://www.lesswrong.com/posts/u6JzcFtPGiznFgDxP/excluding-the-supernatural) [aren't](https://www.lesswrong.com/posts/7Au7kvRAPREm3ADcK/psychic-powers) [real](https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies).)
+
+The anti-plug-and-play argument makes me confident that it would have to change _something_ about my mind in order to integrate it with a new female body—if nothing else, my unmodified brain doesn't physically _fit_ inside Nana Visitor's skull. ([One meta-analysis puts the sex difference in intracranial volume and brain volume at](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3969295/) a gaping [Cohen's _d_](/2019/Sep/does-general-intelligence-deflate-standardized-effect-sizes-of-cognitive-sex-differences/) ≈ 3.0 and 2.1, respectively, and Visitor doesn't look like she has an unusually large head.)
+
+Fine—we're assuming that difficulty away and stipulating that the magical transformation booth can make the _minimal_ changes necessary to put my brain in a female body, and have it fit, and have all the motor-connection/body-mapping stuff line up so that I can move and talk normally in a body that feels like mine, without being paralyzed or needing months of physical therapy to re-learn how to walk.
+
+I want this more than I can say. But is that _all_ I want? What about all the _other_ sex differences in the brain? Male brains are more lateralized—doing [relatively more communication within hemispheres rather than between](https://www.pnas.org/content/111/2/823); there are language tasks that women and men perform equally well on, but [men's brains use only the _left_ inferior frontal gyrus, whereas women's use both](/papers/shaywitz-et_al-sex_differences_in_the_functional_organization_of_the_brain_for_language.pdf). Women have a relatively thicker corpus callosum; men have a relatively larger amygdala. Fetal testosterone levels [increase the amount of gray matter in posterior lateral orbitofrontal cortex, but decrease the gray matter in Wernicke's area](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3306238/) ...
+
+Do I want the magical transformation technology to fix all that, too?
+
+Do I have _any idea_ what it would even _mean_ to fix all that, without spending multiple lifetimes studying neuroscience?
+
+I think I have just enough language to _start_ to talk about what it would mean. Since sex isn't an atomic attribute, but rather a high-level statistical regularity such that almost everyone can be cleanly classified as "female" or "male" _in terms of_ lower-level traits (genitals, hormone levels, _&c._), then, abstractly, we're trying to take points from the male distribution and map them onto the female distribution in a way that preserves as much structure (personal identity) as possible. My female analogue doesn't have a penis like me (because then she wouldn't be female), but she is going to speak American English like me and be [85% Ashkenazi like me](/images/ancestry_report.png), because language and autosomal genes don't have anything to do with sex.
+
+The hard part has to do with traits that are meaningfully sexually dimorphic, but not as a discrete dichotomy—where the sex-specific universal designs differ in ways that are _subtler_ than the presence or absence of entire reproductive organs. (Yes, I know about [homology](https://en.wikipedia.org/wiki/Homology_(biology))—and _you_ know what I meant.) We are _not_ satisfied if the magical transformation technology swaps out my penis and testicles for a functioning female reproductive system without changing the rest of my body, because we want the end result to be indistinguishable from having been drawn from the female distribution (at least, indistinguishable _modulo_ having my memories of life as a male before the magical transformation), and a man-who-somehow-magically-has-a-vagina doesn't qualify.
+
+The "obvious" way to to do the mapping is to keep the same percentile rank within each trait (given some suitably exhaustive parsing and factorization of the human design into individual "traits"), but take it with respect to the target sex's distribution. I'm 5′11″ tall, which [puts me at](https://dqydj.com/height-percentile-calculator-for-men-and-women/) the 73rd percentile for American men, about 6/10ths of a standard deviation above the mean. So _presumably_ we want to say that my female analogue is at the 73rd percentile for American women, about 5′5½″.
+
+You might think this is "unfair": some women—about 7 per 1000—are 5′11″, and we don't want to say they're somehow _less female_ on that account, so why can't I keep my height? The problem is that if we refuse to adjust for every trait for which the female and male distributions overlap (on the grounds that _some_ women have the same trait value as my male self), we don't end up with a result from the female distribution.
+
+The typical point in a high-dimensional distribution is _not_ typical along each dimension individually. [In 100 flips of a biased coin](http://zackmdavis.net/blog/2019/05/the-typical-set/) that lands Heads 0.6 of the time, the _single_ most likely sequence is 100 Heads, but there's only one of those and you're _vanishingly_ unlikely to actually see it. The [sequences you'll actually observe will have close to 60 Heads](https://en.wikipedia.org/wiki/Asymptotic_equipartition_property). Each such sequence is individually less probable than the all-Heads sequence, but there are vastly more of them. Similarly, [most of the probability-mass of a high-dimensional multivariate normal distribution is concentrated in a thin "shell" some distance away from the mode](https://www.johndcook.com/blog/2011/09/01/multivariate-normal-shell/), for the same reason. (The _same_ reason: the binomial distribution converges to the normal in the limit of large _n_.)
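The biased-coin numbers are easy to verify exactly (a sketch using the binomial formula; the 0.6 bias and 100 flips are the example above):

```python
from math import comb

p = 0.6   # per-flip probability of Heads
n = 100

def prob_k_heads(k: int) -> float:
    """Probability of getting exactly k Heads in n flips of the biased coin."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

all_heads = prob_k_heads(100)  # the single most likely *sequence*, but...
near_typical = sum(prob_k_heads(k) for k in range(55, 66))

print(all_heads)     # ~6.5e-23: you will never actually see it
print(near_typical)  # most of the probability mass sits near 60 Heads
```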
+
+Statistical sex differences are like flipping two different collections of coins with different biases, where the coins represent various traits. Learning the outcome of any individual flip doesn't tell you which set that coin came from, but [if we look at the aggregation of many flips, we can get _godlike_ confidence](https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy-1) as to which collection we're looking at.
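The aggregation can be sketched as accumulating a log-likelihood ratio across traits (the 0.6/0.4 biases and 100 traits here are made-up numbers chosen so each trait alone is weak evidence):

```python
import math
import random

random.seed(0)

# Two "collections of coins": per-trait Heads-probabilities for each class.
biases_a = [0.6] * 100
biases_b = [0.4] * 100

def log_odds(flips):
    """Accumulated log-likelihood ratio; positive favors collection A."""
    total = 0.0
    for flip, pa, pb in zip(flips, biases_a, biases_b):
        total += math.log((pa if flip else 1 - pa) / (pb if flip else 1 - pb))
    return total

# Sample many individuals from collection A; each flip alone is nearly
# uninformative, but the summed evidence classifies them very reliably.
trials = 2000
correct = sum(
    log_odds([random.random() < p for p in biases_a]) > 0
    for _ in range(trials)
)
print(correct / trials)  # accuracy far above chance
```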
+
+A single-variable measurement like height is like a single coin: unless the coin is _very_ biased, one flip can't tell you much about the bias. But there are lots of things about people for which it's not that they can't be measured, but that the measurements require _more than one number_—which correspondingly offer more information about the distribution generating them.
+
+And knowledge about the distribution is genuinely informative. Occasionally you hear progressive-minded people dismiss and disdain simpleminded transphobes who believe that chromosomes determine sex, when actually, most people haven't been karyotyped and don't _know_ what chromosomes they have. (Um, with respect to some sense of "know" that doesn't care how unsurprised I was that my 23andMe results came back with a _Y_ and would have bet on it at very generous odds.)
+
+Certainly, I agree that almost no one interacts with sex chromosomes on a day-to-day basis; no one even knew that sex chromosomes _existed_ before 1905. [(Co-discovered by a woman!)](https://en.wikipedia.org/wiki/Nettie_Stevens) But the function of [intensional definitions](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions) in human natural language isn't to exhaustively [pinpoint](https://www.lesswrong.com/posts/3FoMuCLqZggTxoC3S/logical-pinpointing) a concept in the detail it would be implemented in an AI's executing code, but rather to provide a "treasure map" sufficient for a listener to pick out the corresponding concept in their own world-model: that's why [Diogenes exhibiting a plucked chicken in response to Plato's definition of a human as a "featherless biped"](https://www.lesswrong.com/posts/jMTbQj9XB5ah2maup/similarity-clusters) seems like a cheap "gotcha"—we all instantly know that's not what Plato meant. ["The challenge is figuring out which things are similar to each other—which things are clustered together—and sometimes, which things have a common cause."](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) But sex chromosomes, and to a large extent specifically the [SRY gene](https://en.wikipedia.org/wiki/Testis-determining_factor) located on the Y chromosome, _are_ such a common cause—the root of the [causal graph](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models) underlying all _other_ sex differences. A smart natural philosopher living _before_ 1905, knowing about all the various observed differences between women and men, might have guessed at the existence of some molecular mechanism of sex determination, and been _right_. 
By the "treasure map" standard, "XX is female; XY is male" is a pretty _well-performing_ definition—if you're looking for a [_simple_ membership test](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests) that's entangled with a lot of information about the many intricate ways in which females and males statistically differ.
+
+Take faces. People are [verifiably very good at recognizing sex from (hair covered, males clean-shaven) photographs of people's faces](/papers/bruce_et_al-sex_discrimination_how_do_we_tell.pdf) (96% accuracy, which is the equivalent of _d_ ≈ 3.5), but we don't have direct introspective access into what _specific_ features our brains are using to do it; we just look, and _somehow_ know. The differences are real [(a computer statistical model gets up to 99.47% accuracy)](https://royalsocietypublishing.org/doi/10.1098/rspb.2015.1351#d3e949), but it's not a matter of any single, simple measurement you could perform with a ruler (like the distance between someone's eyes). Rather, it's a high-dimensional _pattern_ in many such measurements you could take with a ruler, no one of which is definitive. [Covering up the nose makes people slower and slightly worse at sexing faces, but people don't do better than chance at guessing sex from photos of noses alone](/papers/roberts-bruce-feature_saliency_in_judging_the_sex_and_familiarity_of_faces.pdf).
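The accuracy-to-effect-size conversion comes from modeling the two classes as unit-variance normals _d_ apart: an ideal observer thresholds at the midpoint, so accuracy is Φ(_d_/2). A quick check:

```python
from statistics import NormalDist

def accuracy_from_d(d: float) -> float:
    """Ideal-observer accuracy for two unit-variance normal classes separated by d."""
    return NormalDist().cdf(d / 2)

print(accuracy_from_d(3.5))  # ≈ 0.96, matching the cited 96% for faces
```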
+
+Notably, for _images_ of faces, we actually _do_ have transformation technology! (Not "magical", because we know how it works.) AI techniques like [generative adversarial networks](https://arxiv.org/abs/1812.04948) and [autoencoders](https://towardsdatascience.com/generating-images-with-autoencoders-77fd3a8dd368) can learn the structure of the distribution of facial photographs, and use that knowledge to synthesize faces from scratch (as demonstrated by [_thispersondoesnotexist.com_](https://thispersondoesnotexist.com/))—or [do things like](https://arxiv.org/abs/1907.10786) sex transformation (as demonstrated by [FaceApp](https://www.faceapp.com/), the _uniquely best piece of software in the world_).
+
+If you let each pixel vary independently, the space of possible 1024x1024 images is 1,048,576-dimensional, but the vast hypermajority of those images aren't photorealistic human faces. Letting each pixel vary independently is the wrong way to think about it: changing the lighting or pose changes a lot of pixels in what humans would regard as images of "the same" face. So instead, our machine-learning algorithms learn a [compressed](https://www.lesswrong.com/posts/ex63DPisEjomutkCw/msg-len) representation of what makes the tiny subspace (relative to images-in-general) of faces-in-particular similar to each other. That [latent space](https://towardsdatascience.com/understanding-latent-space-in-machine-learning-de5a7c687d8d) is a lot smaller (say, 512 dimensions), but still rich enough to embed the high-level distinctions that humans notice: [you can find a hyperplane that separates](https://youtu.be/dCKbRCUyop8?t=1433) smiling from non-smiling faces, or glasses from no-glasses, or young from old, or different races—or female and male. Sliding along the [normal vector](https://en.wikipedia.org/wiki/Normal_(geometry)) to that [hyperplane](https://en.wikipedia.org/wiki/Hyperplane) gives the desired transformation: producing images that are "more female" (as the model has learned that concept) while keeping "everything else" the same.
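A crude stand-in for the hyperplane trick: use the difference of class means as the direction and slide along it. (Real systems fit the separating hyperplane properly, e.g. with a linear classifier on the latents; the toy "latents" and labels below are entirely made up for illustration.)

```python
import random

random.seed(0)
DIM = 512  # dimensionality of the toy latent space

def toy_latent(label):
    """Fake latent vector whose first few coordinates weakly encode the label."""
    z = [random.gauss(0.0, 1.0) for _ in range(DIM)]
    for i in range(8):
        z[i] += 1.0 if label else -1.0
    return z

labels = [0, 1] * 200
latents = [toy_latent(lab) for lab in labels]

# Cheap estimate of the hyperplane's normal vector: difference of class means.
def mean(vectors):
    return [sum(column) / len(vectors) for column in zip(*vectors)]

mu1 = mean([z for z, lab in zip(latents, labels) if lab == 1])
mu0 = mean([z for z, lab in zip(latents, labels) if lab == 0])
direction = [a - b for a, b in zip(mu1, mu0)]

def slide(z, alpha):
    """Move a latent along the normal direction: 'more label-1', rest unchanged."""
    return [zi + alpha * di for zi, di in zip(z, direction)]
```

Decoding `slide(z, alpha)` for increasing `alpha` is what produces the "more female, everything else the same" image sequence in the real systems.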
+
+Two-dimensional _images_ of people are _vastly_ simpler than the actual people themselves in the real physical universe. But _in theory_, a lot of the same _mathematical principles_ would apply to hypothetical future nanotechnology-wielding AI systems that could, like the AI in "Failed Utopia #4-2", synthesize a human being from scratch (this-person-_didn't_-exist-dot-com?), or do a real-world sex transformation (PersonApp?)—and the same statistical morals apply to reasoning about sex differences in psychology and (which is to say) the brain.
+
+Daphna Joel _et al._ [argue](https://www.pnas.org/content/112/50/15468) [that](https://www.pnas.org/content/112/50/15468) human brains are "unique 'mosaics' of features" that cannot be categorized into distinct _female_ and _male_ classes, because it's rare for brains to be "internally consistent"—female-typical or male-typical along _every_ dimension. It's true and important that brains aren't _discretely_ sexually dimorphic the way genitals are, but as [Marco del Giudice _et al._ point out](http://cogprints.org/10046/1/Delgiudice_etal_critique_joel_2015.pdf), the "cannot be categorized into two distinct classes" claim seems false in an important sense. The lack of "internal consistency" in Joel _et al._'s sense is exactly the behavior we expect from multivariate normal-ish distributions with different-but-not-vastly-different means. (There aren't going to be many traits where the sexes are like, _four_ or whatever standard deviations apart.) It's just like how sequences of flips of a Heads-biased and Tails-biased coin are going to be unique "mosaics" of Heads and Tails, but pretty distinguishable with enough flips—and indeed, with the right stats methodology, [MRI brain scans can predict sex at 96.8% accuracy](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6374327/).
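+The coin-flip moral is easy to check by simulation. This sketch uses made-up numbers (ten independent traits, each with a Cohen's _d_ of 1—illustrative parameters, not real neuroimaging data): very few individuals are "internally consistent" across all ten traits, even though a simple sum of the traits classifies sex with high accuracy.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+# Ten traits, each with a modest sex difference of d = 1 standard deviation.
+n, traits, d = 10000, 10, 1.0
+female = rng.normal(loc=0.0, size=(n, traits))
+male = rng.normal(loc=d, size=(n, traits))
+
+# "Internal consistency" in Joel et al.'s sense: every trait falls on one's
+# own sex's side of the midpoint between the group means.
+midpoint = d / 2
+consistent_male = (male > midpoint).all(axis=1).mean()
+print(f"males male-typical on all 10 traits: {consistent_male:.1%}")
+
+# Yet summing the traits (the optimal linear classifier for this setup)
+# separates the groups well: the composite difference is d·√10 ≈ 3.2 SDs.
+threshold = traits * midpoint
+accuracy = ((male.sum(axis=1) > threshold).mean()
+            + (female.sum(axis=1) <= threshold).mean()) / 2
+print(f"classification accuracy from the sum: {accuracy:.1%}")
+```
+
+With these (made-up) parameters, only a few percent of males are male-typical on every trait, while the composite classifies over 90% of cases correctly—"mosaic" individuals and reliably separable groups at the same time.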
+
+Sex differences in the brain are like sex differences in the skeleton: anthropologists can tell female and male skeletons apart (the [pelvis is shaped differently](https://johnhawks.net/explainer/laboratory/sexual-dimorphism-pelvis), for obvious reasons), and [machine-learning models can see very reliable differences that human radiologists can't](/papers/yune_et_al-beyond_human_perception_sexual_dimorphism_in_hand_and_wrist_radiographs.pdf), but neither sex has entire _bones_ that the other doesn't, and the same is true of brain regions. (The evopsych story about complex adaptations being universal-up-to-sex suggests that sex-specific bones or brain regions should be _possible_, but in a bit of _relative_ good news for antisexism, apparently evolution didn't need to go that far. Um, in humans—a lot of other mammals actually have [a penis bone](https://en.wikipedia.org/wiki/Baculum).)
+
+Maybe this all just looks like supplementary Statistics Details brushed over some basic facts of human existence that everyone already knows? I'm a pretty weird guy, in more ways than one. I am not prototypically masculine. Most men are not like me. If I'm allowed to cherry-pick what measurements to take, I can name ways in which my mosaic is more female-typical than male-typical. (For example, I'm _sure_ I'm above the female mean in [Big Five Neuroticism](https://en.wikipedia.org/wiki/Big_Five_personality_traits).) ["[A] weakly negative correlation can be mistaken for a strong positive one with a bit of selective memory."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences)
+
+But "weird" represents a much larger space of possibilities than "normal", much as [_nonapples_ are a less cohesive category than _apples_](https://www.lesswrong.com/posts/2mLZiWxWKZyaRgcn7/selling-nonapples): a woman trapped in a man's body would be weird, but it doesn't follow that weird men are secretly women, as opposed to some other, _specific_, kind of weird. If you _sum over_ all of my traits, everything that makes me, _me_—it's going to be a point in the _male_ region of the existing, unremediated, genderspace. In the course of _being myself_, I'm going to do more male-typical things than female-typical things, not because I'm _trying_ to be masculine (I'm not), and not because I "identify as" male (I don't—or I wouldn't, if someone could give me a straight answer as to what this "identifying as" operation is supposed to consist of), but because I literally in-fact am male in the same sense that male chimpanzees or male mice are male, whether or not I like it (I don't—or I wouldn't, if I still believed that preference was coherent), and whether or not I _notice_ all the little details that implies (I almost certainly don't).
+
+Okay, maybe I'm _not_ completely over my teenage religion of psychological sex differences denialism?—that belief still feels uncomfortable to put my weight on. I would _prefer_ to believe that there are women who are relevantly "like me" with respect to some fair (not gerrymandered) metric on personspace. But, um ... it's not completely obvious whether I actually know any? (Well, maybe two or three.) When I look around me—most of the people in my robot cult (and much more so if you look at the core of old-timers from the _Overcoming Bias_ days, rather than the greater "community" of today) are male. Most of the people in my open-source programming scene are male. These days, [most of the _women_](/2020/Nov/survey-data-on-cis-and-trans-women-among-haskell-programmers/) in [my open-source programming scene](/2017/Aug/interlude-vii/) are male. Am ... am I not supposed to _notice_?
+
+Is _everyone else_ not supposed to notice? Suppose I got the magical body transformation (with no brain mods beyond the minimum needed for motor control). Suppose I caught the worshipful attention of a young man just like I used to be ("a" young man, as if there wouldn't be _dozens_), who privately told me, "I've never met a woman quite like you." What would I be supposed to tell him? ["There's a _reason_ for that"](https://www.dumbingofage.com/2014/comic/book-5/01-when-somebody-loved-me/purpleandskates/)?
+
+In the comments to [a post about how gender is built on innate sex differences](https://web.archive.org/web/20130216025508/http://lesswrong.com/lw/rp/the_opposite_sex/) (of which I can only link to the Internet Archive copy, the original having been quietly deleted sometime in 2013—I wonder why!), Yudkowsky opined that "until men start thinking of themselves _as men_ they will tend to regard women as defective humans."
+
+From context, it seems like the idea was targeted at men who disdain women as a mysterious Other—but the same moral applies to men who are in ideologically-motivated denial about how male-typical they are, and whether this has implications. [At the time, I certainly didn't want to think of myself _as a man_.](https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way#comment-7ZwECTPFTLBpytj7b) And yet ...
+
+For example. When I read things from the [systematizing–empathizing](https://en.wikipedia.org/wiki/Empathising%E2%80%93systemising_theory)/"men are interested in things, women are interested in people" line of research—which, to be clear that you know that I know, is [only a mere statistical difference at a mere Cohen's _d_ ≈ 0.93](http://unremediatedgender.space/papers/su_et_al-men_and_things_women_and_people.pdf), not an absolute like genitals or chromosomes—my instinctive reaction is, "But, but, that's not _fair_. People _are_ systems, because _everything_ is a system. [What kind of a lame power is empathy, anyway?](https://tvtropes.org/pmwiki/pmwiki.php/Main/WhatKindOfLamePowerIsHeartAnyway)"
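+(An aside, to put that _d_ ≈ 0.93 in perspective: under the textbook idealization that both distributions are normal with equal variance—an assumption, not something the data guarantee—the standard formulas give the overlap between the two distributions, and the probability that a randomly chosen man is more thing-oriented than a randomly chosen woman.)
+
+```python
+from math import erf, sqrt
+
+def phi(x):
+    """Standard normal CDF."""
+    return (1 + erf(x / sqrt(2))) / 2
+
+d = 0.93  # the Cohen's d for things-vs.-people interests cited above
+
+# Probability of superiority: chance that a random draw from the higher
+# distribution exceeds a random draw from the lower one.
+superiority = phi(d / sqrt(2))
+
+# Overlap coefficient: shared area under the two density curves.
+overlap = 2 * phi(-abs(d) / 2)
+
+print(f"P(random man > random woman): {superiority:.2f}")  # ≈ 0.74
+print(f"distribution overlap: {overlap:.2f}")              # ≈ 0.64
+```
+
+(That is: mostly-overlapping distributions, and a difference that's nonetheless unmistakable in aggregate.)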
+
+[But the map is not the territory](https://www.lesswrong.com/posts/np3tP49caG4uFLRbS/the-quotation-is-not-the-referent). We don't have unmediated access to reality beyond [the Veil of Maya](https://web.archive.org/web/20020606121040/http://singinst.org/GISAI/mind/consensus.html); system-ness in the empathizing/systematizing sense is a feature of our _models_ of the world, not the world itself.
+
+So what "Everything is a system" _means_ is, "I _think_ everything is a system."
+
+I think everything is a system ... because I'm male??
+
+(Or whatever the appropriate generalization of "because" is for statistical group differences. The sentence "I'm 5′11″ because I'm male" doesn't seem quite right, but it's pointing to something real.)
+
+I could _assert_ that it's all down to socialization and stereotyping and self-fulfilling prophecies—and I know that _some_ of it is. (Self-fulfilling prophecies [are coordination equilibria](/2020/Jan/book-review-the-origins-of-unfairness/).) But I still want to speculate that the nature of my X factor—the things about my personality that let me write the _specific_ things I do even though I'm [objectively not that smart](/images/wisc-iii_result.jpg) compared to some of my robot-cult friends—is a pattern of mental illness that could realistically only occur in males. (Yudkowsky: ["It seems to me that male teenagers especially have something like a _higher cognitive temperature_, an ability to wander into strange places both good and bad."](https://www.lesswrong.com/posts/xsyG7PkMekHud2DMK/of-gender-and-rationality))
+
+Of course there are women with an analogous story to tell about the nature of their own uniqueness—analogous along _some_ dimensions, if not others—but those aren't _my_ story to tell.