+The argument goes: it might be easy to _imagine_ changing sex and refer to the idea in a short English sentence, but the real physical world has implementation details, and the implementation details aren't filled in by the short English sentence. The human body, including the brain, is an enormously complex integrated organism; there's no [plug-and-play](https://en.wikipedia.org/wiki/Plug_and_play) architecture by which you can just swap your brain into a new body and have everything Just Work without re-mapping the connections in your motor cortex. And even that's not _really_ a sex change, as far as the whole integrated system is concerned—
+
+[TODO: include more blockquote here]
+
+> Remapping the connections from the remapped somatic areas to the pleasure center will ... give you a vagina-shaped penis, more or less. That doesn't make you a woman. You'd still be attracted to girls, and no, that would not make you a lesbian; it would make you a normal, masculine man wearing a female body like a suit of clothing.
+
+But, well ... I mean, um ...
+
+(I still really don't want to be blogging about this, but _somebody has to and no one else will_)
+
+From the standpoint of my secret erotic fantasy, "normal, masculine man wearing a female body like a suit of clothing" is actually a _great_ outcome—the _ideal_ outcome. Let me explain.
+
+The main plot of my secret erotic fantasy accommodates many frame stories, but I tend to prefer those that invoke the [literary genre of science](https://www.lesswrong.com/posts/4Bwr6s9dofvqPWakn/science-as-attire), and posit "technology" rather than "spells" or "potions" as the agent of change, even if it's all ultimately magic (where ["magic" is anything you don't understand](https://www.lesswrong.com/posts/kpRSCH7ALLcb6ucWM/say-not-complexity)).
+
+So imagine having something like [the transporter in _Star Trek_](https://memory-alpha.fandom.com/wiki/Transporter), but you re-materialize with the body of someone else, rather than your original body—a little booth I could walk in, dissolve in a tingly glowy special effect for a few seconds, and walk out looking like (say) [Nana Visitor (circa 1998)](https://memory-alpha.fandom.com/wiki/Kay_Eaton?file=Kay_Eaton.jpg). (In the folklore of [female-transformation erotica](/2016/Oct/exactly-what-it-says-on-the-tin/), this machine is often called the ["morphic adaptation unit"](https://www.cyoc.net/interactives/chapter_115321.html).)
+
+This high-level description of a hypothetical fantasy technology leaves some details unspecified—not just the _how_, but the _what_. What would the indistinguishable-from-magical transformation booth do to my brain? [As a preference-revealing thought experiment](https://www.lesswrong.com/posts/DdEKcS6JcW7ordZqQ/not-taking-over-the-world), what would I _want_ it to do, if I can't change [the basic nature of reality](https://www.lesswrong.com/posts/tPqQdLCuxanjhoaNs/reductionism), but if engineering practicalities weren't a constraint? (That is, I'm allowed to posit any atom-configuration without having to worry about how you would get all the atoms in the right place, but I'm not allowed to posit tethering my immortal soul to a new body, because [souls](https://www.lesswrong.com/posts/u6JzcFtPGiznFgDxP/excluding-the-supernatural) [aren't](https://www.lesswrong.com/posts/7Au7kvRAPREm3ADcK/psychic-powers) [real](https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies).)
+
+The anti-plug-and-play argument makes me confident that it would have to change _something_ about my mind in order to integrate it with a new female body—if nothing else, my unmodified brain doesn't physically _fit_ inside Nana Visitor's skull. ([One meta-analysis puts the sex difference in intracranial volume and brain volume at](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3969295/) a gaping [Cohen's _d_](/2019/Sep/does-general-intelligence-deflate-standardized-effect-sizes-of-cognitive-sex-differences/) ≈ 3.0 and 2.1, respectively, and Visitor doesn't look like she has an unusually large head.)
+
+Fine—we're assuming that difficulty away and stipulating that the magical transformation booth can make the _minimal_ changes necessary to put my brain in a female body, and have it fit, and have all the motor-connection/body-mapping stuff line up so that I can move and talk normally in a body that feels like mine.
+
+I want this more than I can say. But is that _all_ I want? What about all the _other_ sex differences in the brain? Male brains are more lateralized—doing [relatively more communication within hemispheres rather than between](https://www.pnas.org/content/111/2/823); there are language tasks that women and men perform equally well on, but [men's brains use only the _left_ inferior frontal gyrus, whereas women's use both](/papers/shaywitz-et_al-sex_differences_in_the_functional_organization_of_the_brain_for_language.pdf). Women have a relatively thicker corpus callosum; men have a relatively larger amygdala. Fetal testosterone levels [increase the amount of gray matter in posterior lateral orbitofrontal cortex, but decrease the gray matter in Wernicke's area](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3306238/) ...
+
+Do I want the magical transformation technology to fix all that, too?
+
+Do I have _any idea_ what it would even _mean_ to fix all that, without spending multiple lifetimes studying neuroscience?
+
+I think I have just enough language to _start_ to talk about what it would mean. Since sex isn't an atomic attribute, but rather a high-level statistical regularity such that almost everyone can be cleanly classified as "female" or "male" _in terms of_ lower-level traits (genitals, hormone levels, _&c._), what we're abstractly trying to do is take points from the male distribution and map them onto the female distribution in a way that preserves as much structure (personal identity) as possible. My female analogue doesn't have a penis (because then she wouldn't be female), but she is going to speak American English like me and be [85% Ashkenazi like me](/images/ancestry_report.png), because language and autosomal genes don't have anything to do with sex.
+
+The hard part has to do with traits that are meaningfully sexually dimorphic, but not as a discrete dichotomy—where the sex-specific universal designs differ in ways that are _subtler_ than the presence or absence of entire reproductive organs. (Yes, I know about [homology](https://en.wikipedia.org/wiki/Homology_(biology))—and _you_ know what I meant.) We are _not_ satisfied if the magical transformation technology swaps out my penis and testicles for a functioning female reproductive system without changing the rest of my body, because we want the end result to be indistinguishable from having been drawn from the female distribution (at least, indistinguishable _modulo_ having my memories of life as a male before the magical transformation), and a man-who-somehow-magically-has-a-vagina doesn't qualify.
+
+The "obvious" way to do the mapping is to keep the same percentile rank within each trait, but take it with respect to the target sex's distribution. I'm 5′11″ tall, which [puts me at](https://dqydj.com/height-percentile-calculator-for-men-and-women/) the 73rd percentile for American men, about 6/10ths of a standard deviation above the mean. So _presumably_ we want to say that my female analogue is at the 73rd percentile for American women, about 5′5½″.
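The percentile-matching rule is easy to make concrete. A minimal sketch using Python's standard library, assuming normal height distributions with illustrative parameters (roughly 69.2″ ± 2.9″ for American men and 63.7″ ± 2.7″ for American women; the exact figures are assumptions for the example, not from any one dataset):

```python
from statistics import NormalDist

# Assumed, roughly-CDC-like parameters: US adult heights in inches.
men = NormalDist(mu=69.2, sigma=2.9)
women = NormalDist(mu=63.7, sigma=2.7)

def map_to_female(height_in: float) -> float:
    """Map a male height onto the same percentile rank of the female distribution."""
    return women.inv_cdf(men.cdf(height_in))

print(round(men.cdf(71.0), 2))        # percentile rank of 5'11" among men: ~0.73
print(round(map_to_female(71.0), 1))  # female analogue: ~65.4 inches, about 5'5.4"
```

The same composition of CDF and inverse CDF works for any trait where both distributions are (approximately) known.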
+
+You might think this is "unfair": some women—about 7 per 1000—are 5′11″, and we don't want to say they're somehow _less female_ on that account, so why can't I keep my height? The problem is that if we refuse to adjust for every trait for which the female and male distributions overlap (on the grounds that _some_ women have the same trait value as my male self), we don't end up with a result from the female distribution.
+
+The typical point in a high-dimensional distribution is _not_ typical along each dimension individually. [In 100 flips of a biased coin](http://zackmdavis.net/blog/2019/05/the-typical-set/) that lands Heads 0.6 of the time, the _single_ most likely sequence is 100 Heads, but there's only one of those and you're _vanishingly_ unlikely to actually see it. The sequences you'll actually observe will have close to 60 Heads. Each such sequence is individually less probable than the all-Heads sequence, but there are vastly more of them. Similarly, [most of the probability-mass of a high-dimensional multivariate normal distribution is concentrated in a thin "shell" some distance away from the mode](https://www.johndcook.com/blog/2011/09/01/multivariate-normal-shell/), for the same reason. (The _same_ reason: the binomial distribution converges to the normal in the limit of large _n_.)
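The coin-flipping point can be checked directly with a toy calculation (the numbers _n_ = 100 and _p_ = 0.6 come from the text above):

```python
from math import comb

n, p = 100, 0.6  # 100 flips of a coin that lands Heads 60% of the time

def seq_prob(heads: int) -> float:
    # Probability of one *particular* sequence with this many Heads.
    return p**heads * (1 - p)**(n - heads)

def count_prob(heads: int) -> float:
    # Total probability of seeing this many Heads, over all such sequences.
    return comb(n, heads) * seq_prob(heads)

print(seq_prob(100))   # the single most likely sequence (all Heads): ~6.5e-23
print(count_prob(60))  # but ~60-Heads outcomes are what you actually observe: ~0.08
```

Every individual 60-Heads sequence is less probable than the all-Heads sequence, but there are about 10²⁸ of them, which is what tips the balance.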
+
+Statistical sex differences are like flipping two different collections of coins with different biases, where the coins represent various traits. Learning the outcome of any individual flip doesn't tell you which collection the coin came from, but [if we look at the aggregation of many flips, we can get _godlike_ confidence](https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy-1) as to which collection we're looking at.
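The "aggregation of many flips" can be sketched as a log-likelihood-ratio test. The biases 0.55 and 0.45 below are made-up illustrative values, not estimates of any real trait:

```python
import random
from math import log

random.seed(0)
P_A, P_B = 0.55, 0.45  # two collections of coins with slightly different biases (made up)

def classify(flips):
    # Log-likelihood ratio: positive total evidence favors collection A.
    llr = sum(log(P_A / P_B) if heads else log((1 - P_A) / (1 - P_B)) for heads in flips)
    return "A" if llr > 0 else "B"

def accuracy(n_flips, trials=2000):
    correct = 0
    for _ in range(trials):
        truth = random.choice(["A", "B"])
        bias = P_A if truth == "A" else P_B
        flips = [random.random() < bias for _ in range(n_flips)]
        correct += classify(flips) == truth
    return correct / trials

print(accuracy(1))    # one flip: barely better than chance
print(accuracy(500))  # many flips aggregated: near-certainty
```

Each flip is nearly uninformative on its own; the certainty comes entirely from summing the evidence.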
+
+A single-variable measurement like height is like a single coin: unless the coin is _very_ biased, one flip can't tell you much about the bias. But there are lots of things about people for which it's not that they can't be measured, but that the measurements require _more than one number_—which correspondingly offer more information about the distribution generating them.
+
+[TODO (somewhere around-ish this section): chromosomes at the root of the causal graph: https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models ]
+
+Take faces. People are [verifiably very good at recognizing sex from (hair covered, males clean-shaven) photographs of people's faces](/papers/bruce_et_al-sex_discrimination_how_do_we_tell.pdf) (96% accuracy, which is the equivalent of _d_ ≈ 3.5), but we don't have direct introspective access into what _specific_ features our brains are using to do it; we just look, and _somehow_ know. The differences are real, but it's not a matter of any single, simple measurement you could perform with a ruler (like the distance between someone's eyes). Rather, it's a high-dimensional _pattern_ in many measurements you could take with a ruler, no one of which is definitive. [Covering up the nose makes people slower and slightly worse at sexing faces, but people don't do better than chance at guessing sex from photos of noses alone](/papers/roberts-bruce-feature_saliency_in_judging_the_sex_and_familiarity_of_faces.pdf).
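The claim that 96% accuracy is "the equivalent of _d_ ≈ 3.5" follows from a standard equal-variance signal-detection model; this is a back-of-the-envelope conversion, not the cited paper's own analysis:

```python
from statistics import NormalDist

# Equal-variance model: two unit normals d apart, optimal threshold midway
# between them, so accuracy = Phi(d/2) and therefore d = 2 * Phi^-1(accuracy).
accuracy = 0.96  # from the face-sexing study cited above
d = 2 * NormalDist().inv_cdf(accuracy)
print(round(d, 1))  # ~3.5
```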
+
+Notably, for _images_ of faces, we actually _do_ have transformation technology! (Not "magical", because we know how it works.) AI techniques like [generative adversarial networks](https://arxiv.org/abs/1812.04948) and [autoencoders](https://towardsdatascience.com/generating-images-with-autoencoders-77fd3a8dd368) can learn the structure of the distribution of facial photographs, and use that knowledge to synthesize faces from scratch (as demonstrated by [_thispersondoesnotexist.com_](https://thispersondoesnotexist.com/))—or [do things like](https://arxiv.org/abs/1907.10786) sex transformation (as demonstrated by [FaceApp](https://www.faceapp.com/), the _uniquely best piece of software in the world_).
+
+If you let each pixel vary independently, the space of possible 1024x1024 images is 1,048,576-dimensional, but the vast hypermajority of those images aren't photorealistic human faces. Letting each pixel vary independently is the wrong way to think about it: changing the lighting or pose changes a lot of pixels in what humans would regard as images of "the same" face. So instead, our machine-learning algorithms learn a [compressed](https://www.lesswrong.com/posts/ex63DPisEjomutkCw/msg-len) representation of what makes the tiny subspace (relative to images-in-general) of _faces in particular_ similar to each other. That [latent space](https://towardsdatascience.com/understanding-latent-space-in-machine-learning-de5a7c687d8d) is a lot smaller (say, 512 dimensions), but still rich enough to embed the distinctions that humans notice: [you can find a hyperplane that separates](https://youtu.be/dCKbRCUyop8?t=1433) smiling from non-smiling faces, or glasses from no-glasses, or young from old, or different races—or female and male. Sliding along the [normal vector](https://en.wikipedia.org/wiki/Normal_(geometry)) to that [hyperplane](https://en.wikipedia.org/wiki/Hyperplane) gives the desired transformation: producing images that are "more female" (as the model has learned that concept) while keeping "everything else" the same.
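As a toy illustration of the slide-along-the-normal-vector idea, here is a made-up 8-dimensional "latent space" where one coordinate carries the attribute, using the difference of class means as a crude stand-in for a learned hyperplane normal (real systems learn both the latent space and the boundary; everything here is synthetic):

```python
import random

random.seed(1)
DIM = 8  # toy latent dimension (real models use hundreds)

def sample_code(has_attribute: bool):
    # Synthetic latent codes: the attribute shifts coordinate 0 only.
    shift = 1.0 if has_attribute else -1.0
    return [random.gauss(shift if i == 0 else 0.0, 1.0) for i in range(DIM)]

codes = [(sample_code(label), label) for label in (True, False) for _ in range(500)]

def mean(vectors):
    return [sum(coord) / len(vectors) for coord in zip(*vectors)]

# Crude stand-in for the separating hyperplane's normal: difference of class means.
normal = [a - b for a, b in zip(mean([z for z, l in codes if l]),
                                mean([z for z, l in codes if not l]))]

def slide(z, alpha):
    """Move a latent code along the attribute direction by alpha."""
    return [zi + alpha * ni for zi, ni in zip(z, normal)]

z = sample_code(False)
z_more = slide(z, 1.0)  # "more attribute"; the other coordinates barely change
```

The recovered direction concentrates on the attribute-bearing coordinate, which is why sliding along it changes the attribute while (approximately) keeping "everything else" the same.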
+
+Two-dimensional _images_ of people are _vastly_ simpler than the actual people themselves in the real physical universe. But _in theory_, a lot of the same _mathematical principles_ would apply to hypothetical future nanotechnology-wielding AI systems that could, like the AI in "Failed Utopia #4-2", synthesize a human being from scratch (this-person-_didn't_-exist-dot-com?), or do a real-world sex transformation (PersonApp?)—and the same statistical morals apply to reasoning about sex differences in psychology and (which is to say) the brain.
+
+Daphna Joel _et al._ [argue that](https://www.pnas.org/content/112/50/15468) human brains are "unique 'mosaics' of features" that cannot be categorized into distinct _female_ and _male_ classes, because it's rare for brains to be "internally consistent"—female-typical or male-typical along _every_ dimension. It's true and important that brains aren't _discretely_ sexually dimorphic the way genitals are, but as [Marco Del Giudice _et al._ point out](http://cogprints.org/10046/1/Delgiudice_etal_critique_joel_2015.pdf), the "cannot be categorized into two distinct classes" claim seems false in an important sense. The lack of "internal consistency" in Joel _et al._'s sense is exactly the behavior we expect from multivariate normal-ish distributions with different-but-not-vastly-different means. (There aren't going to be many traits where the sexes are like, _four_ or whatever standard deviations apart.) It's just like how sequences of flips of a very Heads-biased and very Tails-biased coin are going to be unique "mosaics" of Heads and Tails, but pretty distinguishable with enough flips—and indeed, with the right stats methodology, [MRI scans can predict sex at 96.8% accuracy](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6374327/).
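This rebuttal is easy to reproduce in simulation. With made-up numbers (20 traits, each a modest Cohen's _d_ of 0.5 apart), "internal consistency" is vanishingly rare even though aggregate classification is quite reliable:

```python
import random

random.seed(2)
DIMS, D = 20, 0.5  # 20 traits, each a modest Cohen's d of 0.5 apart (made-up numbers)

def person(male: bool):
    shift = D if male else 0.0
    return [random.gauss(shift, 1.0) for _ in range(DIMS)]

people = [(person(m), m) for m in (True, False) for _ in range(2000)]
midpoint = D / 2

# "Internally consistent" males: on the male-typical side of *every* trait.
consistent = sum(all(t > midpoint for t in traits) for traits, m in people if m)

# Classify by summing over all traits (the likelihood-ratio statistic here).
correct = sum((sum(traits) > DIMS * midpoint) == m for traits, m in people)

print(consistent / 2000)      # "pure" mosaics are vanishingly rare
print(correct / len(people))  # but aggregate classification is reliable (~0.87)
```

Almost everyone is a "mosaic" trait-by-trait, and yet summing over all the traits sorts the two groups well—both facts fall out of the same overlapping-distributions picture.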
+
+Sex differences in the brain are like sex differences in the skeleton: anthropologists can tell female and male skeletons apart (the [pelvis is shaped differently](https://johnhawks.net/explainer/laboratory/sexual-dimorphism-pelvis), for obvious reasons), and [machine-learning models can see very reliable differences that human radiologists can't](/papers/yune_et_al-beyond_human_perception_sexual_dimorphism_in_hand_and_wrist_radiographs.pdf), but neither sex has entire _bones_ that the other doesn't, and the same is true of brain regions. (The evopsych story about complex adaptations being universal-up-to-sex suggests that sex-specific bones or brain regions should be _possible_, but apparently evolution didn't need to go that far. Good news for antisexism!—relatively speaking.)
+
+Maybe this all just looks like supplementary Statistics Details brushed over basic facts of human existence that everyone already knows? I'm a pretty weird guy, in more ways than one. I am not prototypically masculine. Most men are not like me. If I'm allowed to cherry-pick what measurements to take, I can name ways in which my mosaic is more female-typical than male-typical. (For example, I'm _sure_ I'm above the female mean in [Big Five Neuroticism](https://en.wikipedia.org/wiki/Big_Five_personality_traits).) ["[A] weakly negative correlation can be mistaken for a strong positive one with a bit of selective memory."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences) But "weird" represents a much larger space of possibilities than "normal", much as [_nonapples_ are a less cohesive category than _apples_](https://www.lesswrong.com/posts/2mLZiWxWKZyaRgcn7/selling-nonapples). If you _sum over_ all of my traits, everything that makes me, _me_—it's going to be a point in the _male_ region of the existing, unremediated, genderspace.
+
+Okay, maybe I'm _not_ completely over my teenage religion of psychological sex differences denialism?—that belief still feels uncomfortable to put my weight on. I want to believe that there are women who are relevantly "like me" with respect to some fair (not gerrymandered) metric on personspace. But, um ... it's not completely obvious whether I actually know any? Most of the people in my robot cult (and much more so if you look at the core of old-timers from the _Overcoming Bias_ days, rather than the greater Berkeley "community" of today) are male. Most of the people in my open-source programming scene are male. These days, [most of the _women_ in my open-source programming scene are male.](/2017/Aug/interlude-vii/) Am I not supposed to _notice_? I could _assert_ that it's all down to socialization and self-fulfilling prophecies—and I know that _some_ of it is. (Self-fulfilling prophecies [are coordination equilibria](/2020/Jan/book-review-the-origins-of-unfairness/).) But I can't assert _with a straight face_ that all the gaps will vanish after the revolution, because _I've read the literature_ and can tell you several observations about chimps and [congenital adrenal hyperplasia](/images/cah_diffs_table.png) that make that seem _unlikely_.
+
+I want to speculate that the nature of my X factor—the things about my personality that let me write the things I do even though I'm [objectively not that smart](/images/wisc-iii_result.jpg) compared to some of my robot-cult friends—is a pattern of mental illness that could realistically only occur in males.
+
+I was once told by a very smart friend (who, unlike me, is not a religious fanatic), "Boys like games with challenges and points; girls like games with characters and stories."
+
+I said, "I like characters and stories! I think."
+
+He said, "I know, but at the margin, you seem suboptimally far in the challenges and points direction. But that's fine; that's what women are for."
+
+And what evidence could I point to, to show him that he's _bad and wrong_ for saying that, if he's not already religiously required to believe it?