Title: Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems
Date: 2021-02-15 11:00
Category: commentary
Tags: autogynephilia, bullet-biting, cathartic, Eliezer Yudkowsky, Scott Alexander, epistemic horror, my robot cult, personal, sex differences, Star Trek, Julia Serano
Status: draft

> _I'll write my way out
> Write everything down, far as I can see
> I'll write my way out
> Overwhelm them with honesty
> This is the eye of the hurricane
> This is the only way I can protect my legacy_
>
> —"Hurricane", _Hamilton_

So, as I sometimes allude to, I've spent basically my entire adult life in this insular intellectual subculture that was founded in the late 'aughts to promulgate an ideal of _systematically correct reasoning_—general methods of thought that result in true beliefs and successful plans—and, [incidentally](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes), to use these methods of systematically correct reasoning to prevent superintelligent machines from [destroying all value in the universe](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile). Lately I've been calling it my "robot cult" (a phrase [due to Dale Carrico](https://amormundi.blogspot.com/2011/08/ten-reasons-to-take-seriously.html))—the pejorative is partially [ironically affectionate](https://www.lesswrong.com/posts/gBma88LH3CLQsqyfS/cultish-countercultishness), and partially an expression of betrayal-trauma acquired from that time almost everyone I [used to trust](https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science) insisted on, on ...

Well. That's a _long story_. To _start_, I want to explain how my robot cult's foundational texts had an enormous influence on my self-concept in relation to sex and gender.

It all started in summer 2007 (I was nineteen years old), when I came across _Overcoming Bias_, a blog on the theme of how to achieve more accurate beliefs. (I don't remember exactly how I was referred, but I think it was likely to have been [a link from Megan McArdle](https://web.archive.org/web/20071129181942/http://www.janegalt.net/archives/009783.html), then writing as "Jane Galt" at _Asymmetrical Information_.) [Although](http://www.overcomingbias.com/author/hal-finney) [technically](http://www.overcomingbias.com/author/james-miller) [a](http://www.overcomingbias.com/author/david-j-balan) [group](http://www.overcomingbias.com/author/andrew) [blog](http://www.overcomingbias.com/author/anders-sandberg), the vast majority of posts on _Overcoming Bias_ were by Robin Hanson or Eliezer Yudkowsky. I was previously acquainted in passing with Yudkowsky's [writing about future superintelligence](https://web.archive.org/web/20200217171258/https://yudkowsky.net/obsolete/tmol-faq.html). (I had [mentioned him in my Diary once in 2005](/ancillary/diary/42/), albeit without spelling his name correctly.) Yudkowsky was now using _Overcoming Bias_ and the medium of blogging [to generate material for a future book about rationality](https://www.lesswrong.com/posts/vHPrTLnhrgAHA96ko/why-i-m-blooking).
Hanson's posts I could take or leave, but Yudkowsky's sequences of posts about rationality (coming out almost-daily through early 2009, eventually totaling hundreds of thousands of words) were _amazingly great_, [drawing on](https://www.lesswrong.com/posts/tSgcorrgBnrCH8nL3/don-t-revere-the-bearer-of-good-info) the [established knowledge of fields](https://www.lesswrong.com/posts/ASpGaS3HGEQCbJbjS/eliezer-s-sequences-and-mainstream-academia) from [cognitive](https://www.lesswrong.com/posts/2ftJ38y9SRBCBsCzy/scope-insensitivity) [psychology](https://www.lesswrong.com/posts/R8cpqD3NA4rZxRdQ4/availability) to [evolutionary biology](https://www.lesswrong.com/s/MH2b8NfWv22dBtrs8) to explain the [mathematical](https://www.readthesequences.com/An-Intuitive-Explanation-Of-Bayess-Theorem) [principles](https://www.readthesequences.com/A-Technical-Explanation-Of-Technical-Explanation) [governing](https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can-exempt-you-from-rationality-s-laws) _how intelligence works_—[the reduction of "thought"](https://www.lesswrong.com/posts/p7ftQ6acRkgo6hqHb/dreams-of-ai-design) to [_cognitive algorithms_](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms). Intelligent systems [that use](https://arbital.greaterwrong.com/p/executable_philosophy) [evidence](https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence) to construct [predictive](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences) models of the world around them—that have "true" "beliefs"—can _use_ those models to compute which actions will best achieve their goals. You simply [won't believe how much this blog](https://www.lesswrong.com/posts/DXcezGmnBcAYL2Y2u/yes-a-blog) will change your life; I would later frequently [joke](https://en.wiktionary.org/wiki/ha_ha_only_serious) that Yudkowsky rewrote my personality over the internet.

(The blog posts later got edited and collected into a book, [_Rationality: From AI to Zombies_](https://www.amazon.com/Rationality-AI-Zombies-Eliezer-Yudkowsky-ebook/dp/B00ULP6EW2), but I continue to say "the Sequences" because I _hate_ the gimmicky "AI to Zombies" subtitle—it makes it sound like a commercial book optimized to sell copies, rather than something to corrupt the youth, competing for the same niche as the Bible or the Koran—_the book_ that explains what your life should be about.)

There are a few things about me that I need to explain before I get into the topic-specific impact the blog had on me.

The first thing—the chronologically first thing. Ever since I was thirteen or fourteen years old—

(and I _really_ didn't expect to be blogging about this eighteen years later)

(I _still_ don't want to be blogging about this, but unfortunately, it actually turns out to be central to the intellectual–political project I've been single-mindedly focused on for the past four years because [somebody has to and no one else will](https://unsongbook.com/chapter-6-till-we-have-built-jerusalem/))

—my _favorite_—and basically only—masturbation fantasy has always been some variation on me getting magically transformed into a woman.

I ... need to write more about the phenomenology of this. In the meantime, just so you know what I'm talking about, the relevant TVTrope is ["Man, I Feel Like a Woman."](https://tvtropes.org/pmwiki/pmwiki.php/Main/ManIFeelLikeAWoman) Or search "body swap" on PornHub.
Or check out my few, circumspect contributions to [the popular genre of](/2016/Oct/exactly-what-it-says-on-the-tin/) captioned-photo female transformation erotica (everyone is wearing clothes, so these might be "safe for work" in a narrow technical sense, if not a moral one): [1](/ancillary/captions/dr-equality-and-the-great-shift/) [2](/ancillary/captions/the-other-side-of-me/) [3](/ancillary/captions/the-impossible-box/) [4](/ancillary/captions/de-gustibus-non-est/).

(The first segment of my pen surname is a legacy of middle-school friends letting me borrow some of the [Ranma ½](https://en.wikipedia.org/wiki/Ranma_%C2%BD) graphic novels, about a young man named Ranma Saotome cursed ("cursed"??) to transform into a woman on exposure to cold water. This was just _before_ puberty kicked in for me, but I have no way of computing the counterfactual to know whether that had a causal influence.)

So, there was that erotic thing, which I was pretty ashamed of at the time, and _of course_ knew that I must never, ever tell a single soul about. (It would have been about three years since the fantasy started that I even worked up the bravery to [tell my Diary about it](/ancillary/diary/53/#first-agp-confession).)

But within a couple years, I also developed this beautiful pure sacred self-identity thing that would persist indefinitely, where I started having a lot of _non_-sexual thoughts about being female. Just—little day-to-day thoughts, little symbolic gestures. Like when I would [write in my pocket notebook in the persona of my female analogue](/images/crossdreaming_notebook_samples.png). Or when I would practice swirling the descenders on all the lowercase letters that had descenders [(_g_, _j_, _p_, _y_, _z_)](/images/handwritten_phrase_jazzy_puppy.jpg) because I thought it made my handwriting look more feminine.

Or the time when track and field practice split up into boys and girls, and I ironically muttered under my breath, "Why did I even join this team?—boys, I mean."

Or when it was time to order sheets to fit on the dorm beds at the University in Santa Cruz, and I deliberately picked out the pink-with-flowers design on principle.

Or how I was proud to be the kind of guy who bought Julia Serano's _Whipping Girl: A Transsexual Woman on Sexism and the Scapegoating of Femininity_ when it was new in 2007, and [who would rather read from Evelyn Fox Keller's _Reflections on Gender and Science_ than](http://zackmdavis.net/blog/2013/03/tradition/) watch [Super Bowl XLII](https://en.wikipedia.org/wiki/Super_Bowl_XLII).

Or how, at University, I tried to go by my [first-and-middle-initials](https://en.wikipedia.org/wiki/List_of_literary_initials) because I wanted a gender-neutral [byline](https://en.wikipedia.org/wiki/Byline), and I wanted what people called me in real life to be the same as my byline—even if, obviously, I didn't expect people to not-notice which sex I am in real life because _that would be crazy_.
(This attempted nickname change actually turned out to be a terrible idea that ended up causing me a huge amount of pointless identity-crisis psychological pain—my particular pair of real-life initials never really "felt like a name" even to me (as contrasted to something like "C.J.", which feels like a name because it has a _J_ in it); I turned out to be incredibly uncomfortable with different people knowing me by different names, and didn't have the guts to nag everyone in my life to switch for something that didn't feel like a name even to me; _and_ the "gender-neutral byline" rationale almost certainly never held up in practice because my real-life first initial is a [high-Scrabble-score letter](https://en.wikipedia.org/wiki/Scrabble_letter_distributions#English) that begins one popular boy name and zero popular girl names. But it was the _principle!_)

Or how I stopped getting haircuts and grew my beautiful–beautiful ponytail. (This turned out to be a great idea and I wish I had thought of it sooner.)

Or how one of the [little song-fragments I used to write in my head](/tag/music/) went—

> _Sometimes I sigh because I'll never get rich
> And there's no magic so I can't be a witch
> And that I must enjoy the scorn of the world
> Just 'cause I'm butch and I'm a tranny girl_

Or the time I felt proud when my Normal American Girl coworker at the supermarket in 'aught-nine said that she had assumed I was gay. (I'm not, but the fact that Normal American Girl thought so meant that I was successfully unmasculine.)

And so on _et cetera ad infinitum_. This has been a very persistent _thing_ for me.

The beautiful pure sacred self-identity thing doesn't _feel_ explicitly erotic. The thing I did during the day in class, writing in my notebook about being a girl, was _very different_ from the thing I did in my room at night, _visualizing_ girls with this abstract sense of "But what if that were _me_?" while furiously masturbating. The former activity was my beautiful pure happy romantic daydream, whereas the latter activity was not beautiful or pure at all!

Now I am not a cognitive scientist, and can't claim to _know_ exactly what my beautiful pure sacred self-identity thing is, or where it comes from—that's [not the kind of thing I would expect people to _know_ from introspection alone](/2016/Sep/psychology-is-about-invalidating-peoples-identities/). But it has always seemed like a pretty obvious guess that there must have been _some sort of causal relationship_ between the erotic thing, and the beautiful pure sacred self-identity thing, even if the two things don't _feel_ the same: the overlap in subject matter is too much to be a coincidence. And the erotic thing definitely came _first_.

Maybe this story reads differently in 2021 from how it was to live in the 'aughts? I think that teenage boys in the current year having the kind of feelings I was having then, upon referencing or hinting at the beautiful pure sacred self-identity thing—

(and the beautiful pure sacred self-identity thing is _much_ easier to talk about than the erotic thing)

(I mean, the beautiful pure sacred self-identity thing is much harder to talk about _clearly_, but talking about it _un_-clearly is less shameful and requires much less bravery)

—are immediately provided with "Oh, that means you're not a cis boy; you're a trans girl" as the definitive explanation.

But it was a different time, then.
Of course I had _heard of_ transsexualism as a thing, in the form of the "woman trapped in a man's body" trope, but it wasn't something I expected to actually encounter in real life. (I understood my "tranny girl" song to reflect an idle fantasy, not a legitimate life plan.)

At the time, I had _no reason to invent the hypothesis_ that I might somehow literally be a woman in some unspecified psychological sense. I knew I was a boy _because_ boys are the ones with penises. That's what the word _means_. I was a boy who had a weird _sex fantasy_ about being a girl. That was just the obvious ordinary straightforward plain-language description of the situation. It _never occurred to me_ to couch it in the language of "dysphoria", or actually possessing some innate "gender". The beautiful pure sacred self-identity thing was about identifying _with_ women, not identifying _as_ a woman—[roughly analogous to how](/2017/Jul/interlude-vi/) a cat lover might be said to "identify with" cats, without claiming to somehow _be_ a cat, because _that would be crazy_.

[It was while browsing _Wikipedia_ in 2006 that I encountered the obvious and perfect word for my thing](/2017/Feb/a-beacon-through-the-darkness-or-getting-it-right-the-first-time/)—_autogynephilia_, from the Greek for "[love of](https://en.wiktionary.org/wiki/-philia) [oneself as](https://en.wiktionary.org/wiki/auto-#English) [a woman](https://en.wiktionary.org/wiki/gyno-)." I was actually surprised that it turned out to have been coined in the context of a theory (by clinical sexual psychologist Ray Blanchard) that it was the root cause of one of two types of male-to-female transsexualism.

You see, a very important feature of my gender-related thinking at the time was that I was growing very passionate about—well, in retrospect I call it _psychological-sex-differences denialism_, but at the time I called it _antisexism_. Where sometimes people in the culture would make claims about how women and men are psychologically different, and of course I knew this was _bad and wrong_. Therefore the very idea of transsexualism was somewhat suspect insofar as it necessarily depends on the idea that women and men are psychologically different (in order for it to be possible to be in the "wrong" body).

So while I was certainly glad to learn that _there's a word for it_, an obvious and perfect word for _my thing_, I mostly just stole the word (whose referent and meaning I thought was self-explanatory from the common Greek roots) without paying any further attention to this Blanchard theory or the idea that _I_ might somehow be transgender.

So, you know, as part of my antisexism, I read a lot about feminism. I remember checking out [_The Feminine Mystique_](https://en.wikipedia.org/wiki/The_Feminine_Mystique) and [Susan Faludi's _Backlash_](https://en.wikipedia.org/wiki/Backlash:_The_Undeclared_War_Against_American_Women) from the school library. Before I found my internet-home on _Overcoming Bias_, I would read the big feminist blogs—[_Pandagon_](https://web.archive.org/web/20070630211101/http://pandagon.net/), [_Feministe_](https://web.archive.org/web/20080901002058/http://www.feministe.us/blog), [_Feministing_](https://web.archive.org/web/20080605182529/http://www.feministing.com/). The one time I special-ordered a book at the physical Barnes & Noble before I turned 18 and got my own credit card and could order books online, it was [_Feminist Interpretations of Ayn Rand_](https://www.psupress.org/books/titles/0-271-01830-5.html).
(In retrospect, it's notable how _intellectualized_ all of this was—my pro-feminism was an ideological matter between me and my books, rather than arising from any practical need. It's not like I had disproportionately female friends or whatever—I mean, to the extent that I had any friends and not just books.)

It also seems like a pretty obvious guess that there must have been _some sort of causal relationship_ between my antisexism and the erotic and beautiful-pure-sacred-self-identity things. True, the [blank slate doctrine](/2020/Apr/book-review-human-diversity/#blank-slate) has been ideologically fashionable my entire life. In the sense that progressivism has been [likened to a nontheistic state religion](https://www.unqualified-reservations.org/2007/09/how-dawkins-got-pwned-part-1/)—uh, bear with me for a moment—I was a _very_ religious teenager.

I have a vague memory of being in the Crown College library at the University in Santa Cruz in 2007, reading Robert Wright's _The Moral Animal_ (because it had been on [Yudkowsky's old book-recommendations list](https://web.archive.org/web/20200118114912/https://yudkowsky.net/obsolete/bookshelf.html)), and being _aghast_ at how openly, brazenly _sexist_ it was.

(That is, with respect to what I considered _sexist_ at the time. I wish there were some way to know what my teenage self would think of my current self's writing, which is at least as "bad" as Wright and plausibly worse. Maybe if the whole benevolent-superintelligence thing my robot cult always talks about ever works out, I'll be able to kick off a limited-scope [ancestor-simulation](https://www.simulation-argument.com/simulation.html) to find out. In the meantime, if you're offended, I'd love it if you could let me know in the comments exactly how much and why! [Personal identity doesn't actually exist](https://www.lesswrong.com/posts/RLScTpwc5W2gGGrL9/identity-isn-t-in-specific-atoms); humans growing up in the same cultural tradition can be seen as being drawn from a similar _distribution_ as my teenage self.)

That overwhelming feeling of cold horror and hatred at _the enemy revealed_—that, I conjecture, is what religious people feel when encountering a heretical text for the first time. (In _principle_, a sufficiently advanced neuroscience would be able to confirm that it is the same emotion, as a matter of biological fact.) The social–psychological need to [avoid the belief's real weak points](https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points) is why the "religion" characterization makes sense, even if the claim that psychological sex differences are fake isn't a [_supernatural_](https://www.lesswrong.com/posts/u6JzcFtPGiznFgDxP/excluding-the-supernatural) one.

But quasi-religious ideological fervor aside, there was presumably a _reason_ I cared so much about being a good pro-feminist _specifically_, and hardly spent any time at all thinking about other dimensions of social justice, like race or class. And I think the reason is because, because ...

Well. The reason I'm blogging this story at all is because I'm scared that in order to finish that sentence in the current year and be understood, I'd have to say, "because I was trans." And with respect to what the words mean in the current year, it's true. But that's not how I think of it, then or now.

It's because I was _straight_. Because I loved women, and wanted to do right by them.
It's an _identificatory_ kind of love—loving women as an extension of the self, rather than a mysterious, unfathomable [Other](https://en.wikipedia.org/wiki/The_Second_Sex#Volume_One). But that's not unusual, is it?—or it _shouldn't_ be. I would have assumed that guys who can't relate to this are probably just sexist.

------

Anyway, that's some background about where I was at, personally and ideologically, _before_ I fell in with this robot cult. My ideological commitment to psychological-sex-differences denialism made me uncomfortable when the topic of sex differences happened to come up on the blog—which wasn't particularly often at all, but in such a _vast_ body of work as the Sequences, it did happen to come up a few times (and those few times are the subject of this blog post).

For example, as part of [an early explanation of why the values we would want to program into an artificial superintelligence don't reduce to any one simple principle](https://www.lesswrong.com/posts/NnohDYHNnKDtbiMyp/fake-utility-functions), Yudkowsky remarks that "the love of a man for a woman, and the love of a woman for a man, have not been cognitively derived from each other or from any other value."

From the perspective of axiomatic antisexism that I held at the time, this assertion is cringe-inducing. Of course most people are straight, but is it not all the _same love_?

I wasn't ready to hear it then, but—I mean, probably not? So, for the _most_ part, all humans are extremely similar: [as Yudkowsky would soon write about](https://www.lesswrong.com/posts/Cyj6wQLW6SeF6aGLy/the-psychological-unity-of-humankind) [(following Leda Cosmides and John Tooby)](https://www.cep.ucsb.edu/papers/pfc92.pdf), complex functional adaptations have to be species-universal in order to not get scrambled during meiosis. As a toy example, if some organelle gets assembled from ten genes, those ten alleles _all_ have to be nearly universal in the population—if each only had a frequency of 0.9, then the probability of getting them all right would only be 0.9¹⁰ ≈ 0.349. If allele H [epistatically](https://en.wikipedia.org/wiki/Epistasis) only confers a fitness advantage when allele G at some other locus is already present, then G has to already be well on its way to fixation in order for there to be appreciable selective pressure for H. Evolution, feeding on variation, uses it up. Complicated functionality that requires multiple genes working in concert can only accrete gradually as each individual piece reaches fixation in the entire population, resulting in an intricate species-universal _design_: just about everyone has 206 bones, a liver, a [parietal lobe](https://en.wikipedia.org/wiki/Parietal_lobe), _&c_.

In this way (contrary to the uninformed suspicions of those still faithful to the blank slate), evolutionary psychology actually turns out to be an impressively antiracist discipline: maybe individual humans can differ in small ways like personality, or [ancestry-groups in small ways](/2020/Apr/book-review-human-diversity/#ancestries) like skin color, but these are, and _have_ to be, "shallow" low-complexity variations on the same basic human design; new _complex_ functionality would require speciation.

This luck does not extend to antisexism. If the genome were a computer program, it would have `if female { /* ... */ } else if male { /* ... */ }` conditional blocks, and inside those blocks, you can have complex sex-specific functionality.
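(As a quick check of the toy allele arithmetic above—where the ten-gene organelle and the 0.9 frequency are, again, made-up illustrative numbers:)

```python
# Probability that ten independent alleles, each at frequency 0.9, all
# co-occur in one genome—i.e., that the toy organelle is assembled intact.
print(round(0.9 ** 10, 3))  # => 0.349
```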
By default, selection pressures on one sex tend to drag the other along for the ride—men have nipples because there's no particular reason for them not to—but in those cases where it was advantageous in the environment of evolutionary adaptedness for females and males to do things _differently_, sexual dimorphism can evolve (slowly—[more than one and a half orders of magnitude slower than monomorphic adaptations](/papers/rogers-mukherjee-quantitative_genetics_of_sexual_dimorphism.pdf), in fact). The evolutionary theorist Robert Trivers wrote, "One can, in effect, treat the sexes as if they were different species, the opposite sex being a resource relevant to producing maximum surviving offspring" (!). There actually isn't one species-universal design—it's _two_ designs.

If you're willing to admit to the possibility of psychological sex differences _at all_, you have to admit that sex differences in the parts of the mind that are _specifically about mating_ are going to be a prime candidate. (But by no means the only one—different means of reproduction have different implications for [life-history strategies](https://en.wikipedia.org/wiki/Life_history_theory) far beyond the act of mating itself.) Even if there's a lot of "shared code" in how love-and-attachment works in general, there are also going to be specific differences that were [optimized for](https://www.lesswrong.com/posts/8vpf46nLMDYPC6wA4/optimization-and-the-intelligence-explosion) facilitating males impregnating females. In that sense, the claim that "the love of a man for a woman, and the love of a woman for a man, have not been cognitively derived from each other" just seems commonsensically _true_.

I guess if you _didn't_ grow up with a quasi-religious fervor for psychological sex differences denialism, this whole theoretical line of argument about evolutionary psychology doesn't seem world-shatteringly impactful?—maybe it just looks like supplementary Science Details brushed over some basic facts of human existence that everyone knows. But if you _have_ built your identity around [quasi-religious _denial_](/2020/Apr/peering-through-reverent-fingers/) of certain basic facts of human existence that everyone knows (if not everyone [knows that they know](https://www.lesswrong.com/posts/CqyJzDZWvGhhFJ7dY/belief-in-belief)), getting forced out of it by sufficient weight of Science Details [can be a pretty rough experience](https://www.greaterwrong.com/posts/XM9SwdBGn8ATf8kq3/c/comment/Zv5mrMThBkkjDAqv9).

My hair-trigger antisexism was sort of lurking in the background of some of my comments while the Sequences were being published (though, again, it wasn't relevant to _most_ posts, which were just about cool math and science stuff that had no avenue whatsoever for being corrupted by gender politics). The term "social justice warrior" wasn't yet popular, but I definitely had the SJW-alike mindset (nurtured from my time lurking the feminist blogosphere) of being preoccupied with the badness and wrongness of people who are wrong and bad (_i.e._, sexist), rather than trying to [minimize the expected squared error of my probabilistic predictions](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception).
Another one of the little song-fragments I wrote in my head a few years earlier (which I mention for its being representative of my attitude at the time, rather than it being notable in itself), mentioned an advice columnist, [Amy Alkon](http://www.advicegoddess.com/), syndicated in the _Contra Costa Times_ of my youth, who would sometimes give dating advice based on a pop-evopsych account of psychological sex differences—the usual fare about women seeking commitment and men seeking youth and beauty. My song went—

> _I hope Amy Alkon dies tonight
> So she can't give her bad advice
> No love or value save for evolutionary psych_
>
> _I hope Amy Alkon dies tonight
> Because the world's not girls and guys
> Cave men and women fucking 'round the fire in the night_

Looking back with the outlook later acquired from my robot cult, this is abhorrent. You don't _casually wish death_ on someone just because you disagree with their theory of psychology! Even if it wasn't in a spirit of personal malice (this was a song I sung to myself, not an actual threat directed to Amy Alkon's inbox), the sentiment just _isn't done_. But at the time, I _didn't notice there was anything wrong with my song_. I hadn't yet been socialized into the refined ethos of "False ideas should be argued with, but heed that we too may have ideas that are false".

[TODO: this denial was in the background in "The Opposite Sex" (https://web.archive.org/web/20130216025508/http://lesswrong.com/lw/rp/the_opposite_sex/), Yud on "men should think of themselves as men" / "I often wish some men/women would appreciate"]

[TODO: Me pretending to be dumb about someone not pretending to be dumb about my initials https://www.overcomingbias.com/2008/04/inhuman-rationa.html ; contrast that incident (it's not an accident that he guessed right) to Yudkowsky: "I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion where they are wrong." (https://www.lesswrong.com/posts/MwQRucYo6BZZwjKE7/einstein-s-arrogance)]

[TODO: Vassar slapping me down in June 2008 (on the same day that I crossdressed in front of Seanan and Katie!): https://www.overcomingbias.com/2008/06/why-do-psychopa.html#comment-518267438]

[TODO (when I have internet privs again): this was the same day I crossdressed in front of S. and K., the first time I was crossdressed in front of other people ever!!]

Sex differences would come up a couple more times in one of the last Sequences, on "Fun Theory"—speculations on how life could be truly _good_ if the world were superintelligently optimized for human values, in contrast to the cruelty and tragedy of our precarious existence [in a world shaped only by blind evolutionary forces](https://www.lesswrong.com/posts/sYgv4eYH82JEsTD34/beyond-the-reach-of-god).

According to Yudkowsky, one of the ways in which people's thinking about artificial intelligence usually goes wrong is [anthropomorphism](https://www.lesswrong.com/posts/RcZeZt8cPk48xxiQ8/anthropomorphic-optimism)—expecting arbitrary AIs to behave like humans, when really "AI" corresponds to [a much larger space of algorithms](https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general). As a social animal, predicting other humans is one of the things we've evolved to be good at, and the way that works is probably via "empathic inference": [I predict your behavior by imagining what _I_ would do in your situation](https://www.lesswrong.com/posts/Zkzzjg3h7hW5Z36hK/humans-in-funny-suits).
Since all humans are very similar, [this appeal-to-black-box](https://www.lesswrong.com/posts/9fpWoXpNv83BAHJdc/the-comedy-of-behaviorism) works pretty well in our lives (though it won't work on AI). And from this empathy, evolution also coughed up the [moral miracle](https://www.lesswrong.com/posts/pGvyqAQw6yqTjpKf4/the-gift-we-give-to-tomorrow) of [_sympathy_, intrinsically caring about what others feel](https://www.lesswrong.com/posts/NLMo5FZWFFq652MNe/sympathetic-minds).

In ["Interpersonal Entanglement"](https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement), Yudkowsky appeals to the complex moral value of sympathy as an argument against the desirability of nonsentient sex partners (_catgirls_ being the technical term). Being emotionally intertwined with another actual person is one of the things that makes life valuable, that would be lost if people just had their needs met by soulless catgirl holodeck characters.

But there's a problem, Yudkowsky argues: women and men aren't designed to make each other optimally happy. The abstract game between the two human life-history strategies in the environment of evolutionary adaptedness had a conflicting-interests as well as a shared-interests component, and human psychology still bears the design signature of that game denominated in inclusive fitness, even though [no one cares about inclusive fitness](https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers).

(Peter Watts: ["And God smiled, for Its commandment had put Sperm and Egg at war with each other, even unto the day they made themselves obsolete."](https://www.rifters.com/real/Blindsight.htm))

The scenario of Total Victory for the ♂ player in the conflicting-interests subgame is not [Nash](https://en.wikipedia.org/wiki/Nash_equilibrium). The design of the entity who _optimally_ satisfied what men want out of women would not be, and _could_ not be, within the design parameters of actual women. (And _vice versa_ and respectively, but in case you didn't notice, this blog post is all about male needs.)

Yudkowsky dramatized the implications in a short story, ["Failed Utopia #4-2"](https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2), portraying an almost-aligned superintelligence constructing a happiness-maximizing utopia for humans—except that because of the mismatch in the sexes' desires, and because the AI is prohibited from editing people's minds, the happiness-maximizing solution (according to the story) turns out to be splitting up the human species by sex and giving women and men their own _separate_ utopias (on [Venus and Mars](https://en.wikipedia.org/wiki/Gender_symbol#Origins), ha ha), complete with artificially-synthesized romantic partners.

Of course no one _wants_ that—our male protagonist doesn't _want_ to abandon his wife and daughter for some catgirl-adjacent (if conscious) hussy. But humans _do_ adapt to loss; if the separation were already accomplished by force, people would eventually move on, and post-separation life with companions superintelligently optimized _for you_ would ([_arguendo_](https://en.wikipedia.org/wiki/Arguendo)) be happier than life with your real friends and family, whose goals will sometimes come into conflict with yours because they weren't superintelligently designed _for you_.

The alignment-theory morals are those of [unforeseen maxima](https://arbital.greaterwrong.com/p/unforeseen_maximum) and [edge instantiation](https://arbital.greaterwrong.com/p/edge_instantiation).
An AI designed to maximize happiness would kill all humans and tile the galaxy with maximally-efficient happiness-brainware. If this sounds "crazy" to you, that's the problem with anthropomorphism I was telling you about: [don't imagine "AI" as an emotionally-repressed human](https://www.lesswrong.com/posts/zrGzan92SxP27LWP9/points-of-departure), just think about [a machine that calculates what actions would result in what outcomes](https://web.archive.org/web/20071013171416/http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/), and does the action that would result in the outcome that maximizes some function. It turns out that picking a function that doesn't kill everyone looks hard. Just tacking on the constraints that you can think of (make the _existing_ humans happy without tampering with their minds) [will tend to produce similar "crazy" outcomes that you didn't think to exclude](https://arbital.greaterwrong.com/p/nearest_unblocked).

At the time, [I expressed horror](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/PhiGnX7qKzzgn2aKb) at "Failed Utopia #4-2" in the comments section, because my quasi-religious psychological-sex-differences denialism required that I be horrified. But looking back a dozen years later—[or even four years later](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/D34jhYBcaoE7DEb8d)—my performative horror was missing the point. _The argument makes sense_.

Of course, it's important to notice that you'd need an additional [handwave](https://tvtropes.org/pmwiki/pmwiki.php/Main/HandWave) to explain why the AI in the story doesn't give every _individual_ their separate utopia—if existing women and men aren't optimal partners for each other, so too are individual men not optimal same-sex friends for each other. A faithful antisexist (as I was) might insist that that should be the _only_ moral, as it implies the other [_a fortiori_](https://en.wikipedia.org/wiki/Argumentum_a_fortiori). But if you're trying to _learn about reality_ rather than protect your fixed quasi-religious beliefs, it should be _okay_ for one of the lessons to get a punchy sci-fi short story; it should be _okay_ to think about the hyperplane between two coarse clusters, even while it's simultaneously true that a set of hyperplanes would suffice to [shatter](https://en.wikipedia.org/wiki/Shattered_set) every individual point, without deigning to acknowledge the existence of clusters.

On my reading of the text, it is _significant_ that the AI-synthesized complements for men are given their own name, the _verthandi_ (presumably after [the Norse deity](https://en.wikipedia.org/wiki/Ver%C3%B0andi)), rather than just being referred to as women. The _verthandi_ may _look like_ women, they may be _approximately_ psychologically human, but since the _detailed_ psychology of "superintelligently-engineered optimal romantic partner for a human male" is not going to come out of the distribution of actual human females, judicious exercise of the [tenth virtue of precision](http://yudkowsky.net/rational/virtues/) demands that a _different word_ be coined for this hypothetical science-fictional type of person. Calling the _verthandi_ "women" would be _worse writing_; it would _fail to communicate_ the impact of what has taken place in the story.

Another post in this vein that had a huge impact on me was ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions).
As an illustration of how [the hope for radical human enhancement is fraught with](https://www.lesswrong.com/posts/EQkELCGiGQwvrrp3L/growing-up-is-hard) technical difficulties, Yudkowsky sketches a picture of just how difficult an actual male-to-female sex change would be.

It would be hard to overstate how much of an impact this post had on me. I've previously linked it on [this](/2016/Nov/reply-to-ozy-on-two-type-mtf-taxonomy/#changing-emotions-link) [blog](/2017/Jan/the-line-in-the-sand-or-my-slippery-slope-anchoring-action-plan/#changing-emotions-link) [five](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/#changing-emotions-link) [different](/2018/Dec/untitled-metablogging-26-december-2018/#changing-emotions-link) [times](/2019/Aug/the-social-construction-of-reality-and-the-sheer-goddamned-pointlessness-of-reason/#changing-emotions-link). In June 2008, half a year before it was published, I encountered the [2004 Extropians mailing list post](http://lists.extropy.org/pipermail/extropy-chat/2004-September/008924.html) that the blog post had clearly been revised from. (The fact that I was trawling through old mailing list archives searching for Yudkowsky content that I hadn't already read, tells you something about what a fanboy I am—if, um, you hadn't already noticed.) I immediately wrote to a friend: "[...] I cannot adequately talk about my feelings. Am I shocked, liberated, relieved, scared, angry, amused?"

The argument goes: it might be easy to _imagine_ changing sex and refer to the idea in a short English sentence, but the real physical world has implementation details, and the implementation details aren't filled in by the short English sentence. The human body, including the brain, is an enormously complex integrated organism; there's no [plug-and-play](https://en.wikipedia.org/wiki/Plug_and_play) architecture by which you can just swap your brain into a new body and have everything Just Work without re-mapping the connections in your motor cortex. And even that's not _really_ a sex change, as far as the whole integrated system is concerned—

> Remapping the connections from the remapped somatic areas to the pleasure center will ... give you a vagina-shaped penis, more or less. That doesn't make you a woman. You'd still be attracted to girls, and no, that would not make you a lesbian; it would make you a normal, masculine man wearing a female body like a suit of clothing.
>
> [...]
>
> So to actually _become female_ ...
>
> We're talking about a _massive_ transformation here, billions of neurons and trillions of synapses rearranged. Not just form, but content—just like a male judo expert would need skills repatterned to become a female judo expert, so too, you know how to operate a male brain but not a female brain. You are the equivalent of a judo expert at one, but not the other. You have _cognitive_ reflexes, and consciously learned cognitive skills as well.
>
> [...]
>
> What happens when, as a woman, you think back to your memory of looking at Angelina Jolie photos as a man? How do you _empathize_ with your _past self_ of the opposite sex? Do you flee in horror from the person you were? Are all your life's memories distant and alien things? How can you _remember_, when your memory is a recorded activation pattern for neural circuits that no longer exist in their old forms? Do we rewrite all your memories, too?

But, well ... I mean, um ...
(I still really don't want to be blogging about this, but _somebody has to and no one else will_)

From the standpoint of my secret erotic fantasy, "normal, masculine man wearing a female body like a suit of clothing" is actually a _great_ outcome—the _ideal_ outcome. Let me explain.

The main plot of my secret erotic fantasy accommodates many frame stories, but I tend to prefer those that invoke the [literary genre of science](https://www.lesswrong.com/posts/4Bwr6s9dofvqPWakn/science-as-attire), and posit "technology" rather than "spells" or "potions" as the agent of transformation, even if it's all ultimately magic (where ["magic" is a term of art for anything you don't understand how to implement as a computer program](https://www.lesswrong.com/posts/kpRSCH7ALLcb6ucWM/say-not-complexity)). So imagine having something like [the transporter in _Star Trek_](https://memory-alpha.fandom.com/wiki/Transporter), but you re-materialize with the body of someone else, rather than your original body—a little booth I could walk in, dissolve in a tingly glowy special effect for a few seconds, and walk out looking like (say) [Nana Visitor (circa 1998)](https://memory-alpha.fandom.com/wiki/Kay_Eaton?file=Kay_Eaton.jpg). (In the folklore of [female-transformation erotica](/2016/Oct/exactly-what-it-says-on-the-tin/), this machine is often called the ["morphic adaptation unit"](https://www.cyoc.net/interactives/chapter_115321.html).)

As "Changing Emotions" points out, this high-level description of a hypothetical fantasy technology leaves many details unspecified—not just the _how_, but the _what_. What would the indistinguishable-from-magical transformation booth do to my brain? [As a preference-revealing thought experiment](https://www.lesswrong.com/posts/DdEKcS6JcW7ordZqQ/not-taking-over-the-world), what would I _want_ it to do, if I can't change [the basic nature of reality](https://www.lesswrong.com/posts/tPqQdLCuxanjhoaNs/reductionism), but if engineering practicalities weren't a constraint?

(That is, I'm allowed to posit any atom-configuration without having to worry about how you would get all the atoms in the right place, but I'm not allowed to posit tethering my immortal soul to a new body, because [souls](https://www.lesswrong.com/posts/u6JzcFtPGiznFgDxP/excluding-the-supernatural) [aren't](https://www.lesswrong.com/posts/7Au7kvRAPREm3ADcK/psychic-powers) [real](https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies).)

The anti-plug-and-play argument makes me confident that it would have to change _something_ about my mind in order to integrate it with a new female body—if nothing else, my unmodified brain doesn't physically _fit_ inside Nana Visitor's skull. ([One meta-analysis puts the sex difference in intracranial volume and brain volume at](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3969295/) a gaping [Cohen's _d_](/2019/Sep/does-general-intelligence-deflate-standardized-effect-sizes-of-cognitive-sex-differences/) ≈ 3.0 and 2.1, respectively, and Visitor doesn't look like she has an unusually large head.)

Fine—we're assuming that difficulty away and stipulating that the magical transformation booth can make the _minimal_ changes necessary to put my brain in a female body, and have it fit, and have all the motor-connection/body-mapping stuff line up so that I can move and talk normally in a body that feels like mine, without being paralyzed or needing months of physical therapy to re-learn how to walk.

I want this more than I can say. But is that _all_ I want?
What about all the _other_ sex differences in the brain? Male brains are more lateralized—doing [relatively more communication within hemispheres rather than between](https://www.pnas.org/content/111/2/823); there are language tasks that women and men perform equally well on, but [men's brains use only the _left_ inferior frontal gyrus, whereas women's use both](/papers/shaywitz-et_al-sex_differences_in_the_functional_organization_of_the_brain_for_language.pdf). Women have a relatively thicker corpus callosum; men have a relatively larger amygdala. Fetal testosterone levels [increase the amount of gray matter in posterior lateral orbitofrontal cortex, but decrease the gray matter in Wernicke's area](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3306238/) ...

Do I want the magical transformation technology to fix all that, too? Do I have _any idea_ what it would even _mean_ to fix all that, without spending multiple lifetimes studying neuroscience?

I think I have just enough language to _start_ to talk about what it would mean. Since sex isn't an atomic attribute, but rather a high-level statistical regularity such that almost everyone can be cleanly classified as "female" or "male" _in terms of_ lower-level traits (genitals, hormone levels, _&c._), then, abstractly, we're trying to take points from the male distribution and map them onto the female distribution in a way that preserves as much structure (personal identity) as possible. My female analogue doesn't have a penis (because then she wouldn't be female), but she is going to speak American English like me and be [85% Ashkenazi like me](/images/ancestry_report.png), because language and autosomal genes don't have anything to do with sex.

The hard part has to do with traits that are meaningfully sexually dimorphic, but not as a discrete dichotomy—where the sex-specific universal designs differ in ways that are _subtler_ than the presence or absence of entire reproductive organs. (Yes, I know about [homology](https://en.wikipedia.org/wiki/Homology_(biology))—and _you_ know what I meant.) We are _not_ satisfied if the magical transformation technology swaps out my penis and testicles for a functioning female reproductive system without changing the rest of my body, because we want the end result to be indistinguishable from having been drawn from the female distribution (at least, indistinguishable _modulo_ having my memories of life as a male before the magical transformation), and a man-who-somehow-magically-has-a-vagina doesn't qualify.

The "obvious" way to do the mapping is to keep the same percentile rank within each trait (given some suitably exhaustive parsing and factorization of the human design into individual "traits"), but take it with respect to the target sex's distribution. I'm 5′11″ tall, which [puts me at](https://dqydj.com/height-percentile-calculator-for-men-and-women/) the 73rd percentile for American men, about 6/10ths of a standard deviation above the mean. So _presumably_ we want to say that my female analogue is at the 73rd percentile for American women, about 5′5½″.

You might think this is "unfair": some women—about 7 per 1000—are 5′11″, and we don't want to say they're somehow _less female_ on that account, so why can't I keep my height? The problem is that if we refuse to adjust for every trait for which the female and male distributions overlap (on the grounds that _some_ women have the same trait value as my male self), we don't end up with a result from the female distribution.
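As a minimal sketch of that percentile-preserving map in Python (the mean and standard-deviation parameters are rough, assumed stand-ins for the American adult height distributions, not authoritative figures):

```python
from scipy.stats import norm

# Assumed, approximate U.S. adult height distributions in inches
# (illustrative parameters, not authoritative figures).
male_height = norm(loc=69.2, scale=2.9)
female_height = norm(loc=63.7, scale=2.7)

def female_analogue(trait_value, male_dist, female_dist):
    """Map a trait value to the same percentile rank in the female distribution."""
    return female_dist.ppf(male_dist.cdf(trait_value))

print(round(male_height.cdf(71.0), 2))  # => 0.73 (5'11" is the 73rd percentile)
print(round(female_analogue(71.0, male_height, female_height), 1))  # => 65.4, about 5'5.5"
```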
The typical point in a high-dimensional distribution is _not_ typical along each dimension individually. [In 100 flips of a biased coin](http://zackmdavis.net/blog/2019/05/the-typical-set/) that lands Heads 0.6 of the time, the _single_ most likely sequence is 100 Heads, but there's only one of those and you're _vanishingly_ unlikely to actually see it. The [sequences you'll actually observe will have close to 60 Heads](https://en.wikipedia.org/wiki/Asymptotic_equipartition_property). Each such sequence is individually less probable than the all-Heads sequence, but there are vastly more of them. Similarly, [most of the probability-mass of a high-dimensional multivariate normal distribution is concentrated in a thin "shell" some distance away from the mode](https://www.johndcook.com/blog/2011/09/01/multivariate-normal-shell/), for the same reason. (The _same_ reason: the binomial distribution converges to the normal in the limit of large _n_.)

Statistical sex differences are like flipping two different collections of coins with different biases, where the coins represent various traits. Learning the outcome of any individual flip doesn't tell you which set that coin came from, but [if we look at the aggregation of many flips, we can get _godlike_ confidence](https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy-1) as to which collection we're looking at.

A single-variable measurement like height is like a single coin: unless the coin is _very_ biased, one flip can't tell you much about the bias. But there are lots of things about people for which it's not that they can't be measured, but that the measurements require _more than one number_—which correspondingly offer more information about the distribution generating them.

And knowledge about the distribution is genuinely informative. Occasionally you hear progressive-minded people [dismiss and disdain simpleminded transphobes who believe that chromosomes determine sex](https://archive.is/y5V9i), when actually, most people haven't been karyotyped and don't _know_ what chromosomes they have. Certainly, I agree that almost no one interacts with sex chromosomes on a day-to-day basis; no one even knew that sex chromosomes _existed_ before 1905. [(Co-discovered by a woman!)](https://en.wikipedia.org/wiki/Nettie_Stevens)

But the function of [intensional definitions](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions) in human natural language isn't to exhaustively [pinpoint](https://www.lesswrong.com/posts/3FoMuCLqZggTxoC3S/logical-pinpointing) a concept in the detail it would be implemented in an AI's executing code, but rather to provide a "treasure map" sufficient for a listener to pick out the corresponding concept in their own world-model: that's why [Diogenes exhibiting a plucked chicken in response to Plato's definition of a human as a "featherless biped"](https://www.lesswrong.com/posts/jMTbQj9XB5ah2maup/similarity-clusters) seems like a cheap "gotcha"—we all instantly know that's not what Plato meant.
["The challenge is figuring out which things are similar to each other—which things are clustered together—and sometimes, which things have a common cause."](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) But sex chromosomes, and to a large extent specifically the [SRY gene](https://en.wikipedia.org/wiki/Testis-determining_factor) located on the Y chromosome, _are_ such a common cause—the root of the [causal graph](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models) underlying all _other_ sex differences. A smart natural philosopher living _before_ 1905, knowing about all the various observed differences between women and men, might have guessed at the existence of some molecular mechanism of sex determination, and been _right_. By the "treasure map" standard, "XX is female; XY is male" is a pretty _well-performing_ definition—if you're looking for a [_simple_ membership test](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests) that provides a lot of information about the many intricate ways in which females and males statistically differ. Take faces. People are [verifiably very good at recognizing sex from (hair covered, males clean-shaven) photographs of people's faces](/papers/bruce_et_al-sex_discrimination_how_do_we_tell.pdf) (96% accuracy, which is the equivalent of _d_ ≈ 3.5), but we don't have direct introspective access into what _specific_ features our brains are using to do it; we just look, and _somehow_ know. The differences are real, but it's not a matter of any single, simple measurement you could perform with a ruler (like the distance between someone's eyes). Rather, it's a high-dimensional _pattern_ in many measurements you could take with a ruler, no one of which is definitive. [Covering up the nose makes people slower and slightly worse at sexing faces, but people don't do better than chance at guessing sex from photos of noses alone](/papers/roberts-bruce-feature_saliency_in_judging_the_sex_and_familiarity_of_faces.pdf). Notably, for _images_ of faces, we actually _do_ have transformation technology! (Not "magical", because we know how it works.) AI techniques like [generative adversarial networks](https://arxiv.org/abs/1812.04948) and [autoencoders](https://towardsdatascience.com/generating-images-with-autoencoders-77fd3a8dd368) can learn the structure of the distribution of facial photographs, and use that knowledge to synthesize faces from scratch (as demonstrated by [_thispersondoesnotexist.com_](https://thispersondoesnotexist.com/))—or [do things like](https://arxiv.org/abs/1907.10786) sex transformation (as demonstrated by [FaceApp](https://www.faceapp.com/), the _uniquely best piece of software in the world_). If you let each pixel vary independently, the space of possible 1024x1024 images is 1,048,576-dimensional, but the vast hypermajority of those images aren't photorealistic human faces. Letting each pixel vary independently is the wrong way to think about it: changing the lighting or pose changes a lot of pixels in what humans would regard as images of "the same" face. So instead, our machine-learning algorithms learn a [compressed](https://www.lesswrong.com/posts/ex63DPisEjomutkCw/msg-len) representation of what makes the tiny subspace (relative to images-in-general) of faces-in-particular similar to each other. 
That [latent space](https://towardsdatascience.com/understanding-latent-space-in-machine-learning-de5a7c687d8d) is a lot smaller (say, 512 dimensions), but still rich enough to embed the high-level distinctions that humans notice: [you can find a hyperplane that separates](https://youtu.be/dCKbRCUyop8?t=1433) smiling from non-smiling faces, or glasses from no-glasses, or young from old, or different races—or female and male. Sliding along the [normal vector](https://en.wikipedia.org/wiki/Normal_(geometry)) to that [hyperplane](https://en.wikipedia.org/wiki/Hyperplane) gives the desired transformation: producing images that are "more female" (as the model has learned that concept) while keeping "everything else" the same.

Two-dimensional _images_ of people are _vastly_ simpler than the actual people themselves in the real physical universe. But _in theory_, a lot of the same _mathematical principles_ would apply to hypothetical future nanotechnology-wielding AI systems that could, like the AI in "Failed Utopia #4-2", synthesize a human being from scratch (this-person-_didn't_-exist-dot-com?), or do a real-world sex transformation (PersonApp?)—and the same statistical morals apply to reasoning about sex differences in psychology and (which is to say) the brain.

Daphna Joel _et al._ [argue that](https://www.pnas.org/content/112/50/15468) human brains are "unique 'mosaics' of features" that cannot be categorized into distinct _female_ and _male_ classes, because it's rare for brains to be "internally consistent"—female-typical or male-typical along _every_ dimension. It's true and important that brains aren't _discretely_ sexually dimorphic the way genitals are, but as [Marco Del Giudice _et al._ point out](http://cogprints.org/10046/1/Delgiudice_etal_critique_joel_2015.pdf), the "cannot be categorized into two distinct classes" claim seems false in an important sense. The lack of "internal consistency" in Joel _et al._'s sense is exactly the behavior we expect from multivariate normal-ish distributions with different-but-not-vastly-different means. (There aren't going to be many traits where the sexes are like, _four_ or whatever standard deviations apart.) It's just like how sequences of flips of a Heads-biased and Tails-biased coin are going to be unique "mosaics" of Heads and Tails, but pretty distinguishable with enough flips—and indeed, with the right stats methodology, [MRI brain scans can predict sex at 96.8% accuracy](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6374327/).

Sex differences in the brain are like sex differences in the skeleton: anthropologists can tell female and male skeletons apart (the [pelvis is shaped differently](https://johnhawks.net/explainer/laboratory/sexual-dimorphism-pelvis), for obvious reasons), and [machine-learning models can see very reliable differences that human radiologists can't](/papers/yune_et_al-beyond_human_perception_sexual_dimorphism_in_hand_and_wrist_radiographs.pdf), but neither sex has entire _bones_ that the other doesn't, and the same is true of brain regions. (The evopsych story about complex adaptations being universal-up-to-sex suggests that sex-specific bones or brain regions should be _possible_, but in a bit of _relative_ good news for antisexism, apparently evolution didn't need to go that far. Um, in humans—a lot of other mammals actually have [a penis bone](https://en.wikipedia.org/wiki/Baculum).)
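Here's a self-contained toy simulation of that statistical moral (the 40-trait count and the per-trait gap of _d_ = 0.5 are assumed, illustrative parameters, not measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dims, d = 10_000, 40, 0.5  # assumed: 40 traits, each d = 0.5 apart

females = rng.normal(0.0, 1.0, size=(n, dims))
males = rng.normal(d, 1.0, size=(n, dims))

# "Internal consistency" in Joel et al.'s sense: males who fall on the
# male-typical side of the midpoint on *every* trait. The expected
# fraction is about 0.6**40, roughly 1e-9—virtually everyone is a "mosaic".
consistent = np.mean(np.all(males > d / 2, axis=1))
print(f"internally consistent males: {consistent:.0%}")  # => 0%

# But the *sum* of the 40 traits differs by d * sqrt(40), about 3.2 standard
# deviations, so a crude aggregate classifier is still highly accurate.
# (The same algebra—accuracy = Φ(d/2) for equal-variance normals—is how
# 96% accuracy for faces translates to d ≈ 3.5 above.)
threshold = dims * d / 2
accuracy = (np.mean(males.sum(axis=1) > threshold)
            + np.mean(females.sum(axis=1) < threshold)) / 2
print(f"aggregate classification accuracy: {accuracy:.1%}")  # => about 94%
```

With modest per-trait gaps, virtually no individual is "internally consistent", and yet the aggregate classification is near-certain—both facts at once, with no tension between them.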
Maybe this all just looks like supplementary Statistics Details brushed over some basic facts of human existence that everyone knows?

I'm a pretty weird guy, in more ways than one. I am not prototypically masculine. Most men are not like me. If I'm allowed to cherry-pick what measurements to take, I can name ways in which my mosaic is more female-typical than male-typical. (For example, I'm _sure_ I'm above the female mean in [Big Five Neuroticism](https://en.wikipedia.org/wiki/Big_Five_personality_traits).) ["[A] weakly negative correlation can be mistaken for a strong positive one with a bit of selective memory."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences)

But "weird" represents a much larger space of possibilities than "normal", much as [_nonapples_ are a less cohesive category than _apples_](https://www.lesswrong.com/posts/2mLZiWxWKZyaRgcn7/selling-nonapples). If you _sum over_ all of my traits, everything that makes me, _me_—it's going to be a point in the _male_ region of the existing, unremediated, genderspace. In the course of _being myself_, I'm going to do more male-typical things than female-typical things, not because I'm _trying_ to be masculine (I'm not), and not because I "identify as" male (I don't—or I wouldn't, if someone could give me a straight answer as to what this "identifying as" operation is supposed to consist of), but because I literally in-fact am male in the same sense that male chimpanzees or male mice are male, whether or not I like it (I don't—or I wouldn't, if I still believed that preference was coherent), and whether or not I _notice_ all the little details that implies (I almost certainly don't).

Okay, maybe I'm _not_ completely over my teenage religion of psychological sex differences denialism?—that belief still feels uncomfortable to put my weight on. I would _prefer_ to believe that there are women who are relevantly "like me" with respect to some fair (not gerrymandered) metric on personspace. But, um ... it's not completely obvious whether I actually know any? (Well, maybe two or three.)

When I look around me—most of the people in my robot cult (and much more so if you look at the core of old-timers from the _Overcoming Bias_ days, rather than the greater Berkeley "community" of today) are male. Most of the people in my open-source programming scene are male. These days, [most of the _women_](/2020/Nov/survey-data-on-cis-and-trans-women-among-haskell-programmers/) in [my open-source programming scene](/2017/Aug/interlude-vii/) are male. Am I not supposed to _notice_?

I could _assert_ that it's all down to socialization and stereotyping and self-fulfilling prophecies—and I know that _some_ of it is. (Self-fulfilling prophecies [are coordination equilibria](/2020/Jan/book-review-the-origins-of-unfairness/).) But I still want to speculate that the nature of my X factor—the things about my personality that let me write the things I do even though I'm [objectively not that smart](/images/wisc-iii_result.jpg) compared to some of my robot-cult friends—is a pattern of mental illness that could realistically only occur in males.
(Yudkowsky: ["It seems to me that male teenagers especially have something like a _higher cognitive temperature_, an ability to wander into strange places both good and bad."](https://www.lesswrong.com/posts/xsyG7PkMekHud2DMK/of-gender-and-rationality)) I can't assert _with a straight face_ that all the gaps _must_ vanish after the revolution, because _I've read the literature_ and can tell you several observations about chimps and [congenital adrenal hyperplasia](/images/cah_diffs_table.png) that make that seem _relatively unlikely_. I was once told by a very smart friend (who, unlike me, is not a religious fantatic), "Boys like games with challenges and points; girls like games with characters and stories." I said, "I like characters and stories! I think." He said, "I know, but at the margin, you seem suboptimally far in the challenges and points direction. But that's fine; that's what women are for." And what evidence could I point to, to show him that he's _bad and wrong_ for saying that, if he's not already religiously required to believe it? _Alright_. So _in principle_, you could imagine having a PersonApp that maps me to a point in the female region of configuration space in some appropriately structure-preserving way, to compute my female analogue who is as authentically _me_ as possible while also being authentically female, down to her pelvis shape, and the proportion of gray matter in her posterior lateral orbitofrontal cortex, and—the love of a woman for a man. What is she like, concretely? Do I know how to imagine that? Or if I can imagine it, can I _describe_ it in this blog post? I am presently sorrowful that [(following John Holt)](https://www.greaterwrong.com/posts/S8ysxzgraSeuBXnpk/rationality-quotes-july-2009/comment/DtyDzN5etD4woXtFM) we all know more than we can say. I have mental models of people, and the models get queried for predictions in the course of planning my social behavior, but I don't have introspective access to the differences between models. It's easier to imagine people in hypothetical situations and say things like, "That doesn't sound like something she'd _do_, but _he_ would" (and be correct), than to say exactly it is about her character and his that generated these predictions, such that [my words would paint a picture in your head](https://www.lesswrong.com/posts/YF9HB6cWCJrDK5pBM/words-as-mental-paintbrush-handles) that would let you make your own predictions about her and him without having met them—just like how you're better at recognizing someone's face, than at describing their face in words in enough detail for an artist to draw a portrait. As a _first-order approximation_, I do have a sister. I think the family resemblance between us is stronger than with either parent. We're about equally intelligent—OK, she's probably smarter than me; [the SAT is pretty](https://www.gwern.net/docs/iq/2004-frey.pdf) [_g_-loaded](/2020/Apr/book-review-human-diversity/#the-length-of-a-hyperellipsoid) and her 1580 (out of 1600) trounces my 2180 (on [the out-of-2400 scale used between 2005 and 2016](https://en.wikipedia.org/wiki/SAT#2005_changes,_including_a_new_2400-point_score), such that 2180 proportionally scales down to 1453 out of 1600). Our dark hair curls into helices with similar radius. We even have similar mannerisms, I think? She's 5′6½″. But in a lot of ways that matter, we are _very_ different people. When you compare resumés and representative work-samples of what we've _done_ with our (roughly) similar intelligence—her chemistry Ph.D. 
from a top-10 university, my dropout–autodidact's passion culminating in this _batshit insane_ secret ("secret") blog about the philosophy of science and the etiology of late-onset gender dysphoria in males—it ... paints a different picture. Of course same-sex siblings would _also_ paint different pictures. (Identical twins aren't _duplicates_ of each other, either.) But the advantage of having a sister is that it gives my brain's pattern-matching faculties a target to [sight](https://en.wikipedia.org/wiki/Sight_(device)) against. As a _second_-order approximation, my female analogue is close to being somewhere on the vector in personspace between me and my sister (but not exactly on that line, because the line spans both the difference-between-siblings and the difference-between-sexes).

(All this is in accordance with ["Everything is a vector space" philosophy](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace) implied by this blog's [TLD](https://en.wikipedia.org/wiki/Top-level_domain)—if it turns out that something _isn't_ a vector space, I'm not sure I want to know about it. I can hope that my description of the _methodology_ is valuable, even if your brain's pattern-matching faculties can't follow along with the same example, because you haven't met my sister and only know the aspects of me that shine through to the blog.)

Okay. Having supplied just enough language to _start_ to talk about what it would even mean to actually become female—is that what I _want_? I mean, if it's reversible, I would definitely be extremely eager to _try_ it ...

I had said we're assuming away engineering difficulties in order to make the thought experiment more informative about pure preferences, but let's add one constraint to _force_ the thought experiment to be informative about preferences, and not allow the wishy-washy evasion of "I'm eager to _try_ it." What if I can't just "try" it? What if the machine can only be used once? Or (my preference) if some deep "brain sex" transformation only works once, even if a more superficial motor remapping is easy to do or re-do? Come up with whatever frame story you want for this: maybe the machine costs my life savings just to rent for two minutes, or maybe the transformation process is ever-so-slightly imperfect, such that you can't re-transform someone who's already been transformed once, like a photocopy being a perfectly acceptable substitute for an original document, but photocopies-of-photocopies rapidly losing quality.

In that case, if I have to choose ... I _don't_ think I want to be Actually Female? I _like_ who I am on the inside, and don't need to change it. I don't _want_ to stop loving challenges and points—or women!—in the way that I do. And if I don't know enough neuroscience to have an _informed_ preference about the ratio of gray to white matter in my posterior lateral orbitofrontal cortex, I'm sure it's _probably fine_.

At the same time, the idea of having a female body still seems like _the most appealing thing in the world_. If artificial superintelligence gives me BodyApp to play with for a subjective year and tiles the _rest_ of our future lightcone with paperclips, that's _fine_; I will die _happy_.

So, I guess ...

If I'm being _really_ honest with myself here ...

And I successfully make-believe that I can tell the truth with no consequences on my secret ("secret") blog even though at this point my paper-thin pseudonymity is more like a genre convention rather than providing any real privacy ...
I guess I _want_ to be "a normal [...] man wearing a female body like a suit of clothing."

Is that weird? Is that wrong?

Okay, yes, it's _obviously_ weird and wrong, but should I care more about not being weird and wrong, than I do about my deepest most heartfelt desire that I've thought about every day for the last eighteen years?

This is probably counterintuitive if you haven't been living with it your entire adult life? People have _heard of_ the "born in the wrong body" narrative, which makes intuitive sense: if female souls are designed to work female bodies, and you're a female soul tethered to a male body, you can imagine the soul finding the mismatch distressing and wanting to fix it.

But if, as I'm positing for my case, there _is no mismatch_ in any objective sense, then where does the desire come from? How do you make sense of wanting to change physiological sex, for reasons that _don't_ have anything to do with already neurologically resembling that sex? What's really going on there, psychologically?

Part of what makes this so hard to talk about, _besides_ it being weird and wrong, is that we don't really understand how our own minds work in a legible way; we just experience things. Even if you're [not sure that other people really see "the same" colors as you](https://www.lesswrong.com/posts/3wYjyQ839MDsZ6E3L/seeing-red-dissolving-mary-s-room-and-qualia) (and you don't know how to [reformulate the question](https://www.lesswrong.com/posts/rQEwySCcLtdKHkrHp/righting-a-wrong-question) [to not](https://www.lesswrong.com/posts/Mc6QcrsbH5NRXbCRX/dissolving-the-question) [be confused](https://www.lesswrong.com/posts/XzrqkhfwtiSDgKoAF/wrong-questions)), you can at least [agree on color _words_](https://www.lesswrong.com/posts/4hLcbXaqudM9wSeor/philosophy-in-the-darkest-timeline-basics-of-the-evolution) by pointing to [Pantone swatches](https://en.wikipedia.org/wiki/Pantone#Pantone_Color_Matching_System), but I'm not sure I have the language to convey the facts about the qualia I associate with the word _autogynephilia_ to someone who doesn't already feel something similar.

But I have to try.

A clue: when I'm ... uh. When I'm—well, you know ...

(I guess I can't evade responsibility for the fact that I am, in fact, blogging about this.)

A clue: when I'm masturbating, and imagining all the forms I would take if the magical transformation technology were real (the frame story can vary, but the basic idea is always the same), I don't think I'm very _good_ at first-person visualization? The _content_ of the fantasy is about _me_ being a woman (I mean, having a woman's body), but the associated mental imagery mostly isn't the first-person perspective I would actually experience if the fantasy were real; I think I'm mostly imagining a specific woman (which one, varies a lot) as from the outside, admiring her face, and her voice, and her breasts, but somehow wanting the soul behind those eyes to be _me_. Wanting _my_ body to be shaped like _that_, to be in control of that avatar of beauty—not even to _do_ anything overtly "sexy" in particular, but just to exist like that.

If the magical transformation technology were real, I would want a full-length mirror. (And in the real world, I would probably crossdress a _lot_ more often, if I could pass to myself in the mirror. My face ruins it and makeup doesn't help.)

What's going on here?

_Speaking_ of mirrors, the sexologist [James Cantor speculates](https://youtu.be/q3Ub65CwiRI?t=281): mirror neurons.
Way, way back in the 1980s, Italian neuroscientists wired up the brains of macaque monkeys with electrodes, and noticed that some of the _same_ brain regions would light up when the monkey grabbed a raisin, and when the monkey watched the _researcher_ eat a raisin. These "mirror neurons" are speculated to form the basis of empathy.

So, the _phrase_ "mirror neurons" is not and _cannot_ be an answer. Real understanding is about detailed predictive models, not [what words to repeat back in school](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password). I can't expect to understand the real answer without spending multiple years studying neuroscience, and if I did, I couldn't expect to transmit the model to you in one blog post. (That would be _several_ blog posts.)

Still, the macaque–raisin anecdote is at least _suggestive_ of hypotheses in the _general area_ of, "The brain uses _shared_ representations for 'self' and others, in a way such that it's possible for the part of the brain that computes sexual attraction to 'get confused' about the self–other distinction in a way that manifests as sexual desire to _be_ the object of attraction." Or _something like that_.

One interesting prediction of this story is that if the nature of the "confusion", this—["erotic target location error"](/papers/lawrence-etle_an_underappreciated.pdf) (ETLE)?—is agnostic to the object of sexual attraction, then you should see the same pattern in men with unusual sexual interests. ("Men" because I think we legitimately want to be [shy about generalizing across sexes](/papers/bailey-what_is_sexual_orientation_and_do_women_have_one.pdf) for sex differences in the parts of the mind that are specifically about mating.)

And this is actually what we see. Most men are attracted to women, but some fraction of them get off on the idea of _being_ women—autogynephilia. So if some men are attracted to, say, amputees, we would expect some fraction of _them_ to [get off on the idea of _being_ amputees](/papers/lawrence-clinical_and_theoretical_paralells.pdf)—[_apotemnophilia_](https://en.wikipedia.org/wiki/Body_integrity_dysphoria#History). Some men are, unfortunately, pedophiles, and [some fraction of them get off on the idea of being children](/papers/hsu-bailey-autopedophilia.pdf). Some men are interested in anthropomorphic animals, and [_being_ anthropomorphic animals](https://www.gwern.net/docs/psychology/2019-hsu.pdf)—["furries"](https://en.wikipedia.org/wiki/Furry_fandom).

Once I had an occasion [(don't ask)](https://www.greaterwrong.com/posts/uwBKaeQzsvkcErmBm/ialdabaoth-is-banned/comment/PqZ2NFfj2b2dJoZ9N) to look up if there was a word for having a statue fetish. Turns out it's called _agalmatophilia_, [defined by _Wikipedia_ as](https://en.wikipedia.org/wiki/Agalmatophilia) "sexual attraction to a statue, doll, mannequin or other similar figurative object", which "may include a desire for actual sexual contact with the object, a fantasy of having sexual (or non-sexual) encounters with an animate or inanimate instance of the preferred object, the act of watching encounters between such objects, or"—_wait for it_ ... "sexual pleasure gained from thoughts of being transformed or transforming another into the preferred object."
I don't think the _Wikipedia_ editor who wrote that last phrase was being a shill for the general ETLE hypothesis because it has political implications; I think "among guys who are interested in _X_, some fraction of them want to be _X_" is just _something you notice_ when you honestly look at the world of guys who are interested in arbitrary _X_.

And, and—I've never told anyone this and have barely thought about it in years, but while I'm blogging about all this anyway—I have a few _vague_ memories from _early_ teenagerhood of having transformation fantasies about things other than women. Like wondering (while masturbating) what it would be like to be a dog, or a horse, or a marble statue of a woman. Anyway, I lost interest in those before too long, but I think this vague trace-of-a-memory is evidence that the thing going on with me is an underlying ETLE-like predisposition rather than an underlying intersex condition.

I don't _know_ the details of what this "erotic target location error" thing is supposed to _be_, exactly—and would expect my beliefs to change a lot if _anyone_ knew the details and could explain them to me—but I think _some story in this general vicinity_ has to be the real explanation of what's going on with me. How _else_ do you make sense of an otherwise apparently normal biological male (whose physical and psychological traits seem to be basically in the male normal range, even if he's [one of those sensitive bookish males](http://unremediatedgender.space/2020/Sep/link-wells-for-boys/) rather than being "macho") having the _conjunction_ of the beautiful pure sacred self-identity thing _and_, specifically, erotic female-transformation fantasies of the kind I've described?

Am I supposed to claim to be a lesbian trapped inside a man's body? That I _am_ neurologically female in some real sense, and that's the true cause of my beautiful pure sacred self-identity thing? _Maybe_ that could be spun to seem superficially plausible to those who know me casually, but I don't know how to square that account with the _details_ of my inner life (including the details that I wouldn't blog about if I didn't have to). I think if you used magical transformation technology to put an actual lesbian in a copy of my body, I can imagine her/him having [Body Horror](https://tvtropes.org/pmwiki/pmwiki.php/Main/BodyHorror) at her/his alien new form and wishing to be restored to her/his original body on _that_ account, and maybe her/his identification with her/his former sex ("gender") would look _sort of_ like my beautiful pure sacred self-identity thing (if you squint). But I _don't_ think she/he would spontaneously invent obsessively jacking off to fantasies of being able to magically transform into various _different_ female bodies ... unless she was _already_ into that stuff before being magically transformed into my twin.

But ... is that even a thing among many (or any) lesbians? To be clear, there is a _lot_ of porn in this genre! But it seems to mostly be created for and consumed by ... men? Adult human males?

I just don't see any _reason_ to doubt the obvious explanation that the root cause of my gender problems is specifically a bug in _male_ sexuality. I didn't have the fancy vocabulary for it then, but the basic idea seemed pretty obvious in 2005, and seems equally obvious now.

(A "bug" with respect to the design criteria of evolution, not with respect to the human morality that affirms that I _like_ being this way.
Some, fearing stigma, would prefer to tone-police "bug" down to "variation", but people who don't [understand the naturalistic fallacy](https://www.lesswrong.com/posts/YhNGY6ypoNbLJvDBu/rebelling-within-nature) aren't going to understand anything _else_ I'm saying, and I want to emphasize that the mirror-neurons-or-whatever and ordinary male heterosexuality weren't functionally optimized to collide like this.)

But it might not be obvious to _everyone_. The detailed exposition above about what it would even mean to change sex is the result of a _lot_ of thinking influenced by everything I've read and learned—and in particular, the reductionist methodology I learned from Yudkowsky, and in even more particular, the very specific warning in "Changing Emotions" (and its predecessor in the Extropians mailing-list archives) that changing sex is a _hard problem_.

We can imagine that a male who was _like_ me in having this erotic-target-location-erroneous sexuality and associated beautiful pure sacred self-identity feelings, but who [read different books in a different order](/2020/Nov/the-feeling-is-mutual/), might come to very different conclusions about himself. If you don't have the conceptual vocabulary to say, "I have a lot of these beautiful pure sacred self-identity feelings about being female, but it seems like a pretty obvious guess that there must be some sort of causal relationship between that and this erotic fantasy, which is realistically going to be a variation in _male_ sexuality," you might end up saying something simpler like, "I want to be a woman." Or possibly even, "I _am_ a woman, on the inside, where it counts."

(As Yudkowsky [occasionally](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences) [remarks](https://www.lesswrong.com/posts/f4RJtHBPvDRJcCTva/when-anthropomorphism-became-stupid), our _beliefs about_ how our minds work have very little impact on how they actually work. Aristotle thought the brain was an organ for cooling the blood, but he was just wrong; the theory did not _become true of him_ because he believed it.)

What theory I end up believing about myself _matters_, because different theories that purport to explain the same facts can make very different predictions about facts not yet observed, or about the effects of interventions. If I have some objective inner female gender as the result of a brain-intersex condition, then getting on, and _staying_ on, feminizing hormone replacement therapy (HRT) would presumably be a good idea specifically because my brain is designed to "run on" estrogen. But if my beautiful pure sacred self-identity feelings are fundamentally a misinterpretation of misdirected _male_ sexuality, then it's not clear that I _want_ the psychological effects of HRT: if there were some unnatural way to give me a female body (or just a more female-_like_ one) _without_ messing with my internal neurochemistry, that would actually be _desirable_.

Or, you might think that if the desire is just a confusion in male sexuality, maybe real-life body-modding _wouldn't_ be desirable? Maybe autogynephilic men _think_ they want female bodies, but if they actually transitioned in real life (as opposed to just having incompetently non-realistic daydreams about it all day and especially while masturbating), they would feel super-dysphoric about it, because (and which proves that) they're just perverted men, and not actual trans women, which are a different thing. You might think so!
But, empirically, I did grow (small) breasts as a result of [my five-month HRT experiment](/2017/Sep/hormones-day-156-developments-doubts-and-pulling-the-plug-or-putting-the-cis-in-decision/), and I think it's actually been a (small) quality-of-life improvement for approximately the reasons I expected going in. I just—like the æsthetic?—and wanted it to be part of _my_ æsthetic, and now it is, and I don't quite remember what my chest was like before, kind of like how I don't quite remember what it was like to have boy-short hair before I grew out my signature beautiful–beautiful ponytail. (Though I'm _still_ [kicking myself for not](/2017/Nov/laser-1/) taking a bare-chested "before" photo.) I don't see any particular reason to believe this experience wouldn't replicate all the way down the [slope of interventions](/2017/Jan/the-line-in-the-sand-or-my-slippery-slope-anchoring-action-plan/).

Fundamentally, I think I can make _better decisions_ for myself by virtue of having an accurate model of what's really going on with me—a model that makes all these fine mental distinctions using the everything-is-a-vector-space skill, such that I have the language to talk about my obsessive paraphilic desire to be shaped like a woman without wanting to actually be a woman, similarly to how the _verthandi_ in "Failed Utopia #4-2" aren't actually women.

If the _actual_ desire implemented in one's actual brain in the real physical universe takes the form of (roughly translating from desire into English) "You know, I kind of want my own breasts (_&c._)", it may be weird and perverted to _admit_ this and act on it (!!)—but would it be any _less_ weird and perverted to act on it under the false (in my case) pretense of an invisible female gender identity? If you know what the thing is, can it be any worse to just _own it_?

If we _actually had_ magical perfect transformation technology or something close to it—if you could grow a female body in a vat, and transfer my brain into it, and had a proven solution to the motor-mapping and skull-size issues—if it cost $250,000, I would take out a bank loan and _do it_, and live happily ever after.

Since we _don't_ have that ... the existing approximations don't really seem like a good idea for me, all things considered?

As a computer programmer, I have learned to fear complexity and dependencies. If you've ever wondered why it seems like [all software is buggy and terrible](https://danluu.com/everything-is-broken/), it's because _no one knows what they're doing_. Each individual programmer and engineer understands their _piece_ of the system well enough that companies can ship products that mostly do what they claim, but there's a lot of chaos and despair where the pieces don't quite fit, and no one knows why. (Maybe _someone_ could figure it out in a reasonable amount of time, but the user who is suffering and in pain has no way of buying their attention.)

But computing is the _easy_ case, a universe entirely of human design, of worlds that can be made and unmade on a whim (when that whim is specified in sufficient detail). Contrast that to the unfathomable messiness of evolved biological systems, and I think I have [reason to be wary](https://www.nickbostrom.com/evolution.pdf) of signing up to be a _lifelong medical patient_.
Not out of any particular distrust of doctors and biomedical engineers, but out of respect that their jobs—not necessarily the set of tasks they do to stay employed at actually existing hospitals and corporations, but the idealized Platonic forms of _their jobs_—are _much harder_ than almost anyone realizes.

_All_ drugs have side-effects; _all_ surgeries have the potential for complications. Through centuries of trial and error (where "error" means suffering and disfigurement and death), our civilization has accumulated a suite of hacks for which the benefits seem to exceed the costs (given circumstances you would prefer not to face in the first place).

In a miracle of science, someone made the observations to notice that human females have higher levels of [(8R,9S,13S,14S,17S)-13-Methyl-6,7,8,9,11,12,14,15,16,17-decahydrocyclopenta[a]phenanthrene-3,17-diol](https://en.wikipedia.org/wiki/Estradiol) than human males. In a glorious exhibition of mad science, someone did the experiments to notice that artificially synthesizing that ...-iol (or collecting it from [pregnant horses' urine](https://www.fundforanimals.org/duchess-sanctuary/about-the-duchess-sanctuary/pregnant-mare-urine.html)) and administering it to males successfully pushes some aspects of their phenotype in the female direction: [breast growth and fat redistribution and agreeableness—at the cost of increased risk of venous thromboembolism and osteoporosis](https://srconstantin.github.io/2016/10/06/cross-sex-hormone-therapy.html).

For all that my body is disappointingly male and therefore ugly, it _works_. It makes the hormones that it needs to function without me needing to [dissolve a pill under my tongue](/2017/Jul/whats-my-motivation-or-hormones-day-89/) every day—without saddling me with extra dependencies on the supply chains that make the pills, or the professional apparatus to draw my blood and tell me what pills to take—without me needing to know what "hormones" _are_.

For all that my penis is boring at best and annoying at worst, it _works_. The organ does the things that it's designed to do; it lets me pee while standing up, and reward myself while pretending that it isn't there.

Did you know that trans women [have to dilate their neovagina after bottom surgery](https://www.mtfsurgery.net/dilation.htm)? Yeah. There are these hard tubes of various widths, and you're supposed to stick them up there multiple times a day after surgery (and weekly indefinitely) to prevent the cavity from losing depth. I'm told that there are important technical reasons why it would be objectively wrong to use the phrase _open wound_ in this situation, but the body doesn't know the important technical reasons and you still need to dilate.

I am glad that these interventions _exist_ for the people who are brave and desperate enough to need them. But given that I'm not that desperate and not that brave, would it not be wiser to trust the paraphrased proverb and not look a gift man in the mouth?

My beautiful–beautiful ponytail was a _smart move_ (and hair length isn't sexually dimorphic anyway; it's only our culture's sexism that makes it seem relevant in this context). My [five-month HRT experiment](/tag/hrt-diary/) was a _smart move_, both for the beautiful–beautiful breast tissue, and [For Science](https://tvtropes.org/pmwiki/pmwiki.php/Main/ForScience). My [laser hair removal sessions](/tag/lasers/) were ...
arguably a waste of money, since I still have to shave even after 13 treatments?—but it at least got the density of my ugly–gross facial hair down a bit. Trying it was definitely a _smart move_ given what I knew at the time, and I _just might_ be rich enough and disgusted-by-facial-hair enough to go back for more density-reduction. (Electrolysis gets better results than laser, but it's more expensive and a lot more painful.)

People get cosmetic surgery sometimes for non-sex-change-related reasons. I guess if I grew a little braver and a little more desperate, I could imagine wanting to research if and how "mild" facial feminization surgery is a thing—just, selfishly, to be happier with my reflection. (Probably a _smarter move_ to check out [movie-grade latex masks](https://www.creafx.com/en/special-make-up-effects/taylor-silicone-mask/) first, to see if it's at all possible to attain the bliss of passing in the mirror _without_ taking a knife to my one and only real-life face.) And I should probably look into [figuring out if there's anything to be done](https://en.wikipedia.org/wiki/Pattern_hair_loss#Treatment) for my hairline before it gets any worse?

But _staying_ on transition-grade HRT indefinitely—doesn't seem like a smart move? Even though I would be happy with the fat-redistribution effects, I don't expect the health effects to be net-positive, and I don't expect the psychological effects to be net-desirable (even if I [wasn't](/2017/Jan/hormones-day-33/) [self-aware](/2017/Jul/whats-my-motivation-or-hormones-day-89/) enough to notice much besides libido change during my five-month experiment).

And _social_ transition—really doesn't seem like a smart move? If we _actually had_ magical perfect transformation technology, that would happen automatically (people are pretty good at noticing each other's sex), and I would expect to be very happy. (After some socio-psychological adjustment period; remember, in the real world, I didn't even manage to change _nicknames_.) But given that we _don't_ have magical perfect transformation technology, the main objection here is that I _don't expect to pull off_ that kind of ... perma-[LARP](https://en.wikipedia.org/wiki/Live_action_role-playing_game).

I mean _really_ pull it off—everyone in Berkeley and Portland will be very careful to respect your pronouns the minute you come out, but [_they will be lying_](/2019/Dec/reply-to-ozymandias-on-fully-consensual-gender/). I know, because I lie. Of course I _say_ "she" when [the intelligent social web](https://www.lesswrong.com/posts/AqbWna2S85pFTsHH4/the-intelligent-social-web) requires it—I'm not a _monster_—but it's only on a case-by-case basis whether I _believe_ it.

It's definitely [_possible_ to pass alright](/2018/Oct/the-information-theory-of-passing/) with a lot of work ([voice training for trans women](https://en.wikipedia.org/wiki/Voice_therapy_(transgender)#Voice_feminization) is a thing!), but it's not clear why I would want to put in all that work, when overall, my life is fundamentally _okay_ as ... a man? An adult human male? As a matter of objective fact, which doesn't care about my beautiful pure sacred self-identity feelings.

How dumb would I have to think you are, to expect you not to notice? And how dumb would you have to think I am, to expect me to expect you to _pretend_ not to notice?
-------

Even if I never took the beautiful pure sacred self-identity thing too literally, owning it for what it really is—an illusion, the scintillating but ultimately untrue thought—takes a different tone in the harsh light of my deconversion from psychological-sex-differences denialism.

In "Changing Emotions", Yudkowsky wrote—

> If I fell asleep and woke up as a true woman—not in body, but in brain—I don't think I'd call her "me". The change is too sharp, if it happens all at once.

In the comments, [I wrote](https://www.greaterwrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions/comment/4pttT7gQYLpfqCsNd)—

> Is it cheating if you deliberately define your personal identity such that the answer is _No_?

To which I now realize the correct answer is—_yes!_ Yes, it's cheating! Category-membership claims of the form "X is a Y" [represent hidden probabilistic inferences](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences); inferring that entity X is a member of category Y means [using observations about X to decide to use knowledge about members of Y to make predictions about features of X that you haven't observed yet](https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes). But this AI trick can only _work_ if the entities you've assigned to category Y are _actually_ similar—if they form a tight cluster in configuration space, such that using the center of the cluster to make predictions about unobserved features gets you _close_ to the right answer, on average. The rules don't change when the entity X happens to be "my female analogue" and the category Y happens to be "me".

The ordinary concept of "personal identity" tracks how the high-level features of individual human organisms are stable over time. You're going to want to model me-on-Monday and me-on-Thursday as "the same" person even if my Thursday-self woke up on the wrong side of bed and has three whole days of new memories. When interacting with my Thursday-self, you're going to be using your existing mental model of me, plus a diff for "He's grumpy" and "Haven't seen him in three days"—but that's a _very small_ diff, compared to the diff between me and some other specific person you know, or the diff between me and a generic human who you don't know.

In everyday life, we're almost never in doubt as to which entities we want to consider "the same" person (like me-on-Monday and me-on-Thursday), but we can concoct science-fictional thought experiments that force [the Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) to come up. What if you could _interpolate_ between two people—construct a human with a personality "in between" yours and mine, that had both or some fraction of each of our memories? (You know, like [Tuvix](https://memory-alpha.fandom.com/wiki/Tuvix_(episode)).) At what point on the spectrum would that person be me, or you, or both, or neither? (Derek Parfit has [a book](https://en.wikipedia.org/wiki/Reasons_and_Persons#Personal_identity) with lots of these.)

People _do_ change a lot over time; there _is_ a sense in which, in some contexts, we _don't_ want to say that a sixty-year-old is the "same person" they were when they were twenty—and forty years is "only" 4,870 three-day increments. But if a twenty-year-old were to be magically replaced with their sixty-year-old future self (not just superficially wearing an older body like a suit of clothing, but their brain actually encoding forty more years of experience and decay) ...
well, there's a reason I reached for the word "replace" (suggesting putting a _different_ thing in something's place) when describing the scenario. That's what Yudkowsky means by "the change is too sharp"—the _ordinary_ sense in which we model people as the "same person" from day to day (despite people having [more than one proton](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/) in a different place from day to day) has an implicit [Lipschitz condition](https://en.wikipedia.org/wiki/Lipschitz_continuity) buried in it, an assumption that people don't change _too fast_.

The thing about Sorites problems is that they're _incredibly boring_. The map is not the territory. The distribution of sand-configurations we face in everyday life is such that we usually have an answer as to whether the sand "is a heap" or "is not a heap", but in the edge-cases where we're not sure, arguing about whether to use the word "heap" _doesn't change the configuration of sand_. You might think that if [the category is blurry](https://www.lesswrong.com/posts/dLJv2CoRCgeC2mPgj/the-fallacy-of-gray), you therefore have some freedom to [draw its boundaries](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) the way you prefer—but [the cognitive function of the category is for making probabilistic inferences on the basis of category-membership](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries), and those probabilistic inferences can be quantitatively better or worse. [Preferences over concept definitions that aren't about maximizing predictive accuracy are therefore preferences _for deception_](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), because "making probability distributions less accurate in order to achieve some other goal" is exactly what _deception_ means.

That's why defining your personal identity to get the answer you want is cheating. If the answer you wanted was actually _true_, you could just say so without needing to _want_ it. When [Phineas Gage's](/2017/Dec/interlude-xi/) friends [said he was "no longer Gage"](https://en.wikipedia.org/wiki/Phineas_Gage) after the railroad accident, what they were trying to say was that interacting with post-accident Gage was _more relevantly similar_ to interacting with a stranger than it was to interacting with pre-accident Gage, even if Gage-the-physical-organism was contiguous along the whole stretch of spacetime.

Same principle when Yudkowsky wrote, "If I fell asleep and woke up as a true woman [...] I don't think I'd call her 'me'". The claim is that psychological sex differences are large enough to violate the Lipschitz condition imposed by our _ordinary_ concept of personal identity. Maybe he was wrong, but if so, that cashes out as being wrong _about_ how similar women and men actually are (which in principle could be operationalized and precisely computed, even if _we_ don't know how to make it precise), _not_ whether we prefer the "call her me" or "don't call her me" conclusion and want to _retroactively redefine the meaning of the words in order to make the claim come out "true."_

Do people ever really recover from being religious? I still endorse the underlying psychological motivation that makes me prefer the "call her me" conclusion, the _intention_ that made me think I could get away with defining it to be true.
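(To see concretely why the cheating can't work, here's a toy model in code; the category below, its center, and its spreads are numbers I made up purely for illustration.)

```python
# Toy sketch: a category earns its keep by licensing predictions.
# Inferring "X is a Y" means using Y's cluster to guess X's unobserved
# features; the guess is good exactly insofar as the cluster is tight.
import numpy as np

rng = np.random.default_rng(0)
center = np.array([5.0, 2.0, 8.0, 1.0])  # made-up feature means for category Y

for spread, label in [(0.3, "tight cluster"), (3.0, "loose cluster")]:
    members = center + rng.normal(0, spread, size=(1000, 4))
    x = members[0]                        # a member; features 2 and 3 unobserved
    guess = members[1:].mean(axis=0)[2:]  # predict them from the cluster
    print(label, "mean prediction error:", np.abs(guess - x[2:]).mean())
# Tight cluster: small error. Loose cluster: large error. Redrawing the
# category boundary doesn't move any of the points; it only changes which
# cluster center you consult, and therefore how wrong your guesses are.
```

Defining my female analogue to be "me" doesn't move either of us in configuration space; it only changes which cluster's statistics get consulted for the prediction, and thereby how wrong the prediction ends up being.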
Now that I don't believe that anymore—now that I can't take for granted that actual women aren't a somewhat unfathomable Other unto me—my world hasn't collapsed in the way religious people [tend to fear](https://www.lesswrong.com/posts/3XgYbghWruBMrPTAL/leave-a-line-of-retreat) when their most precious belief is threatened. It just means I have to do [a little more intellectual work](https://arbital.greaterwrong.com/p/rescue_utility) to figure out what's actually right. [People can stand what is true, for we are already doing so.](https://www.readthesequences.com/You-Can-Face-Reality)

-------

### Coda

> And Durham—the software puppet, the lifeless shell animated by a being from another plane—looked him in the eye and said, "You have to let me show you what you are."
>
> —_Permutation City_ by Greg Egan

Anyway, that—briefly (I mean it)—is the story about my weird obligate sex fantasy about being a woman and how I used to think that it was morally wrong to believe in psychological sex differences, but then I gradually changed my mind and decided that psychological sex differences are probably real after being deeply influenced by this robot-cult blog about the logic of Science. It's probably not that interesting?

If we were still living in the socio-political environment of 2009, I'm pretty sure I wouldn't be blogging about my weird sexual obsessions (as evidenced by the fact that, in 2009, I wasn't blogging about them). Imagine my surprise to discover that, in the current year, my weird sexual obsession is suddenly at the center of [one of the _defining political issues of our time_](https://en.wikipedia.org/wiki/Transgender_rights).

All this time—the dozen years I spent reading everything I could about sex and gender and transgender and feminism and evopsych and doing various things with my social presentation (sometimes things I regretted and reverted after a lot of pain, like the initials) to try to seem not-masculine—I had been _assuming_ that my gender problems were not of the same kind as people who were _actually_ transgender, because the standard narrative said that that was about people whose ["internal sense of their own gender does not match their assigned sex at birth"](https://www.vox.com/identities/21332685/trans-rights-pronouns-bathrooms-sports), whereas my thing was obviously at least partially an outgrowth of my weird sex fantasy—I had never interpreted the beautiful pure sacred self-identity thing as an "internal sense of my own gender". _Why would I?_ In the English of my youth, "gender" (as a single word, rather than part of the phrase "gender role") was understood as a euphemism for _sex_ for people who were squeamish about the potential ambiguity between _sex_-as-in-biological-sex and _sex_-as-in-intercourse. (Judging by this blog's domain name, I am not immune to this.) In that language, my "gender"—my sex—is male. Not because I'm necessarily happy about it (and I [used to](/2017/Jan/the-erotic-target-location-gift/) be pointedly insistent that I wasn't), but as an observable biological fact that, whatever my beautiful pure sacred self-identity feelings, _I am not delusional about_.

Okay, so trans people aren't delusional about their [developmental sex](/2019/Sep/terminology-proposal-developmental-sex/); the claim is that their internal sense of their own gender is in some sense more real or more relevant and should take precedence. So where does that leave me? This post is about my _own_ experiences, and not anyone else's (which I obviously don't have access to).
I've _mentioned_ transgenderedness a number of times in the main body of this post, but I've tried to cast it as an explanation that one might be tempted to apply to my case, but which I don't think fits. Everything I've said so far is _consistent_ with a world in which Ray Blanchard (who coined the obvious and perfect word for my thing while studying actual transsexuals) was dumb and wrong, a world where my idiosyncratic weird sex perversion and associated beautiful pure sacred self-identity feelings are taxonomically and etiologically distinct from whatever brain-intersex condition causes _actual_ trans women. That's the world I _thought_ I lived in for the ten years after encountering the obvious and perfect word.

My first clue that I wasn't living in that world came from—Eliezer Yudkowsky. (Well, not my first _clue_. In retrospect, there were lots of _clues_. My first wake-up call.) In [a 26 March 2016 Facebook post](https://www.facebook.com/yudkowsky/posts/10154078468809228), he wrote—

> I'm not sure if the following generalization extends to all genetic backgrounds and childhood nutritional backgrounds. There are various ongoing arguments about estrogenlike chemicals in the environment, and those may not be present in every country ...
>
> Still, for people roughly similar to the Bay Area / European mix, I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women.

(***!?!?!?!?***)

> A lot of them don't know it or wouldn't care, because they're female-minds-in-male-bodies but also cis-by-default (lots of women wouldn't be particularly disturbed if they had a male body; the ones we know as 'trans' are just the ones with unusually strong female gender identities). Or they don't know it because they haven't heard in detail what it feels like to be gender dysphoric, and haven't realized 'oh hey that's me'. See, e.g., and

(Reading _this_ post, I _did_ realize "oh hey that's me"—it's hard to believe that I'm not one of the "20% of the ones with penises" Yudkowsky is talking about here—but I wasn't sure how to reconcile that with the "are actually women" (***!?!?!?!?***) characterization, coming _specifically_ from the guy who taught me (in "Changing Emotions") how blatantly, ludicrously untrue and impossible that is.)

> But I'm kinda getting the impression that when you do normalize transgender generally and MtF particularly, like not "I support that in theory!" normalize but "Oh hey a few of my friends are transitioning and nothing bad happened to them", there's a _hell_ of a lot of people who come out as trans.
>
> If that starts to scale up, we might see a really, really interesting moral panic in 5-10 years or so. I mean, if you thought gay marriage was causing a moral panic, you just wait and see what comes next ...

Indeed—here we are five years later, and _I am panicking_. (As 2007–9 Sequences-era Yudkowsky [taught me](https://www.yudkowsky.net/other/fiction/the-sword-of-good), and 2016 Facebook-shitposting-era Yudkowsky seemed to ignore, the thing that makes a moral panic really interesting is how hard it is to know you're on the right side of it—and the importance of [panicking sideways](https://www.lesswrong.com/posts/erGipespbbzdG5zYb/the-third-alternative) [in policyspace](https://www.overcomingbias.com/2007/05/policy_tugowar.html) when the "maximize the number of trans people" and "minimize the number of trans people" coalitions are both wrong.)

At the time, this was merely _very confusing_.
I left a careful comment in the Facebook thread (with the obligatory "speaking only for myself; I obviously know that I can't say anything about anyone else's experience" [disclaimer](https://www.overcomingbias.com/2008/06/against-disclai.html)), quietly puzzled at what Yudkowsky could _possibly_ be thinking ...

A month later, I moved out of my mom's house in [Walnut Creek](https://en.wikipedia.org/wiki/Walnut_Creek,_California) to go live with a new roommate in an apartment on the correct side of the [Caldecott tunnel](https://en.wikipedia.org/wiki/Caldecott_Tunnel), in [Berkeley](https://en.wikipedia.org/wiki/Berkeley,_California): closer to other people in the robot-cult scene and with a shorter train ride to my coding dayjob in San Francisco. (I would later change my mind about which side of the tunnel is the correct one.)

In Berkeley, I met a number of really interesting people who seemed quite similar to me along a lot of dimensions, but also very different along some other dimensions having to do with how they were currently living their life! (I see where the pattern-matching facilities in Yudkowsky's brain got that 20% figure from.) This prompted me to do a little bit more reading in some corners of the literature that I had certainly _heard of_, but hadn't already mastered and taken seriously in the previous twelve years of reading everything I could about sex and gender and transgender and feminism and evopsych. (Kay Brown's blog, [_On the Science of Changing Sex_](https://sillyolme.wordpress.com/), was especially helpful.)

Between the reading, and a series of increasingly frustrating private conversations, I gradually became persuaded that Blanchard _wasn't_ dumb and wrong, that his taxonomy is _basically_ correct, at least as a first approximation.

So far this post has just been about _my_ experience, and not anyone's theory of transsexualism (which I had assumed for years couldn't possibly apply to me), so let me take a moment to explain the theory now. (With the caveated understanding that psychology is complicated and there's more to be said about what "as a first approximation" is even supposed to mean, but I need a few paragraphs to talk about the _simple_ version of the theory that makes _pretty good_ predictions on _average_, before I can elaborate on more complicated theories that might make even better predictions including on cases that diverge from average.)

The idea is that male-to-female transsexualism isn't actually one phenomenon; it's two completely different phenomena that don't actually have anything to do with each other, except for the (perhaps) indicated treatment of HRT, surgery, and social transition. (Compare to how different bacterial or viral diseases might happen to respond to the same drug.)

In one taxon, the "early-onset" type, you have same-sex-attracted males who have just been extremely feminine (in social behavior, interests, _&c._) their entire lives, in a way that causes huge social problems for them—the far tail of effeminate gay men who end up fitting into Society better as straight women. _That's_ where the "woman trapped inside a man's body" trope comes from. [This one probably _is_ a brain-intersex condition.](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3180619/)

That story is pretty intuitive.
Were an alien AI to be informed of the fact that, among humans, some fraction of males elect to undergo medical interventions to resemble females and aspire to be perceived as females socially, "brain-intersex condition such that they already behave like females" would probably be its top hypothesis for the cause of such behavior, just on priors.

Suppose our alien AI were to be informed that many of the human males seeking to become female (as far as the technology can manage, anyway) do _not_ fit the clinical profile of the early-onset type—it looks like there's a separate "late-onset" type or types. If you [didn't have enough data to _prove_ anything, but you had to guess](https://www.lesswrong.com/posts/xTyuQ3cgsPjifr7oj/faster-than-science), what would be your _second_ hypothesis for how this behavior might arise? What's the _usual_ reason for males to be obsessed with female bodies?

So, yeah. Basically, I think a _substantial majority_ of trans women under modern conditions in Western countries are, essentially, guys like me who were _less self-aware about what the thing actually is_.

So, I realize this is an inflammatory and (far more importantly) _surprising_ claim. Obviously, I don't have introspective access into other people's minds. If someone claims to have an internal sense of her own gender that doesn't match her assigned sex at birth, on what evidence could I _possibly_ have the _astounding_ arrogance to reply, "No, I think you're really just a perverted male like me"?

Actually, lots. To arbitrarily pick one particularly vivid exhibition, in April 2018, the [/r/MtF subreddit](https://www.reddit.com/r/MtF/) (which currently has 100,000 subscribers) [posted a link to a poll: "Did you have a gender/body swap/transformation "fetish" (or similar) before you realised you were trans?"](https://archive.is/uswsz). The [results of the poll](https://strawpoll.com/5p7y96x2/r): [_82%_ said Yes](/images/did_you_have-reddit_poll.png). [Top comment in the thread](https://archive.is/c7YFG), with 232 karma: "I spent a long time in the 'it's probably just a fetish' camp".

Certainly, 82% is not 100%! (But 82% is evidence for my claim that a _substantial majority_ of trans women under modern conditions in Western countries are essentially guys like me.) Certainly, you could argue that Reddit has a sampling bias such that poll results and karma scores from /r/MtF fail to match the distribution of opinion among real-world MtFs. But if you don't take the gender-identity story as an _axiom_ and [_actually look_](https://www.lesswrong.com/posts/SA79JMXKWke32A3hG/original-seeing) at the _details_ of what people say and do, these kinds of observations are _not hard to find_. You could [fill an entire subreddit with them](https://archive.is/ezENv) (and then move it to [independent](https://ovarit.com/o/ItsAFetish/) [platforms](https://saidit.net/s/itsafetish/) when the original gets [banned for "promoting hate"](https://www.reddit.com/r/itsafetish/)).

Reddit isn't "scientific" enough for you? Fine. The scientific literature says the same thing. [Blanchard 1985](/papers/blanchard-typology_of_mtf_transsexualism.pdf): 73% of non-exclusively androphilic transsexuals acknowledged some history of erotic cross-dressing. (Unfortunately, a lot of the classic studies specifically asked about cross-_dressing_, but the underlying desire isn't about clothes.)
[Lawrence 2005](/papers/lawrence-sexuality_before_and_after_mtf_srs.pdf): of trans women who had female partners before sexual reassignment surgery, 90% reported a history of autogynephilic arousal. [Smith _et al._ 2005](/papers/smith_et_al-transsexual_subtypes_clinical_and_theoretical_significance.pdf): 64% of non-homosexual MtFs (excluding the "missing" and "N/A" responses) reported arousal while cross-dressing during adolescence. [Nuttbrock _et al._ 2011](/papers/nuttbrock_et_al-a_further_assessment.pdf): lifetime prevalence of transvestic fetishism among non-homosexual MtFs was 69%. (For a more detailed literature review, see [Kay Brown's blog](https://sillyolme.wordpress.com/faq-on-the-science/) or the first two chapters of [Anne Lawrence's _Men Trapped in Men's Bodies: Narratives of Autogynephilic Transsexualism_](https://surveyanon.files.wordpress.com/2017/07/men-trapped-in-mens-bodies_book.pdf).)

Peer-reviewed scientific papers aren't enough for you? (They could be cherry-picked; there are lots of scientific journals, and no doubt a lot of bad science slips through the cracks of the review process.) Want something more indicative of a consensus among practitioners? Fine. The [_Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition_](https://en.wikipedia.org/wiki/DSM-5) (the definitive taxonomic handbook of the American Psychiatric Association) [says the same thing](https://sillyolme.wordpress.com/2021/02/06/american-psychiatric-association-supports-the-two-type-transsexual-taxonomy/) in [its section on gender dysphoria](/papers/DSM-V-gender_dysphoria_section.pdf) ([ICD-10-CM codes](https://en.wikipedia.org/wiki/ICD-10-CM) F64.1 and F64.2):

> In both adolescent and adult natal males, there are two broad trajectories for development of gender dysphoria: early onset and late onset. _Early-onset gender dysphoria_ starts in childhood and continues into adolescence and adulthood; or, there is an intermittent period in which the gender dysphoria desists and these individuals self-identify as gay or homosexual, followed by recurrence of gender dysphoria. _Late-onset gender dysphoria_ occurs around puberty or much later in life. Some of these individuals report having had a desire to be of the other gender in childhood that was not expressed verbally to others. Others do not recall any signs of childhood gender dysphoria. For adolescent males with late-onset gender dysphoria, parents often report surprise because they did not see signs of gender dysphoria in childhood. Adolescent and adult natal males with early-onset gender dysphoria are almost always sexually attracted to men (androphilic). Adolescents and adults with late-onset gender dysphoria **frequently engage in transvestic behavior with sexual excitement.**

(Bolding mine.)

[TODO: another clinical perspective: Dr. Will Powers https://www.facebook.com/strohl89/posts/10157396578969598 ]

Or consider Anne Vitale's ["The Gender Variant Phenomenon—A Developmental Review"](http://www.avitale.com/developmentalreview.htm), which makes the _same_ observations as Blanchard-and-friends and arrives at essentially the _same_ two-type taxonomy of MtF, but dressed up in socially-desirable language—

> As sexual maturity advances, Group Three, cloistered gender dysphoric boys, often combine excessive masturbation (one individual reported masturbating up to 5 and even 6 times a day) with an increase in secret cross-dressing activity to release anxiety.

Got that?
They _often combine excessive masturbation_ with an _increase in secret cross-dressing activity_ to _release anxiety_—their terrible, terrible _gender expression deprivation anxiety!_

Don't trust scientists or clinicians? Me neither! (Especially [not clinicians](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/).) Want first-person accounts from trans women themselves? Me too! And there's lots! Consider this passage from Deirdre McCloskey's memoir _Crossing_, writing in the third person about her decades identifying as a heterosexual crossdresser before transitioning at age 53:

> He had been doing it ten times a month through four decades, whenever possible, though in the closet. The quantifying economist made the calculation: _About five thousand episodes_. [...] At fifty-two Donald accepted crossdressing as part of who he was. True, if before the realization that he could cross all the way someone had offered a pill to stop the occasional cross-dressing, he would have accepted, since it was mildly distracting—though hardly time consuming. Until the spring of 1995 each of the five thousand episodes was associated with quick, male sex.

Or consider this passage from Julia Serano's _Whipping Girl_ (I know I [keep](/2017/Dec/lesser-known-demand-curves/) [referencing](/2020/Dec/crossing-the-line/) this book, but it's _so representative_ of the dominant strain of trans activism, and I'm never going to get over the [Fridge Logic](https://tvtropes.org/pmwiki/pmwiki.php/Main/FridgeLogic) of all [the blatant clues that I somehow missed in 2007](/2016/Sep/apophenia/))—

> There was also a period of time when I embraced the word "pervert" and viewed my desire to be female as some sort of sexual kink. But after exploring that path, it became obvious that explanation could not account for the vast majority of instances when I thought about being female in a nonsexual context.

"It became obvious that explanation could not account." I don't doubt Serano's reporting of her own phenomenal experiences, but "that explanation could not account" is _not an experience_; it's a _hypothesis_ about psychology, about the _causes_ of the experience. After having seen enough of these _laughable_ denials of autogynephilia, the main question in my mind has become less _Is the two-type feminine–androphilic/autogynephilic taxonomy of MtF transsexualism approximately true?_ (answer: yes, obviously) and more _How dumb do you (proponents of gender-identity theories) think we (the general public) are?_ (answer: very, but this assessment is accurate).

An important caveat must be made: [different causal/etiological stories could be compatible with the same _descriptive_ taxonomy.](/2021/Feb/you-are-right-and-i-was-wrong-reply-to-tailcalled-on-causality/) You shouldn't confuse my mere ridicule with a serious and rigorous critique of the strongest possible case for "gender expression deprivation anxiety" as a theoretical entity, which would be more work. But hopefully I've shown _enough_ work here that the reader can perhaps empathize with the temptation to resort to ridicule?

Everyone's experience is different, but the human mind still has a _design_. If I hurt my ankle while running and I (knowing nothing of physiology or sports medicine) think it might be a stress fracture, a competent doctor (who's studied the literature and seen many more cases) is going to ask follow-up questions about my experiences to pin down whether it's a stress fracture or a sprain.
I can't be wrong about the fact _that_ my ankle hurts (that's a privileged first-person experience), but I can easily be wrong about my _theory about_ why my ankle hurts. Even if human brains vary more than human ankles, the basic epistemological principle applies to a mysterious desire to be female. The question is, do the trans women whose reports I'm considering have a relevantly _different_ psychological condition than me, or do we have "the same" condition, and (at least) one of us is misdiagnosing it? The _safe_ answer—the answer that preserves everyone's current stories about themselves without any need for modification—is "different." That's what I thought before 2016. I think a lot of trans activists would say "the same". And on _that_ much, we can agree.

How weaselly am I being with these "approximately true" and "as a first approximation" qualifiers and hedges? I claim: not _more_ weaselly than anyone who tries to reason about psychology given the knowledge and methodology our civilization has managed to accumulate. Reality has a single level (physics), but [our models of reality have multiple levels](https://www.lesswrong.com/posts/gRa5cWWBsZqdFvmqu/reductive-reference). To get maximally precise predictions about everything, you would have to model the underlying quarks, _&c._, which is impossible. (As [it is](https://www.lesswrong.com/posts/tPqQdLCuxanjhoaNs/reductionism) [written](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression): the map is not the territory, but you can't roll up the territory and put it in your glove compartment.)

Psychology is very complicated; every human is their own unique snowflake, but it would be impossible to navigate the world using the "every human is their own unique _maximum-entropy_ snowflake; you can't make _any_ probabilistic inferences about someone's mind based on your experiences with other humans" theory. Even if someone were to _verbally endorse_ something like that—and at age sixteen, I might have—their brain is still going to go on making predictive inferences about people's minds using _some_ algorithm whose details aren't available to introspection. Much of this predictive machinery is going to be instinct bequeathed by natural selection (because predicting the behavior of conspecifics was very useful in the environment of evolutionary adaptedness), but some of it is the cultural accumulation of people's attempts to organize their experience into categories, clusters, diagnoses, taxons. (The cluster-learning capability is _also_ bequeathed by natural selection, of course, but it's worth distinguishing more "learned" from more "innate" content.)

There could be situations in psychology where a good theory (not a perfect theory, but a good theory to the precision that our theories about engineering bridges are good) would be described by a 70-node causal graph, but it turns out that some of [the more "important" variables in the graph happen to anti-correlate with each other](https://surveyanon.wordpress.com/2019/10/27/the-mathematical-consequences-of-a-toy-model-of-gender-transition/), such that stupid humans who don't know how to discover the correct 70-node graph do manage to pattern-match their way to a two-type typology that actually is better, as a first approximation, than pretending not to have a theory.
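(A minimal simulation, in the spirit of the linked toy model, makes the mechanism vivid; the variable names, threshold, and noise scale below are my own illustrative assumptions, not fitted to anything. The point is just that factors which are independent in the general population can anti-correlate sharply in a subpopulation selected on their combined effect.)

```python
# Toy sketch (illustrative assumptions mine, not the linked post's exact
# model): two causes that are independent in the general population become
# anti-correlated once you condition on their combined effect crossing a
# threshold, so the selected subpopulation appears to come in two types.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
factor_a = rng.normal(size=n)  # hypothetical cause #1 (say, femininity)
factor_b = rng.normal(size=n)  # hypothetical cause #2 (say, autogynephilia)

# Selection: only people whose total propensity crosses a threshold
# actually show up at the gender clinic.
selected = factor_a + factor_b + rng.normal(scale=0.5, size=n) > 2.5

print(np.corrcoef(factor_a, factor_b)[0, 1])
# ≈ 0.0 in the general population
print(np.corrcoef(factor_a[selected], factor_b[selected])[0, 1])
# markedly negative among the selected: scoring high on one factor
# tends to mean scoring low on the other
```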
No one matches any particular clinical-profile stereotype _exactly_, but [the world makes more sense when you have language for theoretical abstractions](https://astralcodexten.substack.com/p/ontology-of-psychiatric-conditions) like ["comas"](https://slatestarcodex.com/2014/08/11/does-the-glasgow-coma-scale-exist-do-comas/) or "depression" or "bipolar disorder"—or "autogynephilia". (In some sense it's a matter of "luck" when the relevant structure in the world happens to simplify so much; [friend of the blog](/tag/tailcalled/) Tailcalled argues that [there's no discrete typology for FtM](https://www.reddit.com/r/Blanchardianism/comments/jp9rmn/there_is_probably_no_ftm_typology/) as there is for the two types of MtF, because the various causes of gender problems in females vary more independently and aren't as stratified by age.)

So, if some particular individual trans woman writes down her life story, and swears up and down that she doesn't match the feminine/early-onset type, but _also_ doesn't empathize at all with the experiences I've grouped under the concept of "autogynephilia", I don't have any definitive knockdown proof with which to accuse her of lying, because I don't _know_ her, and the true diversity of human psychology is no doubt richer and stranger than my fuzzy low-resolution model of it. But [the fuzzy low-resolution model is _way too good_](https://surveyanon.wordpress.com/2019/04/27/predictions-made-by-blanchards-typology/) not to be pointing to _some_ regularity in the real world, and I expect honest people who are exceptions not well-predicted by the model to at least notice how well it performs on all the _non_-exceptions. If you're a magical third type of trans woman (where, again, _magical_ is a term of art indicating phenomena not understood) who isn't super-feminine but whose identity definitely isn't ultimately rooted in a fetish, [you should be _confused_](https://www.lesswrong.com/posts/5JDkW4MYXit2CquLs/your-strength-as-a-rationalist) by the 232 upvotes on that /r/MtF comment about the "it's probably just a fetish" camp—if the person who wrote that comment has experiences like yours, why did they ever single out "it's probably just a fetish" [as a hypothesis to pay attention to in the first place](https://www.lesswrong.com/posts/X2AD2LgtKgkRNPj2a/privileging-the-hypothesis)? And there's allegedly a whole "camp" of these people? What could _that_ possibly be about?!

I _do_ have a _lot_ of uncertainty about what the True Causal Graph looks like, even if it seems obvious that the two-type taxonomy coarsely approximates it. Gay femininity and autogynephilia are obviously very important nodes in the True Graph, but there's going to be more detail to the whole story: what _other_ factors influence people's decision to transition, including [incentives](/2017/Dec/lesser-known-demand-curves/) and cultural factors specific to a given place and time? Cultural attitudes towards men and maleness have shifted markedly in our feminist era. It feels gauche to say so, but ... as a result, conscientious boys taught to disdain the crimes of men may pick up an internalized misandry? I remember one night at the University in Santa Cruz when I had the insight that it was possible to make generalizations about groups of people while allowing for exceptions (in contrast to my previous stance that generalizations about people were _always morally wrong_)—and immediately, eagerly proclaimed that _men are terrible_.
Or consider computer scientist Scott Aaronson's account (in his infamous [Comment 171](https://www.scottaaronson.com/blog/?p=2091#comment-326664)) that his "recurring fantasy, through this period, was to have been born a woman, or a gay man [...] [a]nything, really, other than the curse of having been born a heterosexual male, which [...] meant being consumed by desires that one couldn't act on or even admit without running the risk of becoming an objectifier or a stalker or a harasser or some other creature of the darkness."

Or there's a piece that makes the rounds on social media occasionally: ["I Am A Transwoman. I Am In The Closet. I Am Not Coming Out"](https://medium.com/@jencoates/i-am-a-transwoman-i-am-in-the-closet-i-am-not-coming-out-4c2dd1907e42), which (in part) discusses the author's frustration at having one's feelings and observations dismissed on account of being perceived as a cis male. "I hate that the only effective response I can give to 'boys are shit' is 'well I'm not a boy,'" the author laments. And: "Do I even _want_ to convince someone who will only listen to me when they're told by the rules that they have to see me as a girl?" (The "told by the rules that they have to see me" (!) phrasing in the current revision is _very_ telling; [the originally published version](https://archive.is/trslp) said "when they find out I'm a girl".)

If boys are shit, and the rules say that you have to see someone as a girl if they _say_ they're a girl, that provides an incentive [on the margin](https://www.econlib.org/library/Enc/Marginalism.html) to disidentify with maleness. Like in another one of my teenage song-fragments—

> _Look in the mirror
> What's a_ white guy _doing there?
> I'm just a spirit
> I'm just a spirit
> Floating in air, floating in air, floating in air!_

This culturally-transmitted attitude could intensify the interpretation of autogynephilic attraction as an [ego-syntonic](https://en.wikipedia.org/wiki/Egosyntonic_and_egodystonic) beautiful pure sacred self-identity thing (rather than an ego-dystonic sex thing to be ashamed of), or be a source of gender dysphoria in males who aren't autogynephilic at all. To the extent that "cognitive" things like internalized misandry manifesting as cross-gender identification are common (or have _become_ more common in the recent cultural environment), maybe the two-type taxonomy isn't androphilic/autogynephilic so much as it is androphilic/"not-otherwise-specified": the early-onset type is very behaviorally distinct and has a very straightforward motive to transition (it would be _less_ weird not to); in contrast, it might not be as easy to distinguish autogynephilia from _other_ sources of gender problems in the grab-bag of all males showing up to the gender clinic for any other reason.

Whatever the True Causal Graph looks like—however my remaining uncertainty turns out to resolve in the limit of sufficiently advanced psychological science—I think I _obviously_ have more than enough evidence to reject the mainstream ["inner sense of gender"](https://www.drmaciver.com/2019/05/the-inner-sense-of-gender/) story as _not adding up_.

Okay, so the public narrative about transness is obviously, _obviously_ false. That's a problem, because almost no matter what you want, true beliefs are more useful than false beliefs for making decisions that get you what you want.
Fortunately, Yudkowsky's writing had brought together a whole community of brilliant people dedicated to refining the art of human rationality—the methods of acquiring true beliefs and using them to make decisions that get you what you want. So now that I _know_ the public narrative is obviously false, and that I have the outlines of a better theory (even though I could use a lot of help pinning down the details, and I don't know what the social policy implications are, because the optimal policy computation is a complicated value trade-off), all I _should_ have to do is carefully explain why the public narrative is delusional, and then because my arguments are so much better, all the intellectually serious people will either agree with me (in public), or at least be eager to _clarify_ (in public) exactly where they disagree and what their alternative theory is, so that we can move the state of humanity's knowledge forward together, in order to help the great common task of optimizing the universe in accordance with humane values. Of course, this is kind of a niche topic—if you're not a male with this psychological condition, or a woman who doesn't want to share all female-only spaces with them, you probably have no reason to care—but there are a _lot_ of males with this psychological condition around here! If this whole "rationality" subculture isn't completely fake, then we should be interested in getting the correct answers in public _for ourselves_. Men who fantasize about being women do not particularly resemble actual women! We just—don't? This seems kind of obvious, really? _Telling the difference between fantasy and reality_ is kind of an important life skill?! Notwithstanding that some males might want to make use of medical interventions like surgery and hormone replacement therapy to become facsimiles of women as far as our existing technology can manage, and that a free and enlightened transhumanist Society should support that as an option—and notwithstanding that _she_ is obviously the correct pronoun for people who _look_ like women—it's probably going to be harder for people to figure out what the optimal decisions are if no one is allowed to use language like "actual women" that clearly distinguishes the original thing from imperfect facsimiles?! The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2021 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialism that counts unpopularity or hurt feelings as costs), also [can't get heliocentrism right in 1633](https://en.wikipedia.org/wiki/Galileo_affair) [_for the same reason_](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to)—and I really doubt it can get AI alignment theory right in 2041. Or at least—even if there are things we can't talk about in public for consequentialist reasons and there's nothing to be done about it, you would hope that the censorship wouldn't distort our beliefs about the things we _can_ talk about (like, say, the role of Bayesian reasoning in the philosophy of language). 
Yudkowsky had written about [dark side epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) and [contagious lies](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies): trying to protect a false belief doesn't just mean being wrong about that one thing; it also gives you, on the object level, an incentive to be wrong about anything that would _imply_ the falsity of the protected belief—and, on the meta level, an incentive to be wrong _about epistemology itself_, about how "implying" and "falsity" work.

So, a striking thing about my series of increasingly frustrating private conversations and subsequent public Facebook meltdown (the stress from which soon landed me in psychiatric jail, but that's [another](/2017/Mar/fresh-princess/) [story](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/)) was the tendency for some threads of conversation to get _derailed_ on some variation of, "Well, the word _woman_ doesn't necessarily mean that," often with a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), a 2014 blog post by Scott Alexander, the _second_ most prominent writer in our robot cult.

This _really_ wasn't what I was trying to talk about; _I_ thought I was trying to talk about autogynephilia as an _empirical_ theory in psychology, the truth or falsity of which obviously cannot be altered by changing the meanings of words. Psychology is a complicated empirical science: no matter how "obvious" I might think something is, I have to admit that I could be wrong—not just as a formal profession of modesty, but _actually_ wrong in the real world. But this "I can define the word _woman_ any way I want" mind game? _That_ part was _absolutely_ clear-cut. That part of the argument, I knew I could win. [We had a whole Sequence about this](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) back in 'aught-eight, in which Yudkowsky pounded home this _exact_ point _over and over and over again_, that word and category definitions are _not_ arbitrary, because there are criteria that make some definitions _perform better_ than others as "cognitive technology"—

> ["It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)
> ["So that's another reason you can't 'define a word any way you like': You can't directly program concepts into someone else's brain."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)

> ["When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)

> ["There's an idea, which you may have noticed I hate, that 'you can define a word any way you like'."](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)

> ["And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying 'I can define a word any way I like'."](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression)

> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences)

> ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations)

> ["And this suggests another—yes, yet another—reason to be suspicious of the claim that 'you can define a word any way you like'. When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power."](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words)

> ["I say all this, because the idea that 'You can X any way you like' is a huge obstacle to learning how to X wisely. 'It's a free country; I have a right to my own opinion' obstructs the art of finding truth. 'I can define a word any way I like' obstructs the art of carving reality at its joints. And even the sensible-sounding 'The labels we attach to words are arbitrary' obstructs awareness of compactness."](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes)

> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace)

[TODO: contrast "... Not Man for the Categories" to "Against Lie Inflation"; when the topic at hand is how to define "lying", Scott Alexander has written exhaustively about the dangers of strategic equivocation ("Worst Argument", "Brick in the Motte"); insofar as I can get a _coherent_ position out of the conjunction of "... for the Categories" and Scott's other work, it's that he must think strategic equivocation is OK if it's for being nice to people https://slatestarcodex.com/2019/07/16/against-lie-inflation/ ]
for the Categories" and Scott's other work, it's that he must think strategic equivocation is OK if it's for being nice to people https://slatestarcodex.com/2019/07/16/against-lie-inflation/ ] So, because I trusted people in my robot cult to be dealing in good faith rather than fucking with me because of their political incentives, I took the bait. I ended up spending three years of my life re-explaining the relevant philosophy-of-language issues in exhaustive, _exhaustive_ detail. At first I did this in the object-level context of gender on this blog, in ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), and the ["Reply on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/). Later, after [Eliezer Yudkowsky joined in the mind games on Twitter in November 2018](https://twitter.com/ESYudkowsky/status/1067183500216811521) [(archived)](https://archive.is/ChqYX), I _flipped the fuck out_, and ended up doing more [stictly abstract philosophy-of-language work](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [on](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests) [the](https://www.lesswrong.com/posts/fmA2GJwZzYtkrAKYJ/algorithms-of-deception) [robot](https://www.lesswrong.com/posts/4hLcbXaqudM9wSeor/philosophy-in-the-darkest-timeline-basics-of-the-evolution)-[cult](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist) [blog](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception). An important thing to appreciate is that the philosophical point I was trying to make has _absolutely nothing to do with gender_. In 2008, Yudkowsky had explained that _for all_ nouns N, you can't define _N_ any way you want, because _useful_ definitions need to "carve reality at the joints." It [_follows logically_](https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization) that, in particular, if _N_ := "woman", you can't define the word _woman_ any way you want. Maybe trans women _are_ women! But if so—that is, if you want people to agree to that word usage—you need to be able to _argue_ for why that usage makes sense on the empirical merits; you can't just _define_ it to be true. And this is a _general_ principle of how language works, not something I made up on the spot in order to attack trans people. In 2008, this very general philosophy of language lesson was _not politically controversial_. If, in 2018–present, it _is_ politically controversial (specifically because of the fear that someone will try to apply it with _N_ := "woman"), that's a _problem_ for our whole systematically-correct-reasoning project! What counts as good philosophy—or even good philosophy _pedagogy_—shouldn't depend on the current year! There is a _sense in which_ one might say that you "can" define a word any way you want. That is: words don't have intrinsic ontologically-basic meanings. We can imagine an alternative world where people spoke a language that was _like_ the English of our world, except that they use the word "tree" to refer to members of the empirical entity-cluster that we call "dogs" and _vice versa_, and it's hard to think of a meaningful sense in which one convention is "right" and the other is "wrong". But there's also an important _sense in which_ we want to say that you "can't" define a word any way you want. 
If you want to teach people about the philosophy of language, you should want to convey _both_ of these lessons, against naïve essentialism, _and_ against naïve anti-essentialism. If the people who are widely respected and trusted [(almost worshipped)](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) as the leaders of the systematically-correct-reasoning community [_selectively_](https://www.lesswrong.com/posts/AdYdLP2sRqPMoe8fb/knowing-about-biases-can-hurt-people) teach _only_ the words-don't-have-intrinsic-ontologically-basic-meanings part when the topic at hand happens to be trans issues (because talking about the carve-reality-at-the-joints part would be [politically suicidal](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting)), then people who trust the leaders are likely to get the wrong idea about how the philosophy of language works—even if [the selective argumentation isn't _conscious_ or deliberate](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie) and [even if every individual sentence they say permits a true interpretation](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). (As it is written of the fourth virtue of evenness, ["If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider."](https://www.yudkowsky.net/rational/virtues))

_Was_ it a "political" act for me to write about the cognitive function of categorization on the robot-cult blog with non-gender examples, when gender was secretly ("secretly") my _motivating_ example? In some sense, I guess? But if so, the thing you have to realize is—

_Everyone else shot first_. That's not just my subjective perspective; the timestamps back me up here: my ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (February 2018) was a _response to_ Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (November 2014). My philosophy-of-language work on the robot-cult blog (April 2019–January 2021) was (stealthily) _in response to_ Yudkowsky's November 2018 Twitter thread. When I started trying to talk about autogynephilia with all my robot cult friends in 2016, I _did not expect_ to get dragged into a multi-year philosophy-of-language crusade! That was just _one branch_ of the argument-tree that, once begun, I thought should be easy to _definitively settle in public_ (within our robot cult, whatever the _general_ public thinks).

I guess by now the branch is as close to settled as it's going to get?
Alexander ended up [adding an edit note to the end of "... Not Man for the Categories" in December 2019](https://archive.is/1a4zV#selection-805.0-817.1), and Yudkowsky would [go on to clarify his position on the philosophy of language in September 2020](https://www.facebook.com/yudkowsky/posts/10158853851009228). So, that's nice.

[TODO: although I think even with the note, in practice, people are going to keep citing "... Not Man for the Categories" in a way that doesn't understand how the note undermines the main point]

But I will confess to being quite disappointed that the public argument-tree evaluation didn't get much further, much faster? The thing you have to understand about this whole debate is—

_I need the correct answer in order to decide whether or not to cut my dick off_.

As I've said, I _currently_ believe that cutting my dick off would be a _bad_ idea. But that's a cost–benefit judgement call based on many _contingent, empirical_ beliefs about the world. I'm obviously in the general _reference class_ of males who are getting their dicks cut off these days, and a lot of them seem to be pretty happy about it! I would be much more likely to go through with transitioning if I believed different things about the world—if I thought my beautiful pure sacred self-identity thing were a brain-intersex condition, or if I still believed in my teenage psychological-sex-differences denialism (such that there would be _axiomatically_ no worries about fitting in with "other" women after transitioning), or if I were more optimistic about the degree to which HRT and surgeries approximate an actual sex change.

In that November 2018 Twitter thread, [Yudkowsky wrote](https://archive.is/y5V9i):

> _Even if_ somebody went around saying, "I demand you call me 'she' and furthermore I claim to have two X chromosomes!", which none of my trans colleagues have ever said to me by the way, it still isn't a question-of-empirical-fact whether she should be called "she". It's an act.

This seems to suggest that gender pronouns in the English language as currently spoken don't have effective truth conditions. I think this is false _as a matter of cognitive science_. If someone told you, "Hey, you should come meet my friend at the mall, she is really cool and I think you'll like her," and then the friend turned out to look like me (as I am now), _you would be surprised_. (Even if people in Berkeley would socially punish you for _admitting_ that you were surprised.) The "she ... her" pronouns would prompt your brain to _predict_ that the friend would appear to be female, and that prediction would be _falsified_ by someone who looked like me (as I am now). Pretending that the social-norms dispute is about chromosomes was a _bullshit_ [weakmanning](https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/) move on the part of Yudkowsky, [who had once written that](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence) "[t]o argue against an idea honestly, you should argue against the best arguments of the strongest advocates[;] [a]rguing against weaker advocates proves _nothing_, because even the strongest idea will attract weak advocates." Thanks to the skills I learned from Yudkowsky's _earlier_ writing, I wasn't dumb enough to fall for it, but we can imagine someone otherwise similar to me who was, who might have thereby been misled into making worse life decisions.
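(To spell out the "prediction" framing with a toy score: treat the pronoun as a probabilistic forecast about what you'll observe, and charge it the squared error against what you actually observe. The probabilities below are my own illustrative assumptions, not anything from the thread.)

```python
# Toy sketch (illustrative numbers mine): if "she" functions as a forecast
# about appearance, we can score that forecast with its squared error
# (a Brier score) against the observed outcome.
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error of a probabilistic forecast against a 0/1 outcome."""
    return (forecast - outcome) ** 2

# Hearing "she is really cool ... I think you'll like her", your brain
# forecasts "the friend will appear female" with high probability, say 0.99.
forecast_from_she = 0.99

print(brier_score(forecast_from_she, 1))  # friend appears female: ≈ 0.0001
print(brier_score(forecast_from_she, 0))  # friend looks like me: ≈ 0.9801
```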
[TODO: make the above passage about the weakmanning move less aggressive about E.Y.'s motives, just focus on consequences for people like me who trust him]

If this "rationality" stuff is useful for _anything at all_, you would _expect_ it to be useful for _practical life decisions_ like _whether or not I should cut my dick off_. In order to get the _right answer_ to that policy question (whatever the right answer turns out to be), you need to _at minimum_ be able to get the _right answer_ on related fact-questions like "Is late-onset gender dysphoria in males an intersex condition?" (answer: no) and related philosophy-questions like "Can we arbitrarily redefine words such as 'woman' without adverse effects on our cognition?" (answer: no).

At the cost of _wasting three years of my life_, we _did_ manage to get the philosophy question right! Again, that's nice. But compared to the [Sequences-era dreams of changing the world](https://www.lesswrong.com/posts/YdcF6WbBmJhaaDqoD/the-craft-and-the-community), it's too little, too slow, too late. If our public discourse is going to be this aggressively optimized for _tricking me into cutting my dick off_ (independently of the empirical cost–benefit trade-off determining whether or not I should cut my dick off), that kills the whole project for me. I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?

Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?" But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_.

If you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. Obviously, the safety of the world does not _directly_ depend on being able to think clearly about trans issues. Similarly, the safety of a coal mine for humans does not _directly_ depend on [whether it's safe for canaries](https://en.wiktionary.org/wiki/canary_in_a_coal_mine): the dead canaries are just _evidence about_ properties of the mine relevant to human health. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".)

If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where "the community" _qua_ community doesn't exist at all.

In ["The Ideology Is Not the Movement"](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/) (April 2016), Alexander describes how the content of subcultures typically departs from the ideological "rallying flag" that they formed around.
[Sunni and Shia Islam](https://en.wikipedia.org/wiki/Shia%E2%80%93Sunni_relations) originally and ostensibly diverged on the question of who should rightfully succeed Muhammad as caliph, but modern-day Sunni and Shia who hate each other's guts aren't actually re-litigating a succession dispute from the 7th century C.E. Rather, pre-existing divergent social-group tendencies crystallized into distinct tribes by latching on to the succession dispute as a [simple membership test](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests). Alexander jokingly identifies the distinguishing feature of our robot cult as the belief that "Eliezer Yudkowsky is the rightful caliph": the Sequences were a rallying flag that brought together a lot of like-minded people to form a subculture with its own ethos and norms—among which Alexander includes "don't misgender trans people"—but the subculture emerged as its own entity that isn't necessarily _about_ anything outside itself.

No one seemed to notice at the time, but this characterization of our movement [is actually a _declaration of failure_](https://sinceriously.fyi/cached-answers/#comment-794). There's a word, "rationalist", that I've been trying to avoid in this post, because it's the subject of so much strategic equivocation, where the motte is "anyone who studies the ideal of systematically correct reasoning, general methods of thought that result in true beliefs and successful plans", and the bailey is "members of our social scene centered around Eliezer Yudkowsky and Scott Alexander". (Since I don't think we deserve the "rationalist" brand name, I had to choose something else to refer to [the social scene](https://srconstantin.github.io/2017/08/08/the-craft-is-not-the-community.html). Hence, "robot cult.")

What I would have _hoped_ for from a systematically correct reasoning community worthy of the brand name is one goddamned place in the whole goddamned world where _good arguments_ would propagate through the population no matter where they arose, "guided by the beauty of our weapons" ([following Scott Alexander](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/) [following Leonard Cohen](https://genius.com/1576578)). I think what actually happens is that people like Yudkowsky and Alexander rise to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else sort-of absorbs most of their worldview (plus noise and conformity with the local environment)—with the result that if Yudkowsky and Alexander _aren't interested in getting the right answer_ (in public)—because getting the right answer in public would be politically suicidal—then there's no way for anyone who didn't [win the talent lottery](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) to fix the public understanding by making better arguments. It makes sense for public figures to not want to commit political suicide!

[TODO: risk factor of people getting drawn into a subculture that claims to be about reasoning, but is actually very heavily optimized for cutting boys' dicks off. We already get a lot of shit for being right-wing; people use trans as political cover; https://twitter.com/yashkaf/status/1275524303430262790

> "Who cares about a blog for male nerd know-it-alls?"
> The two demographics most over-represented in the SlateStarCodex readership according to the surveys are transgender people and Ph.D. holders.
https://www.scottaaronson.com/blog/?p=5310
> Another thing that would've complicated the picture: the rationalist community's legendary openness to alternative gender identities and sexualities, before - ]

Someone asked me: "If we randomized half the people at [OpenAI](https://openai.com/) to use trans pronouns one way, and the other half to use it the other way, do you think they would end up with significantly different productivity?" But the thing I'm objecting to is a lot more fundamental than the specific choice of pronoun convention, which obviously isn't going to be uniquely determined. Turkish doesn't have gender pronouns, and that's fine. Naval ships traditionally take feminine pronouns in English, and it doesn't confuse anyone into thinking boats have a womb. [Many other languages are much more gendered than English](https://en.wikipedia.org/wiki/Grammatical_gender#Distribution_of_gender_in_the_world's_languages) (where pretty much only third-person singular pronouns are at issue). The conventions used in one's native language probably _do_ [color one's thinking to some extent](/2020/Dec/crossing-the-line/)—but when it comes to that, I have no reason to expect the overall design of English grammar and vocabulary "got it right" where Spanish or Russian "got it wrong."

What matters isn't the specific object-level choice of pronoun or bathroom conventions; what matters is having a culture where people _viscerally care_ about minimizing the expected squared error of our probabilistic predictions, even at the expense of people's feelings—[_especially_ at the expense of people's feelings](http://zackmdavis.net/blog/2016/09/bayesomasochism/).

I think looking at [our standard punching bag of theism](https://www.lesswrong.com/posts/dLL6yzZ3WKn8KaSC3/the-uniquely-awful-example-of-theism) is a very fair comparison. Religious people aren't _stupid_. You can prove theorems about the properties of [Q-learning](https://en.wikipedia.org/wiki/Q-learning) or [Kalman filters](https://en.wikipedia.org/wiki/Kalman_filter) at a world-class level without encountering anything that forces you to question whether Jesus Christ died for our sins. But [beyond technical mastery of one's narrow specialty](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory), there's going to be some competence threshold in ["seeing the correspondence of mathematical structures to What Happens in the Real World"](https://www.lesswrong.com/posts/sizjfDgCgAsuLJQmm/reply-to-holden-on-tool-ai) that _forces_ correct conclusions. I actually _don't_ think you can be a believing Christian and invent [the concern about consequentialists embedded in the Solomonoff prior](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/).

But the _same_ general parsimony-skill that rejects belief in an epiphenomenal "God of the gaps" that is verbally asserted to exist but will never face the threat of being empirically falsified, _also_ rejects belief in an epiphenomenal "gender of the gaps" that is verbally asserted to exist but will never face the threat of being empirically falsified.

In a world where sexual dimorphism didn't exist, where everyone was a hermaphrodite, "gender" wouldn't exist, either.
In a world where we _actually had_ magical perfect sex-change technology of the kind described in "Changing Emotions", people who wanted to change sex would do so, and everyone else would use the corresponding language (pronouns and more), _not_ as a courtesy, _not_ to maximize social welfare, but because it _straightforwardly described reality_.

In a world where we don't _have_ magical perfect sex-change technology, but we _do_ have hormone replacement therapy and various surgical methods, you actually end up with _four_ clusters: females, males, masculinized females (a.k.a. trans men), and feminized males (a.k.a. trans women).

[TODO: explain that "I don't do policy."]

The thing I'm objecting to is this _culture of narcissistic Orwellian mind games_ that thinks people have the right to _dictate other people's model of reality_.

[TODO: my friends should exist, but this culture is nuts https://twitter.com/caraesten/status/1092472430465929216

> damn one extremely bad way to start my day is having the receptionist at slack say I look like a male celebrity
> I'm so mad. wow. like. I look like this right now, how could anyone ever think that was an okay thing to say???

It was a compliment! I don't _want_ people to have to doublethink around their perceptions of me, pretend not to notice

Ziz's complaint that I'm siding with the oppressors in conceptual warfare

the political incentives propagate recursively, a phase transition: in a culture where it's normal for AGP males to transition, any subculture where they don't is subject to attack as transphobic

I want to stay aligned with _actual women_, many of whom have an interest in excluding me and Ziz ]

[I have seen the destiny of my neurotype, and am putting forth a convulsive effort to wrench it off its path. My weapon is clear writing.](https://www.lesswrong.com/posts/i8q4vXestDkGTFwsc/human-evil-and-muddled-thinking)

My sisters! I don't hate you! I'm really jealous of you in a lot of ways, even if I'm not following the same path—not just yet, probably not in this life. But [for the protection](https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect) of everything we hold sacred, _you have to let me show you what you are_.