+By this I don't mean that the _content_ of Yudkowskian rationalism is much like Christianity or Buddhism. But whether or not there is a God or a Divine (there is not), the _features of human psychology_ that make Christianity or Buddhism adaptive memeplexes are still going to be active. If the God-shaped hole in my head can't not be filled by _something_, it's better to fill it with a "religion" _about good epistemology_, one that can _reflect_ on the fact that beliefs that are adaptive memeplexes are not therefore true, and Yudkowsky's writings on the hidden Bayesian structure of the universe were a potent way to do that. It seems fair to compare my tendency to write in Sequences links to a devout Christian's tendency to quote Scripture by chapter and verse; the underlying mental motion of "appeal to the holy text" is probably pretty similar. My only defense is that _my_ religion is _actually true_ (and that my religion says you should read the texts and think it through for yourself, rather than taking anything on "faith").
+
+That's the context in which my happy-price email thread ended up including the sentence, "I feel awful writing _Eliezer Yudkowsky_ about this, because my interactions with you probably have disproportionately more simulation-measure than the rest of my life, and do I _really_ want to spend that on _this topic_?" (Referring to the idea that, in a sufficiently large universe where many subjectively-indistinguishable copies of everyone exist, including inside of future superintelligences running simulations of the past, there would plausibly be _more_ copies of my interactions with Yudkowsky than of other moments of my life, on account of that information being of greater decision-relevance to those superintelligences.)
+
+I say all this to emphasize just how much Yudkowsky's opinion meant to me. If you were a devout Catholic, and something in the Pope's latest encyclical seemed wrong according to your understanding of Scripture, and you had the opportunity to talk it over with the Pope for a measly $1000, wouldn't you take it? Of course you would!
+
+Anyway, I can't talk about the results of my happy price inquiry (whether he accepted the offer and a conversation occurred, or what was said if it did occur). The rule I think I should follow for telling this Whole Dumb Story is that while I have complete freedom to talk about _my_ actions and things that happened in public, I'm not allowed to divulge information about what Yudkowsky may or may not have said in private conversations that may or may not have occurred: even without an explicit secrecy promise, people might be less forthcoming in private conversations if they knew that you might blog about them later. Personally, I think most people are _way_ too paranoid about this, and I often wish I could just say what relevant things I know without worrying about whether it might infringe on someone's "privacy", but I'm eager to cooperate with widely-held norms even if I personally think they're dumb.
+
+(Incidentally, it was also around this time that I snuck a copy of _Men Trapped in Men's Bodies_ into the [MIRI](https://intelligence.org/) office library, which community members could sometimes visit. It seemed like something Harry Potter-Evans-Verres would do—and ominously, I noticed, not like something Hermione Granger would do.)
+
+[TODO: Scott linked to Kay Brown as part of his links post and got pushback
+https://slatestarcodex.com/2016/11/01/links-1116-site-unseen/
+https://slatestarscratchpad.tumblr.com/post/152736458066/hey-scott-im-a-bit-of-a-fan-of-yours-and-i]
+
+[TODO: I posted to /r/gendercritical (post the full text in an ancillary page; it's currently in my "Collective Debt, Collective Shame" draft)
+
+The first comment was "You are a predator." ... I'm not sure what I was expecting. I spent part of Christmas Day crying.]
+
+Gatekeeping sessions finished, I finally started HRT at the end of December 2016. Earlier that month, in an effort to keep my anti–autogynephilia-denialism crusade from taking over my life, I [promised myself](/ancillary/a-broken-promise/) (and [published the SHA256 hash of the promise](https://www.facebook.com/zmdavis/posts/10154596054540199) to signal that I was Serious) not to comment on gender issues under my real name through June 2017—_that_ was what my new pseudonymous blog was for.
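+(For readers unfamiliar with the trick, publishing a hash is a standard commit-and-reveal scheme: you post the digest now and reveal the preimage later, which proves the text existed and wasn't altered in the interim. A minimal sketch in Python, with a hypothetical promise text standing in for the real one:)
+
+```python
+import hashlib
+
+# Hypothetical stand-in for the actual promise text, which was
+# published only as its SHA256 digest.
+promise = b"I will not comment on gender issues under my real name through June 2017."
+
+# Publish this digest now; it's infeasible to invert it or to find
+# a different text with the same hash.
+commitment = hashlib.sha256(promise).hexdigest()
+print(commitment)
+
+# Later, reveal `promise`; anyone can re-hash it and check for a match.
+```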
+
+... the promise didn't take. There was just too much gender-identity nonsense on my Facebook feed; I _had_ to push back on some of it.
+
+"Folks, I'm not sure it's feasible to have an intellectually-honest real-name public conversation about the etiology of MtF," I wrote in one thread. "If no one is willing to mention some of the key relevant facts, maybe it's less misleading to just say nothing."
+
+As a result of that, I got a PM from a woman whose marriage had fallen apart after (among other things) her husband transitioned. She told me about the parts of her husband's story that had never quite made sense to her (but which sounded like a textbook case from my reading). In her telling, the husband was always more emotionally tentative and less comfortable with the standard gender role and status stuff, but in the way of a geeky nerd guy, not in the way of someone feminine. He was into crossdressing sometimes, but she had thought that was just a weird and insignificant kink, not that he didn't like being a man—until they moved to the Bay Area and he fell in with a social-justicey crowd. When I linked her to Kay Brown's article on ["Advice for Wives and Girlfriends of Autogynephiles"](https://sillyolme.wordpress.com/advice-for-wivesgirlfriends-of-autogynephiles/), her response was, "Holy shit, this is _exactly_ what happened with me." It was nice to make a friend over shared heresy.
+
+[TODO: the story of my Facebook crusade, going off the rails, getting hospitalized
+/2017/Mar/fresh-princess/
+/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/
+]
+
+A striking pattern from my attempts to argue with people about the two-type taxonomy was the tendency for the conversation to get derailed on some variation of "Well, the word _woman_ doesn't necessarily mean that," often with a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), a 2014 post by Scott Alexander arguing that because categories exist in our model of the world rather than the world itself, there's nothing wrong with simply _defining_ trans people to be their preferred gender, in order to alleviate their dysphoria.
+
+This ... really wasn't what I was trying to talk about. _I_ thought I was trying to talk about autogynephilia as an _empirical_ theory of psychology, the truth or falsity of which obviously cannot be altered by changing the meanings of words.
+
+Psychology is a complicated empirical science: no matter how "obvious" I might think something is, I have to admit that I could be wrong—[not just as an obligatory profession of humility, but _actually_ wrong in the real world](https://www.lesswrong.com/posts/GrDqnMjhqoxiqpQPw/the-proper-use-of-humility). If my fellow rationalists weren't sold on the autogynephilia and transgender thing, I might be a bit disappointed, but that wouldn't be grounds to denounce the entire community as a failure or a fraud.
+
+But this "I can define the word _woman_ any way I want" mind game? _That_ part was _absolutely_ clear-cut. That part of the argument, I knew I could win.
+
+To be clear, it's _true_ that categories exist in our model of the world, rather than the world itself—the "map", not the "territory"—and it's true that trans women might be women _with respect to_ some genuinely useful definition of the word "woman." However, the Scott Alexander piece that people kept linking me to goes further, claiming that we can redefine gender categories _in order to make trans people feel better_:
+
+> I ought to accept an unexpected man or two deep inside the conceptual boundaries of what would normally be considered female if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.
+
+But this is just wrong. Categories exist in our model of the world _in order to_ capture empirical regularities in the world itself: the map is supposed to _reflect_ the territory, and there _are_ "rules of rationality" governing what kinds of word and category usages correspond to correct probabilistic inferences. [We had a whole Sequence about this](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) back in 'aught-eight. Alexander cites [a post](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside) from that Sequence in support of the (true) point about how categories are "in the map" ... but if you actually read the Sequence, another point that Yudkowsky pounds home _over and over and over again_ is that word and category definitions are nevertheless _not_ arbitrary, because there are criteria that make some definitions _perform better_ than others as "cognitive technology"—
+
+> ["It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)
+
+> ["So that's another reason you can't 'define a word any way you like': You can't directly program concepts into someone else's brain."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)
+
+> ["When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)
+
+> ["There's an idea, which you may have noticed I hate, that 'you can define a word any way you like'."](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)
+
+> ["And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying 'I can define a word any way I like'."](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression)
+
+> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences)
+
+> ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations)
+
+> ["And this suggests another—yes, yet another—reason to be suspicious of the claim that 'you can define a word any way you like'. When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power."](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words)
+
+> ["I say all this, because the idea that 'You can X any way you like' is a huge obstacle to learning how to X wisely. 'It's a free country; I have a right to my own opinion' obstructs the art of finding truth. 'I can define a word any way I like' obstructs the art of carving reality at its joints. And even the sensible-sounding 'The labels we attach to words are arbitrary' obstructs awareness of compactness."](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes)
+
+> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace)