If I have some objective inner female gender as the result of a brain-intersex condition, then getting on, and _staying_ on, feminizing hormone replacement therapy (HRT) would presumably be a good idea specifically because my brain is designed to "run on" estrogen. But if my beautiful pure sacred self-identity feelings are fundamentally a misinterpretation of misdirected _male_ sexuality, then it's not clear that I _want_ the psychological effects of HRT: if there were some unnatural way to give me a female body (or just a more female-_like_ one) _without_ messing with my internal neurochemistry, that would actually be _desirable_.

Or, you might think that if the desire is just a confusion in male sexuality, maybe real-life body-modding _wouldn't_ be desirable? Maybe autogynephilic men _think_ they want female bodies, but if they actually transitioned in real life (as opposed to just having incompetently non-realistic daydreams about it all day and especially while masturbating), they would feel super-dysphoric about it, because (and which proves that) they're just perverted men, and not actual trans women, which are a different thing. You might think so!

But, empirically, I did grow (small) breasts as a result of [my five-month HRT experiment](/2017/Sep/hormones-day-156-developments-doubts-and-pulling-the-plug-or-putting-the-cis-in-decision/), and I think it's actually been a (small) quality-of-life improvement for approximately the reasons I expected going in. I just—like the æsthetic?—and wanted it to be part of _my_ æsthetic, and now it is, and I don't quite remember what my chest was like before, kind of like how I don't quite remember what it was like to have boy-short hair before I grew out my signature beautiful–beautiful ponytail. (Though I'm _still_ [kicking myself for not](/2017/Nov/laser-1/) taking a bare-chested "before" photo.) I don't see any particular reason to believe this experience wouldn't replicate all the way down the [slope of interventions](/2017/Jan/the-line-in-the-sand-or-my-slippery-slope-anchoring-action-plan/).

Fundamentally, I think I'm making _better decisions_ for myself by virtue of having an accurate model of what's actually going on with me—a model that draws all these fine mental distinctions using the everything-is-a-vector-space skill, such that I can talk about my paraphilic desire to be shaped like a woman without wanting to actually be a woman, similarly to how the _verthandi_ in "Failed Utopia #4-2" aren't actually women.

If the _actual_ desire implemented in one's actual brain in the real physical universe takes the form of (roughly translating from desire into English) "You know, I kind of want my own breasts (_&c._)", it may be weird and perverted to _admit_ this and act on it (!!)—but would it be any _less_ weird and perverted to act on it under the false (in my case) pretense of an invisible female gender identity? If you know what the thing is, can it be any worse to just _own it_?

[TODO: smoother transition to the discussion of personal identity; my old view is that gender identity is sexist (because psych. sex differences are fake/minimal); my new view is that brain sex is real and that I'm male]

In "Changing Emotions", Yudkowsky wrote—

> If I fell asleep and woke up as a true woman—not in body, but in brain—I don't think I'd call her "me". The change is too sharp, if it happens all at once.

In the comments, [I wrote](https://www.greaterwrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions/comment/4pttT7gQYLpfqCsNd)—

> Is it cheating if you deliberately define your personal identity such that the answer is _No_?

To which I now realize the correct answer is—_yes!_ Yes, it's cheating! Category-membership claims of the form "X is a Y" [represent hidden probabilistic inferences](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences); inferring that entity X is a member of category Y means [using observations about X to decide to use knowledge about members of Y to make predictions about features of X that you haven't observed yet](https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes). But this AI trick can only _work_ if the entities you've assigned to category Y are _actually_ similar—if they form a tight cluster in configuration space, such that using the center of the cluster to make predictions about unobserved features gets you _close_ to the right answer, on average.
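
To make the inference mechanics concrete (a minimal sketch of my own in Python, with invented feature values; nothing here is from the linked posts): predicting an entity's unobserved features from its category membership amounts to predicting from the center of the category's cluster, and the prediction is only as good as the cluster is tight.

```python
import numpy as np

# Hypothetical members of category Y: rows are entities, columns are
# features (all numbers invented for illustration).
members_of_Y = np.array([
    [1.0, 2.1, 0.9],
    [1.1, 1.9, 1.0],
    [0.9, 2.0, 1.1],
])

# Having inferred "X is a Y", predict X's unobserved features from the
# center of the cluster.
prediction_for_X = members_of_Y.mean(axis=0)

# The average distance from the center measures how tight the cluster
# is, and therefore how far off the center-based prediction can be.
spread = np.linalg.norm(members_of_Y - prediction_for_X, axis=1).mean()
print(prediction_for_X, spread)
```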

The rules don't change when the entity X happens to be "my female analogue" and the category Y happens to be "me". The ordinary concept of "personal identity" tracks how the high-level features of individual human organisms are stable over time. You're going to want to model me-on-Monday and me-on-Thursday as "the same" person even if my Thursday-self woke up on the wrong side of bed and has three whole days of new memories. When interacting with my Thursday-self, you're going to be using your existing mental model of me, plus a diff for "He's grumpy" and "Haven't seen him in three days"—but that's a _very small_ diff, compared to the diff between me and some other specific person you know, or the diff between me and a generic human who you don't know.
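
(Putting invented numbers on the "diff" picture, as a sketch: if you imagine psychological profiles as feature vectors, the diff between my Monday-self and my Thursday-self is small compared to the diff between me and anybody else.)

```python
import numpy as np

# Hypothetical psychological feature vectors (all values invented).
me_monday   = np.array([5.0, 3.0, 7.0, 2.0])
me_thursday = np.array([5.1, 2.7, 7.0, 2.0])  # grumpy, three days of new memories
a_stranger  = np.array([1.0, 8.0, 2.0, 6.0])

print(np.linalg.norm(me_thursday - me_monday))  # small diff: "the same" person
print(np.linalg.norm(a_stranger - me_monday))   # large diff: somebody else
```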

In everyday life, we're almost never in doubt as to which entities we want to consider "the same" person (like me-on-Monday and me-on-Thursday), but we can concoct science-fictional thought experiments that force [the Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) to come up. What if you could "merge" two people—construct a human with a personality "in between" yours and mine, that had both of our memories? (You know, like [Tuvix](https://memory-alpha.fandom.com/wiki/Tuvix_(episode)).) Would that person be me, or you, or both, or neither? (Derek Parfit has [a book](https://en.wikipedia.org/wiki/Reasons_and_Persons#Personal_identity) with lots of these.)

[TODO: change scenario to interpolate between people, _at what point_ does it become]

People _do_ change a lot over time; there _is_ a sense in which, in some contexts, we _don't_ want to say that a sixty-year-old is the "same person" they were when they were twenty—and forty years is "only" 4,870 three-day increments. But if a twenty-year-old were to be magically replaced with their sixty-year-old future self (not just superficially wearing an older body like a suit of clothing, but their brain actually encoding forty more years of experience and decay) ... well, there's a reason I reached for the word "replace" (suggesting putting a _different_ thing in something's place) when describing the scenario. That's what Yudkowsky means by "the change is too sharp"—the _ordinary_ sense in which we model people as the "same person" from day to day (despite people having [more than one proton](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/) in a different place from day to day) has an implicit [Lipschitz condition](https://en.wikipedia.org/wiki/Lipschitz_continuity) buried in it.
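
Spelled out (my own formalization, not Yudkowsky's; the state function and metric are hypothetical), the buried assumption is something like:

```latex
% Sketch of the implicit Lipschitz condition on personal identity:
% S(t) is a person's psychological state at time t, d is some
% (hypothetical) distance metric on such states, and K is small.
d(S(t_1), S(t_2)) \le K \, \lvert t_1 - t_2 \rvert
```

Magically waking up as one's sixty-year-old self violates the bound: forty years' worth of change in d arrives while |t_1 − t_2| is a single night.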

The thing about Sorites problems is that they're _incredibly boring_. The map is not the territory. The distribution of sand-configurations we face in everyday life is such that we usually have an answer as to whether the sand "is a heap" or "is not a heap", but in the edge-cases where we're not sure, arguing about whether to use the word "heap" _doesn't change the configuration of sand_. You might think that if [the category is blurry](https://www.lesswrong.com/posts/dLJv2CoRCgeC2mPgj/the-fallacy-of-gray), you therefore have some freedom to [draw its boundaries](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) the way you prefer—but [the cognitive function of the category is for making probabilistic inferences on the basis of category-membership](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries), and those probabilistic inferences can be quantitatively better or worse. Preferences over concept definitions that aren't about maximizing predictive accuracy are therefore preferences _for deception_, because "making probability distributions less accurate in order to achieve some other goal" is exactly what _deception_ means.
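
As a toy demonstration that these inferences really can be quantitatively better or worse (my own sketch, with invented "heap" data): predictions from a category's center get measurably worse when the boundary is drawn to include dissimilar things.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical tight clusters in a two-dimensional feature space.
heaps     = rng.normal(loc=10.0, scale=1.0, size=(500, 2))
non_heaps = rng.normal(loc=0.0,  scale=1.0, size=(500, 2))

def mean_error(points, center):
    """Average distance between points and the center used to predict them."""
    return np.linalg.norm(points - center, axis=1).mean()

# Honest boundary: predict heaps from the heap cluster's own center.
honest_error = mean_error(heaps, heaps.mean(axis=0))

# Gerrymandered boundary: insist that a hundred non-heaps "count as"
# heaps, and pay for it in predictive accuracy.
gerrymandered = np.vstack([heaps, non_heaps[:100]])
gerrymandered_error = mean_error(gerrymandered, gerrymandered.mean(axis=0))

print(honest_error, gerrymandered_error)  # the second number is larger
```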

That's why defining your personal identity to get the answer you want is cheating. If the answer you wanted were actually _true_, you could just say so without needing to _want_ it.

When [Phineas Gage's](/2017/Dec/interlude-xi/) friends [said he was "no longer Gage"](https://en.wikipedia.org/wiki/Phineas_Gage) after the railroad accident, what they were trying to say was that interacting with post-accident Gage was _more relevantly similar_ to interacting with a stranger than to interacting with pre-accident Gage, even though Gage-the-physical-organism was contiguous along the whole stretch of spacetime.