(I guess I can't evade responsibility for the fact that I am, in fact, blogging about this.)

A clue: when I'm masturbating, and imagining all the forms I would take if the magical transformation technology were real (the frame story can vary, but the basic idea is always the same), I don't think I'm very _good_ at first-person visualization? The _content_ of the fantasy is about _me_ being a woman (I mean, having a woman's body), but the associated mental imagery mostly isn't the first-person perspective I would actually experience if the fantasy were real; I'm mostly imagining a specific woman (which one varies a lot) from the outside, admiring her face, and her voice, and her breasts, but wanting the soul behind those eyes to be _me_. Wanting _my_ body to be shaped like _that_, to be in control of that avatar of beauty.

If the magical transformation technology were real, I would want a mirror. (And in the real world, I would probably crossdress a _lot_ more often, if I could pass to myself in the mirror.)
> Is it cheating if you deliberately define your personal identity such that the answer is _No_?

To which I now realize the correct answer is—_yes!_ Yes, it's cheating! Category-membership claims of the form "X is a Y" [represent hidden probabilistic inferences](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences); inferring that entity X is a member of category Y means [using observations about X to decide to use knowledge about members of Y to make predictions about features of X that you haven't observed yet](https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes). But this AI trick can only _work_ if the entities you've assigned to category Y are _actually_ similar—if they form a tight cluster in configuration space, such that using the center of the cluster to make predictions about unobserved features gets you _close_ to the right answer, on average.
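(A toy sketch of the trick, in case it helps: everything here, from the feature values to the cluster sizes, is invented for illustration, not taken from the linked posts. Predicting a member's unobserved feature from the center of its cluster works precisely to the extent that the cluster is tight.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Each row is one entity: (observed feature, unobserved feature).
# A tight cluster: the members of this category really are similar.
tight = rng.normal(loc=[5.0, 5.0], scale=0.1, size=(1000, 2))
# A loose "cluster": a verbal convenience, not a statistical regularity.
loose = rng.normal(loc=[5.0, 5.0], scale=3.0, size=(1000, 2))

for name, cluster in [("tight", tight), ("loose", loose)]:
    x = cluster[0]                    # entity X, a member of category Y
    others = cluster[1:]              # the other known members of Y
    prediction = others[:, 1].mean()  # predict from the cluster's center
    print(f"{name}: predicted {prediction:.2f}, actual {x[1]:.2f}")
```
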
The rules don't change when the entity X happens to be "my female analogue" and the category Y happens to be "me". The ordinary concept of "personal identity" tracks how the high-level features of individual human organisms are stable over time. You're going to want to model me-on-Monday and me-on-Thursday as "the same" person even if my Thursday-self woke up on the wrong side of bed and has three whole days of new memories. When interacting with my Thursday-self, you're going to be using your existing mental model of me, plus a diff for "He's grumpy" and "Haven't seen him in three days"—but that's a _very small_ diff, compared to the diff between me and some other specific person you know, or the diff between me and a generic human who you don't know.
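(Again as a toy sketch, with all the feature names invented for illustration: if your model of a person is a bundle of predicted features, then my Thursday-self is my Monday-self plus a two-entry patch, whereas a different person is mostly a rewrite.)

```python
def model_diff(old, new):
    """The features you'd have to update to turn model `old` into model `new`."""
    keys = old.keys() | new.keys()
    return {k: (old.get(k), new.get(k)) for k in keys if old.get(k) != new.get(k)}

monday_me = {"mood": "cheerful", "memories_through": "Monday",
             "native_language": "English", "profession": "programmer",
             "favorite_color": "purple"}

# Me-on-Thursday: the same model, plus a very small diff.
thursday_me = dict(monday_me, mood="grumpy", memories_through="Thursday")

# Some other specific person you know: a much bigger diff.
someone_else = {"mood": "cheerful", "memories_through": "Thursday",
                "native_language": "French", "profession": "sculptor",
                "favorite_color": "green"}

print(len(model_diff(monday_me, thursday_me)))   # 2 features changed
print(len(model_diff(monday_me, someone_else)))  # 4 of 5 features differ
```
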
In everyday life, we're almost never in doubt as to which entities we want to consider "the same" person, but we can concoct science-fictional thought experiments that force [the Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) to come up. What if you could "merge" two people—construct a human with a personality "in between" yours and mine, that had both of our memories? (You know, like [Tuvix](https://memory-alpha.fandom.com/wiki/Tuvix_(episode)).) Would that person be me, or you, or both, or neither? (Derek Parfit has [a whole _book_](https://en.wikipedia.org/wiki/Reasons_and_Persons#Personal_identity) full of these.)
The thing about Sorites problems is that [they're _incredibly boring_](https://www.lesswrong.com/posts/dLJv2CoRCgeC2mPgj/the-fallacy-of-gray). The map is not the territory: "heap" is a word in the map, not a feature of the territory, and the distribution of sand-configurations we face in everyday life is such that we usually have a clear answer as to whether the sand "is a heap" or "is not a heap."
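(A toy illustration of that claim about the everyday distribution, with all the numbers invented: if the sand-configurations we actually encounter are strongly bimodal, then any category boundary drawn in the sparse middle classifies almost every case identically, and the borderline cases that power the paradox almost never come up.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented model of everyday sand encounters: either a few scattered
# grains or a deliberately made pile, with almost nothing in between.
scattered = rng.poisson(lam=4, size=50_000)
piles = rng.poisson(lam=5_000, size=50_000)
grains = np.concatenate([scattered, piles])

# Any category boundary drawn in the empty middle gives the same verdicts.
for boundary in (50, 200, 1_000):
    heap_fraction = np.mean(grains >= boundary)
    print(f"boundary={boundary}: {heap_fraction:.1%} classified as heaps")
# Each prints ~50.0%: the boundary's exact location almost never matters.
```
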
But a person's features can change drastically enough that he really does leave the cluster, and the "same person" prediction machinery stops working. That's what [Phineas Gage's](/2017/Dec/interlude-xi/) friends meant when [they said he was "no longer Gage"](https://en.wikipedia.org/wiki/Phineas_Gage) after the railroad accident.
The map is not the territory: whether a categorization is any good is a question of [where to draw the boundaries](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries).