On my reading of the text, it is _significant_ that the AI-synthesized complements for men are given their own name, the _verthandi_, rather than just being referred to as women. The _verthandi_ may _look like_ women, they may be _approximately_ psychologically human, but the _detailed_ psychology of "superintelligently-engineered optimal romantic partner for a human male" is not going to come out of the distribution of actual human females, and judicious exercise of the [tenth virtue of precision](http://yudkowsky.net/rational/virtues/) demands that a _different word_ be coined for this hypothetical science-fictional type of person. Calling the _verthandi_ "women" would be _worse writing_; it would _fail to communicate_ the impact of what has taken place in the story.
Another post in this vein that had a huge impact on me was ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). As an illustration of how [the hope for radical human enhancement is fraught with](https://www.lesswrong.com/posts/EQkELCGiGQwvrrp3L/growing-up-is-hard) technical difficulties, the Great Teacher sketches a picture of just how difficult an actual male-to-female sex change would be.

It would be hard to overstate how much of an impact this post had on me. I've previously linked it on this blog eight times. In June 2008, half a year before it was published, I encountered the [2004 mailing list post](http://lists.extropy.org/pipermail/extropy-chat/2004-September/008924.html) that was its predecessor. (The fact that I was trawling through old mailing list archives searching for content by the Great Teacher that I hadn't already read tells you something about what a fanboy I am.) I immediately wrote to a friend: "[...] I cannot adequately talk about my feelings. Am I shocked, liberated, relieved, scared, angry, amused?"

The argument goes: it might be easy to _imagine_ changing sex and refer to the idea in a short English sentence, but the real physical world has implementation details, and the implementation details aren't filled in by the short English sentence. The human body, including the brain, is an enormously complex integrated organism; there's no [plug-and-play](https://en.wikipedia.org/wiki/Plug_and_play) architecture by which you can just swap your brain into a new body and have everything work without re-mapping the connections in your motor cortex. And even that's not _really_ a sex change, as far as the whole integrated system is concerned—

> Remapping the connections from the remapped somatic areas to the pleasure center will ... give you a vagina-shaped penis, more or less. That doesn't make you a woman. You'd still be attracted to girls, and no, that would not make you a lesbian; it would make you a normal, masculine man wearing a female body like a suit of clothing.

But from the standpoint of my secret erotic fantasy, this is actually a _great_ outcome.

[...]

> If I fell asleep and woke up as a true woman—not in body, but in brain—I don't think I'd call her "me". The change is too sharp, if it happens all at once.

In the comments, [I wrote](https://www.greaterwrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions/comment/4pttT7gQYLpfqCsNd)—

> Is it cheating if you deliberately define your personal identity such that the answer is _No_?

(To which I now realize the correct answer is: Yes, it's fucking cheating! The map is not the territory! You can't change the current _referent_ of "personal identity" with the semantic mind game of declaring that "personal identity" now refers to something else! How dumb do you think we are?! But more on this later.)

[section: "50% of the ones with penises", moving to Berkeley, realized that my thing wasn't different; seemed like something that a systematically-correct-reasoning community would be interested in getting right (maybe the 30% of the ones with penises are actually women thing does fit here after all? (I was going to omit it)]
[section: had a lot of private conversations with people, and they weren't converging with me]
[section: flipped out on Facebook; those discussions ended up getting derailed on a lot of appeal-to-arbitrariness conversation halters, appeal to "Categories Were Made"]
So, I think this is a bad argument. But specifically, it's a bad argument for _completely general reasons that have nothing to do with gender_. And more specifically, completely general reasons that have been explained in exhaustive, _exhaustive_ detail in _our own foundational texts_—including some material that I _know_ the Popular Author is intimately familiar with, because _he fucking wrote it_.

[section: noncentral-fallacy / motte-and-bailey stuff, other posts about making predictions https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world]

The "national borders" metaphor is particularly galling if—[unlike](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) [the](https://slatestarcodex.com/2013/06/30/the-lottery-of-fascinations/) Popular Author—you _actually know the math_.

If I have a "blegg" concept for blue egg-shaped objects—uh, this is [our](https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-queries) [standard](https://www.lesswrong.com/posts/yFDKvfN6D87Tf5J9f/neural-categories) [example](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside), just [roll with it](http://unremediatedgender.space/2018/Feb/blegg-mode/)—what that _means_ is that (at some appropriate level of abstraction) there's a little [Bayesian network](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models) in my head with "blueness" and "eggness" observation nodes hooked up to a central "blegg" category-membership node, such that if I see a black-and-white photograph of an egg-shaped object, I can use the observation of its shape to update my beliefs about its blegg-category-membership, and then use my beliefs about category-membership to update my beliefs about its blueness. This cognitive algorithm is useful if we live in a world where objects have the appropriate statistical structure—if the joint distribution P(blegg, blueness, eggness) approximately factorizes as P(blegg)·P(blueness|blegg)·P(eggness|blegg).
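
Concretely: a minimal sketch of that network in Python. (The particular numbers, a uniform prior and 95%-reliable features, are my invention for illustration; they're not from the original posts.)

```python
# A tiny "blegg" network: a central category node explains the
# observable features (naive Bayes). All numbers are made up.
P_BLEGG = 0.5                        # prior P(blegg)
P_BLUE = {True: 0.95, False: 0.05}   # P(blueness | blegg?)
P_EGG = {True: 0.95, False: 0.05}    # P(eggness | blegg?)

def posterior_blegg(egg_observed: bool) -> float:
    """P(blegg | eggness observation), by Bayes' theorem."""
    joint = {c: (P_BLEGG if c else 1 - P_BLEGG)
                * (P_EGG[c] if egg_observed else 1 - P_EGG[c])
             for c in (True, False)}
    return joint[True] / (joint[True] + joint[False])

def predicted_blueness(egg_observed: bool) -> float:
    """P(blueness | eggness): update category membership from the shape,
    then use category membership to predict the unobserved color."""
    p = posterior_blegg(egg_observed)
    return p * P_BLUE[True] + (1 - p) * P_BLUE[False]

print(predicted_blueness(True))   # ≈ 0.905: the egg shape predicts blueness
print(predicted_blueness(False))  # ≈ 0.095
```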

"Category boundaries" are just a _visual metaphor_ for the math: the set of things I'll classify as a blegg with probability greater than _p_ is conveniently _visualized_ as an area with a boundary in blueness–eggness space. If you _don't understand_ the relevant math and philosophy—or are pretending not to understand only and exactly when it's politically convenient—you might think you can redraw the boundary any way you want, but you can't, because the "boundary" visualization is _derived from_ a statistical model which corresponds to _empirically testable predictions about the real world_. Fucking with category boundaries corresponds to fucking with the model, which corresponds to fucking with your ability to interpret sensory data. The only two reasons you could _possibly_ want to do this would be to wirehead yourself (corrupt your map to make the territory look nicer than it really is, making yourself _feel_ happier at the cost of sabotaging your ability to navigate the real world) or as information warfare (corrupt shared maps to sabotage other agents' ability to navigate the real world, in a way such that you benefit from their confusion).

[section: started a pseudonymous secret blog; one of the things I focused on was the philosophy-of-language thing, because that seemed _really_ nailed down: "...To Make Predictions" was the crowning achievement of my sabbatical, and I was also really proud of "Reply on Adult Human Females" a few months later. And that was going OK, until ...]

[section: hill of meaning in defense of validity, and I _flipped the fuck out_]
In 2008, the Great Teacher had this really amazing series of posts explaining the hidden probability-theoretic structure of language and cognition. Essentially, explaining _natural language as an AI capability_. What your brain is doing when you [see a tiger and say, "Yikes! A tiger!"](https://www.lesswrong.com/posts/dMCFk2n2ur8n62hqB/feel-the-meaning) is governed by the [simple math](https://www.lesswrong.com/posts/HnPEpu5eQWkbyAJCT/the-simple-math-of-everything) by which intelligent systems make observations, use those observations to assign category-membership, and use category-membership to make predictions about properties which have not yet been observed. _Words_, language, are an information-theoretically efficient _code_ for such systems to share cognitive content.
> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace)
[...]
You see the problem. If "You can't define a word any way you want" is a good philosophy lesson, it should be a good philosophy lesson _independently_ of the particular word in question and _independently_ of the current year. If we've _learned something new_ about the philosophy of language in the last ten years, that's _really interesting_ and I want to know what it is!
This is _basic shit_. As we say locally, this is _basic Sequences shit_.
[section: being famous must suck]
[section: email campaign that we spent a ridiculous amount of effort on]
[...]
That ended up being quite a lot of effort!—but at this point I've _exhausted every possible avenue of appeal_. Arguing [publicly on the object level](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) didn't work. Arguing [publicly on the meta level](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) didn't work. Arguing privately didn't work. There is _nothing left for me to do_ but lick my wounds, wait for my broken heart to heal, and hope that getting molecularly disassembled and turned into paperclips doesn't hurt too much.
[section: and _I_ get accused of playing politics?!— everyone else shot first — 2+2]

Here's what I think is going on. _After it's been pointed out_, all the actually-smart people can see that "Useful categories need to 'carve reality at the joints', and there's no reason for gender to magically be an exception to this _general_ law of cognition" is a better argument than "I can define the word 'woman' any way I want." No one is going to newly voice the Stupid Argument now that it's _known_ that I'm hanging around ready to pounce on it.

But the people who have _already_ voiced the Stupid Argument can't afford to reverse themselves, even if they're the sort of _unusually_ epistemically virtuous person who publicly changes their mind on other topics. It's too politically expensive to say, "Oops, that _specific argument_ for why I support transgender people was wrong for trivial technical reasons, but I still support transgender people because ...", because political costs are imposed by a mob that isn't smart enough to understand the concept of "bad argument for a conclusion that could still be true for other reasons." So I can't be allowed to win the debate in public.

The game theorist Thomas Schelling once wrote about the use of clever excuses to help one's negotiating counterparty release themselves from a prior commitment: "One must seek [...] a rationalization by which to deny oneself too great a reward from the opponent's concession, otherwise the concession will not be made."[^schelling]

[^schelling]: _Strategy of Conflict_, Ch. 2, "An Essay on Bargaining"

This is sort of what I was trying to do when soliciting—begging for—engagement-or-endorsement of "Where to Draw the Boundaries?" I thought that it ought to be politically feasible to _just_ get public consensus from Very Important People on the _general_ philosophy-of-language issue, stripped of the politicized context that inspired my interest in it, and complete with math and examples about dolphins and job titles. That _should_ be completely safe. If some would-be troublemaker says, "Hey, doesn't this contradict what you said about trans people earlier?", stonewall them. (Stonewall _them_ and not _me_!) Thus, the public record about philosophy is corrected without the VIPs having to suffer a social-justice scandal. Everyone wins, right?

But I guess that's not how politics works. Somehow, the mob-punishment mechanisms that aren't smart enough to understand the concept of "bad argument for a true conclusion" _are_ smart enough to connect the dots between my broader agenda and my (correct) abstract philosophy argument, such that VIPs don't think they can endorse my _correct_ philosophy argument without it being _construed as_ an endorsement of me and my detailed heresies, even though (a) that's _retarded_ (it's possible to agree with someone about a particular philosophy argument while disagreeing with them about how the philosophy argument applies to a particular object-level case), and (b) I would have _hoped_ that explaining the abstract philosophy problem in the context of dolphins would provide enough plausible deniability to defend against _retarded people_ who want to make everything about politics.

The situation I'm describing is already pretty fucked, but it would be just barely tolerable if the actually-smart people were good enough at coordinating to _privately_ settle philosophy arguments. If someone says to me, "You're right, but I can't admit this in public because it would be too politically expensive for me," I can't say I'm not _disappointed_, but I can respect that they labor under different constraints from me.

[people can't trust me to stably keep secrets]

The Stupid Argument isn't just a philosophy mistake—it's a _socially load-bearing_ philosophy mistake.

And _that_ is intolerable. Once you have a single socially load-bearing philosophy mistake, you don't have a systematically-correct-reasoning community anymore. What you have is a _cult_. If you _notice_ that your alleged systematically-correct-reasoning community has a load-bearing philosophy mistake, and you _go on_ acting as if it were a systematically-correct-reasoning community, then you are committing _fraud_. (Morally speaking; I don't mean a sense of the word "fraud" that could be upheld in a court of law.)
[section: "Against Lie Inflation" (and less violently https://www.lesswrong.com/posts/tSemJckYr29Gnxod2/building-intuitions-on-non-empirical-arguments-in-science ) made me scream in fury (punch the lightswitch cover), because]
[section: the success of "Heads I Win" made me feel better; interesting how re-shares de-emphasized the political aspect]

[section: what's next for me?]
it's naive to think you can win against an egregore 1000 times bigger than you
the Church
won't you be embarrassed to leave if we create utopia
competence forcing conclusions: http://www.sl4.org/archive/0602/13903.html
analogy to school
(["_Perhaps_, replied the cold logic. _If the world were at stake_. _Perhaps_, echoed the other part of himself, _but that is not what was actually happening_."](http://yudkowsky.net/other/fiction/the-sword-of-good))
If an Outer Party member in the world of George Orwell's _1984_ says, "Oceania has always been at war with Eastasia," even though they clearly remember events from last week, when Oceania was at war with Eurasia instead [...] even if it's not really their fault
> but not worth starting over over
I mean, this is the part where I do a very not-Effective-Altruist-themed thing, and stop talking as if I do anything for the good of the lightcone. (Maybe see Ben on "Against Responsibility" and "The Humility Argument for Honesty".) I internalized a particular vision [...] of what conduct is appropriate to a "rationalist"; I didn't see that standard upheld with respect to my Something to Protect; so I am doing a halt–melt–catch-fire on "the community." It's worth starting over over _for me_. If my actions (implausibly) represent a PR risk to someone else's Singularity strategy, then they're welcome to try to persuade or negotiate with me.
the appeal to arbitrariness technically extends in both directions (if there's no rule saying you can't use the word to talk about self-identity, there's no rule saying I can't use the word to talk about sex), but systematically favors one side—sex is a pretty robust abstraction, and there's no reason to deny the appeal of robustness
Inadequate Equilibria!
I'm expressing the same kind of frustration as the Great Teacher complaining about cryo not being standard—my personal benchmark of "sanity" isn't realistic
Julia Serano
-You "can't" define a word any way you want, or you "can"—what actually matters is the math
-
words don't have intrinsic definitions, but the only reason you would want to repurpose an _existing_ word is either that you think you can carve the joints better, or that you're trying to mindfuck someone
cat/dog gaslighting; even if you don't particularly need that particular classification for a practical purpose, even so ...

-----

Men who wish they were women do not particularly resemble actual women! We just—don't? This seems kind of obvious, really? Telling the difference between fantasy and reality is kind of an important life skill?

Okay, I understand that in Berkeley 2020, that probably sounds like some kind of reactionary political statement, probably intended to provoke. But try interpreting it _literally_, as a _factual claim_ about the world. Adult human males who _fantasize about_ being adult human females are nevertheless drawn from the _male_ multivariate trait distribution, not the female distribution.
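
To spell out what "drawn from the male distribution" means statistically, here's a toy model with invented numbers: even if every single-trait difference is modest, modest differences across many traits make the two distributions easy to tell apart.

```python
# Two populations differing by Cohen's d = 0.5 on each of 20
# independent traits (all numbers invented for illustration).
# Separation compounds: the distance between the multivariate
# means grows like sqrt(k)·d.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 0.5, 20, 100_000
pop_a = rng.normal(0.0, 1.0, (n, k))
pop_b = rng.normal(d, 1.0, (n, k))

def accuracy(j):
    """Classify on the sum of the first j traits, thresholded at the
    midpoint between the two population means."""
    thresh = j * d / 2
    return 0.5 * ((pop_a[:, :j].sum(axis=1) < thresh).mean()
                  + (pop_b[:, :j].sum(axis=1) >= thresh).mean())

print(accuracy(1))   # ≈ 0.60: one modestly different trait, barely separable
print(accuracy(20))  # ≈ 0.87: twenty of them, mostly separable
```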

-----

Some readers who aren't part of my robot cult—and some who are—might be puzzled at why I've been _so freaked out_ for _an entire year_ by people being wrong about philosophy. And for almost anyone else in the world, I would just shrug and [set the bozo bit](https://en.wikipedia.org/wiki/Bozo_bit#Dismissing_a_person_as_not_worth_listening_to).
Even people who aren't religious still have the same [species-typical psychological mechanisms](https://www.lesswrong.com/posts/Cyj6wQLW6SeF6aGLy/the-psychological-unity-of-humankind) that make religions work. The systematically-correct-reasoning community had come to fill a [similar niche in my psychology as a religious community](https://www.lesswrong.com/posts/p5DmraxDmhvMoZx8J/church-vs-taskforce). I knew this, but the _hope_ was that this wouldn't come with the pathologies of a religion, because our pseudo-religion was _about_ the rules of systematically correct reasoning. The system is _supposed_ to be self-correcting: if people are obviously, _demonstrably_ wrong, all you have to do is show them the argument that they're wrong, and then they'll understand the obvious argument and change their minds.
[...]
MASSIVE cognitive dissonance, "What? What???"

This is my fault. It's [not like we weren't warned](https://www.lesswrong.com/posts/yEjaj7PWacno5EvWa/every-cause-wants-to-be-a-cult).

-----
But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_.
If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where the community doesn't exist at all.

-----
[Insert this after first mention of Great Teacher/Popular Author]

I'm avoiding naming anyone in this post even when linking to their public writings, in order to try to keep the _rhetorical emphasis_ on "true tale of personal heartbreak, coupled with sober analysis of the sociopolitical factors leading thereto" even while I'm ... expressing disappointment with people's performance. This isn't supposed to be a character/reputational attack on my friends and (former??) heroes—at least, not more than it needs to be. I just _need to tell the story_.
I'd almost rather we all pretend this narrative was written in a ["nearby" Everett branch](https://www.lesswrong.com/posts/9cgBF6BQ2TRB3Hy4E/and-the-winner-is-many-worlds) whose history diverged from ours maybe forty-five years ago—a world almost exactly like our own as far as the macro-scale institutional and ideological forces at play, but with different individual people filling out the relevant birth cohorts. _My_ specific identity doesn't matter; the specific identities of any individuals I mention while telling my story don't matter. What matters is the _structure_: I'm just a sample from the _distribution_ of what happens when an American upper-middle-class high-Openness high-Neuroticism late-1980s-birth-cohort IQ-130 78%-Ashkenazi obligate-autogynephilic boy falls in with this kind of robot cult in this kind of world.
Note, **(3) is _entirely compatible_ with trans women being women**. The point is that if you want to claim that trans women are women, you need some sort of _argument_ for why that categorization makes sense in the context you want to use the word—why that map usefully reflects some relevant aspect of the territory. If you want to _argue_ that hormone replacement therapy constitutes an effective sex change, or that trans is a brain-intersex condition and the brain is the true referent of "gender", or that [coordination constraints on _shared_ categories](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests) [support the self-identification criterion](/2019/Oct/self-identity-is-a-schelling-point/), that's fine, because those are _arguments_ that someone who initially disagreed with your categorization could _engage with on the merits_. In contrast, "I can define a word any way I want" can't be engaged with in the same way because it's a denial of the possibility of merits.

-----

[trade arrangements: if that's the world we live in, fine]
The Popular Author obviously never wanted to be the center of a personality cult; it just happened to him anyway because he's better at writing than everyone else.

-----

In sexually-reproducing species, [complex functional adaptations are necessarily species-universal _up to sex_](https://www.lesswrong.com/posts/Cyj6wQLW6SeF6aGLy/the-psychological-unity-of-humankind), because adaptations have to evolve incrementally: you don't have selection pressure for an allele for an ever-so-slightly-improved eye until all the pieces for the unimproved eye are already at fixation and won't get immediately [reshuffled during meiosis](https://en.wikipedia.org/wiki/Chromosomal_crossover) in the next generation.
(That is: evolutionary psychology is impressively anti-racist, but _super_ sexist.)
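
A deterministic toy model of the incremental-evolution point (the fitness bonus, the frequencies, and the linkage-equilibrium simplification are all mine, for illustration): selection pressure on an "improvement" allele scales with how common its prerequisites are, so improvements only sweep after the pieces they build on are at fixation.

```python
def trajectory(p_prereq: float, s: float = 0.05, p0: float = 0.01,
               generations: int = 400) -> float:
    """Final frequency of an 'improvement' allele B whose benefit only
    shows up in carriers of prerequisite allele A. Under random mating
    and linkage equilibrium, B's marginal fitness is 1 + s * p_prereq."""
    p = p0
    for _ in range(generations):
        w_b = 1 + s * p_prereq          # marginal fitness of B carriers
        w_mean = p * w_b + (1 - p)      # population mean fitness
        p = p * w_b / w_mean            # standard selection recursion
    return p

print(trajectory(p_prereq=1.0))   # ≈ 1.0: prerequisite fixed, B sweeps
print(trajectory(p_prereq=0.05))  # ≈ 0.03: B has almost nothing to improve on
```

(In a finite population, an allele under selection that weak would usually just be lost to drift.)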

https://www.lesswrong.com/posts/NnohDYHNnKDtbiMyp/fake-utility-functions

"the love of a man for a woman, and the love of a woman for a man, have not been cognitively derived from each other or from any other value. [...] There are many such shards of desire, all different values."

-----
So far, I've mostly been linking to [Anne Lawrence](http://www.annelawrence.com/autogynephilia_&_MtF_typology.html) or [Kay Brown](https://sillyolme.wordpress.com/faq-on-the-science/) for the evidence for this rather than writing up my own take (I already have enough problems with writing quickly that I don't feel motivated to spend wordcount making a case that other people have already made), but maybe that was a tactical mistake on my part, because people don't click links, and so if I don't include at least _some_ of the evidence inline in my own text, hostile readers (that's you!) will write me off as making unjustified assertions.

-----
["delusional perverts", no one understands me]

-----

[You "can't" define a word any way you want, or you "can"—what actually matters is the math]