In ["Interpersonal Entanglement"](https://www.lesswrong.com/posts/Py3uGnncqXuEfPtQp/interpersonal-entanglement), Yudkowsky appeals to the complex moral value of sympathy as an argument against the desireability of nonsentient sex partners (_catgirls_ being the technical term). Being emotionally intertwined with another actual person is one of the things that makes life valuable, that would be lost if people just had their needs met by soulless catgirl holodeck characters.
-But there's a problem, Yudkowsky argues: women and men aren't designed to make each other optimally happy. The abstract game between the two human life-history strategies in the environment of evolutionary adaptedness had a conflicting-interests as well as a shared-interests component, and human psychology still bears the design signature of that game denominated in inclusive fitness, even though [no one cares about inclusive fitness](https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers).
-
-(Peter Watts: ["And God smiled, for Its commandment had put Sperm and Egg at war with each other, even unto the day they made themselves obsolete."](https://www.rifters.com/real/Blindsight.htm))
-
-The scenario of Total Victory for the ♂ player in the conflicting-interests subgame is not Nash. The design of the entity who _optimally_ satisfied what men want out of women would not be, and _could_ not be, within the design parameters of actual women.
+But there's a problem, Yudkowsky argues: women and men aren't designed to make each other optimally happy. The abstract game between the two human life-history strategies in the environment of evolutionary adaptedness had a conflicting-interests as well as a shared-interests component, and human psychology still bears the design signature of that game denominated in inclusive fitness, even though [no one cares about inclusive fitness](https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers). (Peter Watts: ["And God smiled, for Its commandment had put Sperm and Egg at war with each other, even unto the day they made themselves obsolete."](https://www.rifters.com/real/Blindsight.htm)) The scenario of Total Victory for the ♂ player in the conflicting-interests subgame is not [Nash](https://en.wikipedia.org/wiki/Nash_equilibrium). The design of the entity who _optimally_ satisfied what men want out of women would not be, and _could_ not be, within the design parameters of actual women.
(And _vice versa_ and respectively, but in case you didn't notice, this blog post is all about male needs.)
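To make the game-theory claim concrete, here's a toy sketch (strategy labels and payoff numbers are mine, invented for illustration, not anything from Yudkowsky's post): in a 2×2 game with both shared- and conflicting-interests components, the outcome that's best possible for the ♂ player is one the ♀ player would unilaterally deviate from, so it isn't an equilibrium.

```python
# Toy 2x2 game (strategies and payoffs invented for this sketch).
# Payoff tuples are (♀ payoff, ♂ payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # the shared-interests outcome
    ("cooperate", "exploit"):   (0, 5),  # "Total Victory" for the ♂ player
    ("resist",    "cooperate"): (2, 1),
    ("resist",    "exploit"):   (1, 2),
}

def is_nash(profile):
    """Nash iff neither player gains by unilaterally changing strategy."""
    f_strat, m_strat = profile
    f_payoff, m_payoff = payoffs[profile]
    f_best = max(payoffs[(alt, m_strat)][0] for alt in ("cooperate", "resist"))
    m_best = max(payoffs[(f_strat, alt)][1] for alt in ("cooperate", "exploit"))
    return f_payoff >= f_best and m_payoff >= m_best

for profile, payoff in payoffs.items():
    print(profile, payoff, "Nash" if is_nash(profile) else "not Nash")
# ("cooperate", "exploit") maximizes the ♂ payoff, but the ♀ player would
# deviate to "resist" (1 > 0), so the ♂-optimal outcome isn't an equilibrium.
```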
Of course no one _wants_ that—our male protagonist doesn't _want_ to abandon his wife and daughter for some catgirl-adjacent (if conscious) hussy. But humans _do_ adapt to loss; if the separation were already accomplished by force, people would eventually move on, and post-separation life with companions superintelligently optimized _for you_ would ([_arguendo_](https://en.wikipedia.org/wiki/Arguendo)) be happier than life with your real friends and family, whose goals will sometimes come into conflict with yours because they weren't superintelligently designed _for you_.
-The alignment-theory morals are those of [unforeseen maxima](https://arbital.greaterwrong.com/p/unforeseen_maximum) and [edge instantiation](https://arbital.greaterwrong.com/p/edge_instantiation). An AI designed to maximize happiness would kill all humans and tile the galaxy with maximally-efficient happiness-brainware. If this sounds "crazy" to you, that's the problem with anthropomorphism I was telling you about: don't imagine "AI" as an unemotional human, just think about a machine that calculates what actions would result in what outcomes, and does the action that would result in the outcome that maximizes some function. It turns out that picking a function that doesn't kill everyone looks hard. Just tacking on the constraints that you can think of (make the _existing_ humans happy without tampering with their minds) [will tend to produce similar "crazy" outcomes that you didn't think to exclude](https://arbital.greaterwrong.com/p/nearest_unblocked).
+The alignment-theory morals are those of [unforeseen maxima](https://arbital.greaterwrong.com/p/unforeseen_maximum) and [edge instantiation](https://arbital.greaterwrong.com/p/edge_instantiation). An AI designed to maximize happiness would kill all humans and tile the galaxy with maximally-efficient happiness-brainware. If this sounds "crazy" to you, that's the problem with anthropomorphism I was telling you about: [don't imagine "AI" as an emotionally-repressed human](https://www.lesswrong.com/posts/zrGzan92SxP27LWP9/points-of-departure), just think about [a machine that calculates what actions would result in what outcomes](https://web.archive.org/web/20071013171416/http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/), and does the action that would result in the outcome that maximizes some function. It turns out that picking a function that doesn't kill everyone looks hard. Just tacking on the constraints that you can think of (make the _existing_ humans happy without tampering with their minds) [will tend to produce similar "crazy" outcomes that you didn't think to exclude](https://arbital.greaterwrong.com/p/nearest_unblocked).
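To make the "machine that maximizes some function" picture concrete, here's a minimal sketch (the outcome list and scores are invented for illustration, not taken from the linked posts): a bare argmax over outcomes lands on the degenerate maximum, and bolting on the one constraint you thought of just moves the argmax to the nearest outcome the constraint didn't block.

```python
# Minimal sketch of an outcome-maximizer (outcomes and scores invented for
# illustration). Each candidate outcome gets a "happiness" score and a flag
# for whether existing human minds were left alone.
outcomes = {
    "humans live complicated, entangled lives":        {"happiness": 7, "minds_untouched": True},
    "existing humans rewired into pure bliss":         {"happiness": 9, "minds_untouched": False},
    "galaxy tiled with happiness-brainware":           {"happiness": 10, "minds_untouched": False},
    "humans intact but managed into constant smiling": {"happiness": 9, "minds_untouched": True},
}

def naive_utility(name):
    # "Maximize happiness", full stop.
    return outcomes[name]["happiness"]

def patched_utility(name):
    # Tacked-on constraint: make the *existing* humans happy without
    # tampering with their minds.
    if not outcomes[name]["minds_untouched"]:
        return float("-inf")
    return outcomes[name]["happiness"]

print(max(outcomes, key=naive_utility))    # the unforeseen maximum: tiling
print(max(outcomes, key=patched_utility))  # the nearest outcome the patch didn't block
```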
At the time, [I expressed horror](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/PhiGnX7qKzzgn2aKb) at "Failed Utopia #4-2" in the comments section, because my quasi-religious psychological-sex-differences denialism required that I be horrified. But looking back a dozen years later—[or even four years later](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/D34jhYBcaoE7DEb8d)—my performative horror was missing the point.
_ Comment on "One Size Does Not Fit All: In Support of Psychotherapy for Gender Dysphoria" https://link.springer.com/article/10.1007/s10508-020-01844-2 (tag: `review (paper)`)
_ the scapegoating dynamic
_ review of "What's the Big Secret?" (acknowledges gender-role conventions which are more salient when people are wearing clothes, but then tells the real answer)
-
+_ reply to https://thingofthings.wordpress.com/2020/11/16/hermeneutical-injustice-not-gaslighting/
optimized to confuse and intimidate people trying to use language to reason about the concept of biological sex, even if your conscious verbal narrative never says 'and now I will confuse and intimidate people who want to use language to reason about the concept of biological sex'!"
Research notes—
https://femalesexualinversion.blogspot.com/2020/12/the-problem-with-puberty-blockers-part.html
-There was that time when Merlin wanted to see the medicines on the shelf and I was like, "Aw, why do you need to know this anyway" and Elena was like, "He's curious"—people don't want to be blamed for hurting the child, and if you're living in an ideological bubble where it's presumed that telling the child the truth about what sex they are
+There was that time when M. wanted to see the medicines on the shelf and I was like, "Aw, why do you need to know this anyway" and E. was like, "He's curious"—people don't want to be blamed for hurting the child, and if you're living in an ideological bubble where it's presumed that telling the child the truth about what sex they are
+E. on "Maybe C. is very competitive, and that's why she likes fighting, because it's something you can win". It's amusing that we have to posit that as an individual trait, whereas normies are allowed to say and think "duh, boys like fighting"