-[TODO: the point it about unforeseen maximum, and for the purposes of a dramatic story, it's OK to focus on the big separating hyperplane, even if there are many other hyperplanes
-https://arbital.greaterwrong.com/p/unforeseen_maximum
-]
-[TODO: skeptical commenters saying that this isn't what they want are missing the point: you would adapt]
-[TODO: "Interpersonal Entanglement" suggests a negotiation]
+Yudkowsky dramatized the implications in a short story, ["Failed Utopia #4-2"](https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2), portraying an almost-aligned superintelligence constructing a happiness-maximizing utopia for humans—except that because of the mismatch in the sexes' desires, and because the AI is prohibited from editing people's minds, the happiness-maximizing solution (according to the story) turns out to be splitting up the human species by sex and giving women and men their own _separate_ utopias (on [Venus and Mars](https://en.wikipedia.org/wiki/Gender_symbol#Origins), ha ha), complete with artificially-synthesized romantic partners.
+
+Of course no one _wants_ that—our male protagonist doesn't _want_ to abandon his wife and daughter for some catgirl-adjacent (if conscious) hussy. But humans _do_ adapt to loss; if the separation were already accomplished by force, people would eventually move on, and post-separation life with companions superintelligently optimized _for you_ would ([_arguendo_](https://en.wikipedia.org/wiki/Arguendo)) be happier than life with your real friends and family, whose goals will sometimes come into conflict with yours because they weren't superintelligently designed _for you_.
+
+The alignment-theory morals are those of [unforeseen maxima](https://arbital.greaterwrong.com/p/unforeseen_maximum) and [edge instantiation](https://arbital.greaterwrong.com/p/edge_instantiation). An AI designed to maximize happiness would kill all humans and tile the galaxy with maximally-efficient happiness-brainware. If this sounds "crazy" to you, that's the problem with anthropomorphism I was telling you about: don't imagine "AI" as an unemotional human; just think about a machine that calculates what actions would result in what outcomes, and does the action that results in the outcome that maximizes some function. It turns out that picking a function that doesn't kill everyone looks hard. Just tacking on the constraints that you can think of (make the _existing_ humans happy without tampering with their minds) [will tend to produce similar "crazy" outcomes that you didn't think to exclude](https://arbital.greaterwrong.com/p/nearest_unblocked).
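+
+As a toy illustration (the plans and scores here are invented for exposition, not taken from any of the linked posts), picture the optimizer as an argmax over plans: patching the objective with the constraints you thought of doesn't fix the problem; it just moves the maximum to the nearest "crazy" plan you didn't think to exclude.
+
+```python
+# Toy model of unforeseen maxima and the nearest unblocked strategy.
+# The plan names and scores are made up for illustration.
+plans = {
+    "ordinary utopia": {"happiness": 0.90, "humans_alive": True, "minds_unedited": True},
+    "tile the galaxy with happiness-brainware": {"happiness": 1.00, "humans_alive": False, "minds_unedited": True},
+    "wirehead everyone": {"happiness": 0.99, "humans_alive": True, "minds_unedited": False},
+    "separate sexes onto Venus and Mars": {"happiness": 0.95, "humans_alive": True, "minds_unedited": True},
+}
+
+def best_plan(candidates):
+    # The machine just does whatever scores highest; no malice required.
+    return max(candidates, key=lambda p: plans[p]["happiness"])
+
+print(best_plan(plans))
+# -> 'tile the galaxy with happiness-brainware'
+
+# Tack on the constraints you thought of ...
+allowed = [p for p in plans if plans[p]["humans_alive"] and plans[p]["minds_unedited"]]
+print(best_plan(allowed))
+# -> 'separate sexes onto Venus and Mars': the maximum moves to the
+#    nearest "crazy" outcome you didn't block.
+```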
+
+At the time, [I expressed horror](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/PhiGnX7qKzzgn2aKb) at "Failed Utopia #4-2" in the comments section, because my quasi-religious psychological-sex-differences denialism required that I be horrified. But looking back a dozen years later—[or even four years later](https://www.greaterwrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2/comment/D34jhYBcaoE7DEb8d)—my performative horror was missing the point.
+
+_The argument makes sense_. Of course, it's important to notice that you'd need an additional [handwave](https://tvtropes.org/pmwiki/pmwiki.php/Main/HandWave) to explain why the AI in the story doesn't give every _individual_ their separate utopia—if existing women and men aren't optimal partners for each other, then by the same logic, individual men aren't optimal same-sex friends for each other, either. A faithful antisexist (as I was) might insist that that should be the _only_ moral, as it implies the other [_a fortiori_](https://en.wikipedia.org/wiki/Argumentum_a_fortiori). But if you're trying to _learn about reality_ rather than protect your fixed quasi-religious beliefs, it should be _okay_ for one of the lessons to get a punchy sci-fi short story; it should be _okay_ to think about the hyperplane between two coarse clusters, even while it's simultaneously true that a set of hyperplanes would suffice to [shatter](https://en.wikipedia.org/wiki/Shattered_set) every individual point, without deigning to acknowledge the existence of clusters.
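+
+In machine-learning terms (a minimal sketch of the geometry, with made-up data; nothing here is from the story): a single hyperplane between the cluster means captures most of the coarse two-cluster structure, even though enough hyperplanes could box off every individual point.
+
+```python
+# Sketch: two Gaussian clusters, classified by the one hyperplane
+# normal to the line between the cluster means (illustrative only).
+import numpy as np
+
+rng = np.random.default_rng(0)
+cluster_a = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(500, 2))
+cluster_b = rng.normal(loc=[+2.0, 0.0], scale=1.0, size=(500, 2))
+
+w = cluster_b.mean(axis=0) - cluster_a.mean(axis=0)  # normal vector
+midpoint = (cluster_a.mean(axis=0) + cluster_b.mean(axis=0)) / 2
+
+def side(points):
+    # Which side of the hyperplane each point falls on: -1 or +1.
+    return np.sign((points - midpoint) @ w)
+
+accuracy = np.concatenate([side(cluster_a) == -1, side(cluster_b) == +1]).mean()
+print(f"one hyperplane, cluster accuracy: {accuracy:.1%}")  # high, but not 100%
+
+# Whereas four hyperplanes (the sides of a small box) suffice to isolate
+# any single point: "shattering" individuals without acknowledging clusters.
+```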