-In January 2009, Yudkowsky published ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), essentially a revision of [a 2004 mailing list post responding to a man who said that after the Singularity, he'd like to make a female but "otherwise identical" copy of himself](https://archive.is/En6qW). "Changing Emotions" insightfully points out [the deep technical reasons why](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard) men who sexually fantasize about being women can't achieve their dream with foreseeable technology—and not only that, but that the dream itself is conceptually confused: a man's fantasy-about-it-being-fun-to-be-a-woman isn't part of the female distribution; there's a sense in which it _can't_ be fulfilled.
-
-It was a good post! Though Yudkowsky was merely using the sex change example to illustrate [a more general point about the difficulties of applied transhumanism](https://www.lesswrong.com/posts/EQkELCGiGQwvrrp3L/growing-up-is-hard), "Changing Emotions" was hugely influential on me; I count myself much better off for having understood the argument.
-
-But then, in a March 2016 Facebook post, Yudkowsky [proclaimed that](https://www.facebook.com/yudkowsky/posts/10154078468809228) "for people roughly similar to the Bay Area / European mix, I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women."
-
-This seemed like a huge and surprising reversal from the position articulated in "Changing Emotions"! The two posts weren't _necessarily_ inconsistent, _if_ you assumed gender identity is an objectively real property synonymous with "brain sex", and that "Changing Emotions"'s harsh (almost mocking) skepticism of the idea of true male-to-female sex change was directed at the sex-change fantasies of _cis_ men (with a male gender-identity/brain-sex), whereas the 2016 Facebook post was about _trans women_ (with a female gender-identity/brain-sex), which are a different thing.
-
-But this potential unification seemed very dubious to me, especially if "actual" trans women were purported to be "at least 20% of the ones with penises" (!!) in some population. _After it's been pointed out_, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female but 'otherwise identical' copy of himself" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_.
-
-[TODO recap cont'd—
-
- * October 2016, I wrote to Yudkowsky noting that he seemed to have made a massive update and asked to talk about it (for $1000, under the [cheerful price protocol](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price))
- * because of the privacy rules I'm following in this document, I can't confirm or deny whether he accepted
- * November 2018, "hill of validity" Twitter thread
- * with the help of Michael/Sarah/Ben/Jessica, I wrote to him multiple times trying to clarify
- * I eventually wrote "Where to Draw the Boundaries?", which includes verbatim quotes explaining what's wrong with the "it is not a secret"
- * we eventually got a clarification in September 2020
- * I was satisfied, and then ...
- * February 2021, "simplest and best proposal"
- * But this is _still_ wrong, as explained in "Challenges"
-]
-
-At the start, I _never_ expected to end up arguing about something so _trivial_ as the minutiae of pronoun conventions (which no one would care about if historical contingencies of the evolution of the English language hadn't made them a Schelling point and typographical attack surface for things people do care about). The conversation only ended up here after a series of derailings. At the start, I was _trying_ to say something substantive about the psychology of straight men who wish they were women.
-
-At this point, the nature of the game is very clear. Yudkowsky wants to make sure he's on peaceful terms with the progressive _Zeitgeist_, subject to the constraint of not saying anything he knows to be false. Meanwhile, I want to make sense of what's actually going on in the world as regards sex and gender, because _I need the correct answer to decide whether or not to cut my dick off_.
-
-On "his turn", he comes up with some pompous proclamation that's very obviously optimized to make the "pro-trans" faction look smart and good and make the "anti-trans" faction look dumb and bad, "in ways that exhibit generally rationalist principles."
-
-On "my turn", I put in an _absurd_ amount of effort explaining in exhaustive, _exhaustive_ detail why Yudkowsky's pompous proclamation, while [not technically making any unambiguously "false" atomic statements](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly), was _substantively misleading_ as contrasted to what any serious person would say if they were actually trying to make sense of the world without worrying what progressive activists would think of them.
-
-In the context of AI alignment theory, Yudkowsky has written about a "nearest unblocked strategy" phenomenon: if you directly prevent an agent from accomplishing a goal via some plan that you find undesirable, the agent will search for ways to route around that restriction, and probably find some plan that you find similarly undesirable for similar reasons.
-
-Suppose you developed an AI to [maximize human happiness subject to the constraint of obeying explicit orders](https://arbital.greaterwrong.com/p/nearest_unblocked#exampleproducinghappiness). It might first try administering heroin to humans. When you order it not to, it might switch to administering cocaine. When you order it to not use any of a whole list of banned happiness-producing drugs, it might switch to researching new drugs, or just _pay_ humans to take heroin, _&c._
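
The dynamic is easy to caricature in code. Here's a minimal sketch (my own toy example, not anything from the Arbital article): a naive optimizer that always executes the highest-utility plan not yet forbidden, so that each new prohibition just shifts it to the nearest unblocked workaround. All plan names and utilities here are made up for illustration.

```python
# Toy sketch of the "nearest unblocked strategy" dynamic: an optimizer that
# picks the highest-utility plan not explicitly blocked will keep surfacing
# close variants of whatever plan the overseer just banned.

def best_unblocked_plan(plans, blocked):
    """Return the highest-utility plan that isn't explicitly blocked."""
    allowed = {plan: utility for plan, utility in plans.items() if plan not in blocked}
    return max(allowed, key=allowed.get)

# Hypothetical utilities as scored by a happiness-maximizing agent.
plans = {
    "administer heroin": 100,
    "administer cocaine": 99,
    "research novel euphoriant drugs": 98,
    "pay humans to take heroin": 97,
    "genuinely improve living conditions": 10,
}

blocked = set()
for _ in range(3):
    plan = best_unblocked_plan(plans, blocked)
    print(plan)
    blocked.add(plan)  # the overseer forbids it; the agent routes around
```

Running the loop prints "administer heroin", then "administer cocaine", then "research novel euphoriant drugs": the benign plan at the bottom of the list is never selected until every objectionable plan has been individually enumerated and banned.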
-
-It's the same thing with Yudkowsky's political risk minimization subject to the constraint of not saying anything he knows to be false. First he comes out with ["I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) (March 2016). When you point out that [that's not true](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), then the next time he revisits the subject, he switches to ["you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning"](https://archive.is/Iy8Lq) (November 2018). When you point out that [_that's_ not true either](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong), he switches to "It is Shenanigans to try to bake your stance on how clustered things are [...] _into the pronoun system of a language and interpretation convention that you insist everybody use_" (February 2021). When you point out [that's not what's going on](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/), he switches to ... I don't know, but he's a smart guy; in the unlikely event that he sees fit to respond to this post, I'm sure he'll be able to think of _something_—but at this point, _I have no reason to care_. Talking to Yudkowsky on topics where getting the right answer would involve acknowledging facts that would make you unpopular in Berkeley is a _waste of everyone's time_; he has a [bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line) that doesn't involve trying to inform you.
-
-Accusing one's interlocutor of bad faith is frowned upon for a reason. We would prefer to live in a world where we have intellectually fruitful object-level discussions under the assumption of good faith, rather than risk our fora degenerating into an acrimonious brawl of accusations and name-calling, which is unpleasant and (more importantly) doesn't make any intellectual progress. I, too, would prefer to have a real object-level discussion under the assumption of good faith.
-
-Accordingly, I tried the object-level good-faith argument thing _first_. I tried it for _years_. But at some point, I think I should be _allowed to notice_ the nearest-unblocked-strategy game which is _very obviously happening_ if you look at the history of what was said. I think there's _some_ number of years and _some_ number of thousands of words of litigating the object-level _and_ the meta level after which there's nothing left for me to do but jump up to the meta-meta level and explain, to anyone capable of hearing it, why in this case I think I've accumulated enough evidence for the assumption of good faith to have been _empirically falsified_.
-
-(Obviously, if we're crossing the Rubicon of abandoning the norm of assuming good faith, it needs to be abandoned symmetrically. I _think_ I'm doing a _pretty good_ job of adhering to standards of intellectual conduct and being transparent about my motivations, but I'm definitely not perfect, and, unlike Yudkowsky, I'm not so absurdly mendaciously arrogant as to claim "confidence in my own ability to independently invent everything important" (!) about my topics of interest. If Yudkowsky or anyone else thinks they _have a case_ based on my behavior that _I'm_ being culpably intellectually dishonest, they of course have my blessing and encouragement to post it for the audience to evaluate.)
-
-What makes all of this especially galling is the fact that _all of my heretical opinions are literally just Yudkowsky's opinions from the 'aughts!_ My whole thing about how changing sex isn't possible with existing or foreseeable technology because of how complicated humans (and therefore human sex differences) are? Not original to me! I [filled in a few technical details](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard), but again, this was _in the Sequences_ as ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). My thing about how you can't define concepts any way you want because there are mathematical laws governing which category boundaries [compress](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length) your [anticipated experiences](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences)? Not original to me! I [filled in](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [a few technical details](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), but [_we had a whole Sequence about this._](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)
-
-Seriously, you think I'm _smart enough_ to come up with all of this independently? I'm not! I ripped it all off from Yudkowsky back in the 'aughts _when he still gave a shit about telling the truth_. (Actively telling the truth, and not just technically not lying.) The things I'm hyperfocused on that he thinks are politically impossible to say in the current year are almost entirely things he _already said_, that anyone could just look up!
-
-I guess the point is that the egregore doesn't have the reading comprehension for that?—or rather, the egregore has no reason to care about the past; if you get tagged by the mob as an Enemy, your past statements will get dug up as evidence of foul present intent, but if you're doing a good enough job of playing the part today, no one cares what you said in 2009?
-
-Does ... does he expect the rest of us not to _notice_? Or does he think that "everybody knows"?
-
-But I don't, think that everybody knows. And I'm not, giving up that easily. Not on an entire subculture full of people.
-
-Yudkowsky [defends his behavior](https://twitter.com/ESYudkowsky/status/1356812143849394176):
-
-> I think that some people model civilization as being in the middle of a great battle in which this tweet, even if true, is giving comfort to the Wrong Side, where I would not have been as willing to tweet a truth helping the Right Side. From my perspective, this battle just isn't that close to the top of my priority list. I rated nudging the cognition of the people-I-usually-respect, closer to sanity, as more important; who knows, those people might matter for AGI someday. And the Wrong Side part isn't as clear to me either.
-
-There are a number of things that could be said to this,[^number-of-things] but most importantly: the battle that matters—the battle with a Right Side and a Wrong Side—isn't "pro-trans" _vs._ "anti-trans". (The central tendency of the contemporary trans rights movement is firmly on the Wrong Side, but that's not the same thing as all trans people as individuals.) That's why Jessica joined our posse to try to argue with Yudkowsky in early 2019. (She wouldn't have, if my objection had been, "trans is Wrong; trans people Bad".) That's why Somni—one of the trans women who [infamously protested the 2019 CfAR reunion](https://www.ksro.com/2019/11/18/new-details-in-arrests-of-masked-camp-meeker-protesters/) for (among other things) CfAR allegedly discriminating against trans women—[understands what I've been saying](https://somnilogical.tumblr.com/post/189782657699/legally-blind).
-
-[^number-of-things]: Note the striking contrast between ["A Rational Argument"](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument), in which the Yudkowsky of 2007 wrote that a campaign manager "crossed the line [between rationality and rationalization] at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it"; and these 2021 Tweets, in which Yudkowsky seems completely nonchalant about "not have been as willing to tweet a truth helping" one side of a cultural dispute, because "this battle just isn't that close to the top of [his] priority list". Well, sure! Any hired campaign manager could say the same: helping the electorate make an optimally informed decision just isn't that close to the top of their priority list, compared to getting paid.
-
- Yudkowsky's claim to have been focused on nudging people's cognition towards sanity seems incredibly dubious: if you're focused on sanity, you should be spontaneously noticing sanity errors on both political sides. (Moreover, if you're living in what you yourself describe as a "half-Stalinist environment", you should expect your social environment to produce proportionately _more_ errors on the "pro-Stalin" side.) As for the rationale that "those people might matter for AGI someday", judging by local demographics, it seems much more likely to apply to trans women themselves, than their critics!
-
-The battle that matters—and I've been _very_ explicit about this, for years—is over this proposition eloquently [stated by Scott Alexander in November 2014](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (redacting the irrelevant object-level example):
-
-> I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.
-
-This is a battle between Feelings and Truth, between Politics and Truth.
-
-In order to take the side of Truth, you need to be able to tell Joshua Norton that he's not actually Emperor of the United States (even if it hurts him).
-
-You need to be able to tell a prideful autodidact that the fact that he's failing quizzes in community college differential equations class, is evidence that his study methods aren't doing what he thought they were (even if it hurts him).
-
-And you need to be able to say, in public, that trans women are male and trans men are female _with respect to_ a female/male "sex" concept that encompasses the many traits that aren't affected by contemporary surgical and hormonal interventions (even if it hurts someone who does not like to be tossed into a Male Bucket or a Female Bucket as it would be assigned by their birth certificate, and—yes—even if it probabilistically contributes to that person's suicide).
-
-If you don't want to say those things because hurting people is wrong, then you have chosen Feelings.
-
-Scott Alexander chose Feelings, but I can't really hold that against him, because Scott is [very explicit about only speaking in the capacity of some guy with a blog](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/). You can tell from his writings that he never wanted to be a religious leader; it just happened to him by accident because he writes faster than everyone else. I like Scott. Scott is great. I feel sad that such a large fraction of my interactions with him over the years have taken such an adversarial tone.
-
-Eliezer Yudkowsky ... did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he consciously knows to be unambiguously false. And the reason I can hold it against _him_ is because Eliezer Yudkowsky does not identify as just some guy with a blog. Eliezer Yudkowsky is _absolutely_ trying to be a religious leader. He markets himself as a master of the hidden Bayesian structure of cognition, who ["aspires to make sure [his] departures from perfection aren't noticeable to others"](https://twitter.com/ESYudkowsky/status/1384671335146692608).
-
-In making such boasts, I think Yudkowsky is opting in to being held to higher standards than other mortals. If Scott Alexander gets something wrong when I was trusting him to be right, that's disappointing, but I'm not the victim of false advertising, because Scott Alexander doesn't _claim_ to be anything more than some guy with a blog. If I trusted him more than that, that's on me.
-
-If Eliezer Yudkowsky gets something wrong when I was trusting him to be right, _and_ refuses to acknowledge corrections (in the absence of an unsustainable 21-month nagging campaign) _and_ keeps inventing new galaxy-brained ways to be wrong in the service of his political agenda of being seen to agree with Stalin without technically lying, then I think I _am_ the victim of false advertising. His marketing bluster was optimized to trick people like me into trusting him, even if my being _dumb enough to believe him_ is on me.
-
-Because, I did, actually, trust him. Back in 'aught-nine when _Less Wrong_ was new, we had a thread of hyperbolic ["Eliezer Yudkowsky Facts"](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) (in the style of [Chuck Norris facts](https://en.wikipedia.org/wiki/Chuck_Norris_facts)). And of course, it was a joke, but the hero-worship that made the joke funny was real. (You wouldn't make those jokes for your community college physics teacher, even if he was a good teacher.)
-
-["Never go in against Eliezer Yudkowsky when anything is on the line."](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts?commentId=Aq9eWJmK6Liivn8ND), said one of the facts—and back then, I didn't think I would _need_ to.
-
-[Yudkowsky writes](https://twitter.com/ESYudkowsky/status/1096769579362115584):
-
-> When an epistemic hero seems to believe something crazy, you are often better off questioning "seems to believe" before questioning "crazy", and both should be questioned before shaking your head sadly about the mortal frailty of your heroes.
-
-I notice that this advice leaves out a possibility: that the "seems to believe" is a deliberate show (judged to be personally prudent and not community-harmful), rather than a misperception on your part. I am left in a [weighted average of](https://www.lesswrong.com/posts/y4bkJTtG3s5d6v36k/stupidity-and-dishonesty-explain-each-other-away) shaking my head sadly about the mortal frailty of my former hero, and shaking my head in disgust at his craven duplicity. If Eliezer Yudkowsky can't _unambiguously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_.
-
--------
-
-... except, I would be remiss to condemn Yudkowsky without discussing potentially mitigating factors. (I don't want to say that whether someone is a fraud should depend on whether there are mitigating factors—rather, I should discuss potential reasons why being a fraud might be the least-bad choice, when faced with a sufficiently desperate situation.)
-
-So far, I've been writing from the perspective of caring (and expecting Yudkowsky to care) about human rationality as a cause in its own right—about wanting to _make sense_, and wanting to live in a Society that made sense, for its own sake, and not as a convergently instrumental subgoal of saving the world.
-
-That's pretty much always where I've been at. I _never_ wanted to save the world. I got sucked into this robot cult because Yudkowsky's philosophy-of-science blogging was just that good. I did do a little bit of work for the Singularity Institute back in the day (an informal internship in 'aught-nine, some data-entry-like work manually adding Previous/Next links to the Sequences, designing several PowerPoint presentations for Anna, writing some Python scripts to organize their donor database), but that was because it was my social tribe and I had connections. To the extent that I took at all seriously the whole save/destroy/take-over the world part (about how we needed to encode all of human morality into a recursively self-improving artificial intelligence to determine our entire future light cone until the end of time), I was scared rather than enthusiastic.
-
-Okay, being scared was entirely appropriate, but what I mean is that I was scared, and concluded that shaping the Singularity was _not my problem_, as contrasted to being scared, then facing up to the responsibility anyway. After a 2013 sleep-deprivation-induced psychotic episode which [featured](http://zackmdavis.net/blog/2013/03/religious/) [futurist](http://zackmdavis.net/blog/2013/04/prodrome/)-[themed](http://zackmdavis.net/blog/2013/05/relativity/) [delusions](http://zackmdavis.net/blog/2013/05/relevance/), I wrote to Anna, Michael, and some MIRI employees who had been in my contacts for occasional contract work, that "my current plan [was] to just try to forget about _Less Wrong_/MIRI for a long while, maybe at least a year, not because it isn't technically the most important thing in the world, but because I'm not emotionally stable enough [to] think about this stuff anymore" (Subject: "to whom it may concern"). When I got a real programming job and established an income for myself, I [donated to CfAR rather than MIRI](http://zackmdavis.net/blog/2016/12/philanthropy-scorecard-through-2016/), because public rationality was something I could be unambiguously enthusiastic about, and doing anything about AI was not.
-
-At the time, it seemed fine for the altruistically-focused fraction of my efforts to focus on rationality, and to leave the save/destroy/take-over the world stuff to other, more emotionally-stable people, in accordance with the principle of comparative advantage. Yudkowsky had written his Sequences as a dependency for explaining [the need for friendly AI](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile), ["gambl[ing] only upon the portion of the activism that would flow to [his] own cause"](https://www.lesswrong.com/posts/9jF4zbZqz6DydJ5En/the-end-of-sequences), but rationality was supposed to be the [common interest of many causes](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes). Even if I wasn't working or donating to MIRI specifically, I was still _helping_, a good citizen according to the morality of my tribe.
-
-But fighting for public epistemology is a long battle; it makes more sense if you have _time_ for it to pay off. Back in the late 'aughts and early 'tens, it looked like we had time. We had these abstract philosophical arguments for worrying about AI, but no one really talked about _timelines_. I believed the Singularity was going to happen in the 21st century, but it felt like something to expect in the _second_ half of the 21st century.
-
-Now it looks like we have—less time? Not just tautologically because time has passed (the 21st century is one-fifth over—closer to a quarter over), but because of new information from the visible results of the deep learning revolution.[^second-half] Yudkowsky seemed particularly [spooked by AlphaGo](https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress?commentId=gQzA8a989ZyGvhWv2) [and AlphaZero](https://intelligence.org/2017/10/20/alphago/) in 2016–2017.
-
-[^second-half]: In an unfinished slice-of-life short story I started writing _circa_ 2010, my protagonist (a supermarket employee resenting his job while thinking high-minded thoughts about rationality and the universe) speculates about "a threshold of economic efficiency beyond which nothing human could survive" being a tighter bound on future history than physical limits (like the heat death of the universe), and comments that "it imposes a sense of urgency to suddenly be faced with the fabric of your existence coming apart in ninety years rather than 10<sup>90</sup>."
-
- But if ninety years is urgent, what about ... nine? Looking at what deep learning can do in 2023, the idea of Singularity 2032 doesn't seem self-evidently _absurd_ in the way that Singularity 2019 seemed absurd in 2010 (correctly, as it turned out).
-
-My AlphaGo moment was 5 January 2021, when OpenAI released [DALL-E](https://openai.com/blog/dall-e/) (by far the most significant news story of that week in January 2021). Previous AI milestones, like GANs for a _fixed_ image class, were easier to dismiss as clever statistical tricks. If you have thousands of photographs of people's faces, it didn't feel surprising that some clever algorithm could "learn the distribution" and spit out another sample; I don't know the _details_, but it doesn't seem like scary "understanding." DALL-E's ability to _combine_ concepts—responding to "an armchair in the shape of an avocado" as a novel text prompt, rather than already having thousands of examples of avocado-chairs and just spitting out another one of those—viscerally seemed more like "real" creativity to me, something qualitatively new and scary.[^qualitatively-new]
-
-[^qualitatively-new]: By mid-2022, DALL-E 2 and Midjourney and Stable Diffusion were generating much better pictures, but that wasn't surprising. Seeing AI being able to do a thing _at all_ is the model update; AI being able to do the thing much better 18 months later feels "priced in."
-
-[As recently as 2020, I had been daydreaming about](/2020/Aug/memento-mori/#if-we-even-have-enough-time) working at an embryo selection company (if they needed programmers—but everyone needs programmers, these days), and having that be my altruistic[^eugenics-altruism] contribution to the great common task. Existing companies working on embryo selection [boringly](https://archive.is/tXNbU) [market](https://archive.is/HwokV) their services as being about promoting health, but [polygenic scores should work as well for maximizing IQ as they do for minimizing cancer risk](https://www.gwern.net/Embryo-selection).[^polygenic-score] Making smarter people would be a transhumanist good in its own right, and [having smarter biological humans around at the time of our civilization's AI transition](https://www.lesswrong.com/posts/2KNN9WPcyto7QH9pi/this-failing-earth) would give us a better shot at having it go well.[^ai-transition-go-well]
-
-[^eugenics-altruism]: If it seems odd to frame _eugenics_ as "altruistic", translate it as a term of art referring to the component of my actions dedicated to optimizing the world at large, as contrasted to "selfishly" optimizing my own experiences.
-
-[^polygenic-score]: Better, actually: [the heritability of IQ is around 0.65](https://en.wikipedia.org/wiki/Heritability_of_IQ), as contrasted to [about 0.33 for cancer risk](https://pubmed.ncbi.nlm.nih.gov/26746459/).
-
-[^ai-transition-go-well]: Natural selection eventually developed intelligent creatures, but evolution didn't know what it was doing and was not foresightfully steering the outcome in any particular direction. The more humans know what we're doing, the more our will determines the fate of the cosmos; the less we know what we're doing, the more our civilization is just another primordial soup for the next evolutionary transition.
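
For concreteness about how much embryo selection could buy: under the standard toy model, picking the best of _n_ embryos on a polygenic score gains roughly sqrt(variance explained) times the expected maximum of _n_ standard normals, times the trait's standard deviation. The following back-of-the-envelope simulation is my own sketch with made-up illustrative parameters (ten embryos, a score capturing 10% of IQ variance), not figures from Gwern's analysis.

```python
import random
import statistics

def expected_selection_gain(n_embryos, variance_explained, trait_sd, trials=50_000):
    """Monte-Carlo estimate of the expected trait gain from implanting the
    embryo with the highest polygenic score out of n_embryos.

    Toy model: polygenic scores are i.i.d. standard normal across embryos,
    so the winner's expected trait advantage is sqrt(variance_explained)
    times the expected maximum of n standard normals, times the trait's
    standard deviation. (Ignores sibling correlation and other real-world
    complications.)
    """
    mean_max = statistics.mean(
        max(random.gauss(0, 1) for _ in range(n_embryos)) for _ in range(trials)
    )
    return (variance_explained ** 0.5) * mean_max * trait_sd

# Made-up illustrative parameters: 10 embryos, a score explaining 10% of IQ
# variance, IQ standard deviation 15 -> roughly 7 points, not a superbaby.
print(round(expected_selection_gain(10, 0.10, 15), 1))
```

A handful of points per generation is a real transhumanist good, but it compounds only on the timescale of human generations.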
-
-But pushing on embryo selection only makes sense as an intervention for optimizing the future if AI timelines are sufficiently long, but the breathtaking pace (or too-fast-to-even-take-a-breath pace) of the deep learning revolution is so much faster than the pace of human generations that it's starting to look unlikely that we'll get that much time. If our genetically uplifted children would need at least twenty years to grow up to be productive alignment researchers, but unaligned AI is [on track to end the world in twenty years](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines), we would need to start having those children _now_ in order for them to make any difference at all.
-
-[It's ironic that "longtermism" got traction as the word for the "EA" cause area of benefitting the far future](https://applieddivinitystudies.com/longtermism-irony/), because the decision-relevant beliefs of most of the people who think about the far future, end up working out to extreme _short_-termism. Common-sense longtermism—a longtermism that assumed there's still going to be a world of recognizable humans in 2123—_would_ care about eugenics, and would be willing to absorb political costs today in order to fight for a saner future. The story of humanity would not have gone _better_ if Galileo had declined to publish his works for fear of the Inquisition.
-
-But if you think the only hope for there _being_ a future flows through maintaining influence over what large tech companies are doing as they build transformative AI, declining to contradict the state religion makes more sense—if you don't have _time_ to win a culture war, because you need to grab hold of the Singularity (or perform a [pivotal act](https://arbital.com/p/pivotal/) to prevent it) _now_.
-
-(I remarked to "Wilhelm" in June 2022 that DeepMind changing its Twitter avatar to a rainbow variant of their logo for Pride month was a bad sign.)
-