-Scott Alexander chose Feelings, but I can't really hold that against him, because Scott is [very explicit about only speaking in the capacity of some guy with a blog](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/). You can tell from his writings that he never wanted to be a religious leader; it just happened to him by accident because he writes faster than everyone else. I like Scott. Scott is great. I feel sad that such a large fraction of my interactions with him over the years have taken such an adversarial tone.
-
-Eliezer Yudkowsky ... did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he consciously knows to be unambiguously false. And the reason I can hold it against _him_ is because Eliezer Yudkowsky does not identify as just some guy with a blog. Eliezer Yudkowsky is _absolutely_ trying to be a religious leader. He markets himself as a master of the hidden Bayesian structure of cognition, who ["aspires to make sure [his] departures from perfection aren't noticeable to others"](https://twitter.com/ESYudkowsky/status/1384671335146692608).
-
-In making such boasts, I think Yudkowsky is opting in to being held to higher standards than other mortals. If Scott Alexander gets something wrong when I was trusting him to be right, that's disappointing, but I'm not the victim of false advertising, because Scott Alexander doesn't _claim_ to be anything more than some guy with a blog. If I trusted him more than that, that's on me.
-
-If Eliezer Yudkowsky gets something wrong when I was trusting him to be right, _and_ refuses to acknowledge corrections (in the absence of an unsustainable 21-month nagging campaign) _and_ keeps inventing new galaxy-brained ways to be wrong in the service of his political agenda of being seen to agree with Stalin without technically lying, then I think I _am_ the victim of false advertising. His marketing bluster was optimized to trick people like me into trusting him, even if my being _dumb enough to believe him_ is on me.
-
-Because, I did, actually, trust him. Back in 'aught-nine when _Less Wrong_ was new, we had a thread of hyperbolic ["Eliezer Yudkowsky Facts"](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) (in the style of [Chuck Norris facts](https://en.wikipedia.org/wiki/Chuck_Norris_facts)). And of course, it was a joke, but the hero-worship that made the joke funny was real. (You wouldn't make those jokes about your community college physics teacher, even if he was a good teacher.)
-
-["Never go in against Eliezer Yudkowsky when anything is on the line."](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts?commentId=Aq9eWJmK6Liivn8ND), said one of the facts—and back then, I didn't think I would _need_ to.
-
-[Yudkowsky writes](https://twitter.com/ESYudkowsky/status/1096769579362115584):
-
-> When an epistemic hero seems to believe something crazy, you are often better off questioning "seems to believe" before questioning "crazy", and both should be questioned before shaking your head sadly about the mortal frailty of your heroes.
-
-I notice that this advice leaves out a possibility: that the "seems to believe" is a deliberate show (judged to be personally prudent and not community-harmful), rather than a misperception on your part. I am left in a [weighted average of](https://www.lesswrong.com/posts/y4bkJTtG3s5d6v36k/stupidity-and-dishonesty-explain-each-other-away) shaking my head sadly about the mortal frailty of my former hero, and shaking my head in disgust at his craven duplicity. If Eliezer Yudkowsky can't _unambiguously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_.
-
--------
-
-... except, I would be remiss to condemn Yudkowsky without discussing—potentially mitigating factors. (I don't want to say that whether someone is a fraud should depend on whether there are mitigating factors—rather, I should discuss potential reasons why being a fraud might be the least-bad choice, when faced with a sufficiently desperate situation.)
-
-So far, I've been writing from the perspective of caring (and expecting Yudkowsky to care) about human rationality as a cause in its own right—about wanting to _make sense_, and wanting to live in a Society that made sense, for its own sake, and not as a convergently instrumental subgoal of saving the world.
-
-That's pretty much always where I've been at. I _never_ wanted to save the world. I got sucked into this robot cult because Yudkowsky's philosophy-of-science writing was just that good. I did do a little bit of work for the Singularity Institute back in the day (an informal internship in 'aught-nine, some data-entry-like work manually adding Previous/Next links to the Sequences, designing several PowerPoint presentations for Anna, writing some Python scripts to organize their donor database), but that was because it was my social tribe and I had connections. To the extent that I took at all seriously the whole save/destroy/take-over the world part (about how we needed to encode all of human morality into a recursively self-improving artificial intelligence to determine our entire future light cone until the end of time), I was scared rather than enthusiastic.
-
-Okay, being scared was entirely appropriate, but what I mean is that I was scared, and concluded that shaping the Singularity was _not my problem_, as contrasted to being scared, then facing up to the responsibility anyway. After a 2013 sleep-deprivation-induced psychotic episode which [featured](http://zackmdavis.net/blog/2013/03/religious/) [futurist](http://zackmdavis.net/blog/2013/04/prodrome/)-[themed](http://zackmdavis.net/blog/2013/05/relativity/) [delusions](http://zackmdavis.net/blog/2013/05/relevance/), I wrote to Anna, Michael, and some MIRI employees who had been in my contacts for occasional contract work, that "my current plan [was] to just try to forget about _Less Wrong_/MIRI for a long while, maybe at least a year, not because it isn't technically the most important thing in the world, but because I'm not emotionally stable enough [to] think about this stuff anymore" (Subject: "to whom it may concern"). When I got a real programming job and established an income for myself, I [donated to CfAR rather than MIRI](http://zackmdavis.net/blog/2016/12/philanthropy-scorecard-through-2016/), because public rationality was something I could be unambiguously enthusiastic about, and doing anything about AI was not.
-
-At the time, it seemed fine for the altruistically-focused fraction of my efforts to focus on rationality, and to leave the save/destroy/take-over the world stuff to other, less crazy people, in accordance with the principle of comparative advantage. Yudkowsky had written his Sequences as a dependency for explaining [the need for friendly AI](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile), ["gambl[ing] only upon the portion of the activism that would flow to [his] own cause"](https://www.lesswrong.com/posts/9jF4zbZqz6DydJ5En/the-end-of-sequences), but rationality was supposed to be the [common interest of many causes](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes). Even if I wasn't working for or donating to MIRI, I was still _helping_, a good citizen according to the morality of my tribe.
-
-But fighting for public epistemology is a long battle; it makes more sense if you have _time_ for it to pay off. Back in the late 'aughts and early 'tens, it looked like we had time. We had these abstract philosophical arguments for worrying about AI, but no one really talked about _timelines_. I believed the Singularity was going to happen in the 21st century, but it felt like something to expect in the _second_ half of the 21st century.
-
-Now it looks like we have—less time? Not just tautologically because time has passed (the 21st century is one-fifth over—closer to a quarter over), but because of new information from the visible results of the deep learning revolution during that time. Yudkowsky seemed particularly spooked by AlphaGo and AlphaZero in 2016–2017.
-
-[TODO: specifically, AlphaGo seemed "deeper" than minimax search so you shouldn't dismiss it as "meh, games", the way it rocketed past human level from self-play https://twitter.com/zackmdavis/status/1536364192441040896]
-
-My AlphaGo moment was 5 January 2021, when OpenAI released [DALL-E](https://openai.com/blog/dall-e/) (by far the most significant news story of that week in January 2021). Previous AI milestones, like GANs for a _fixed_ image class, were easier to dismiss as clever statistical tricks: given thousands and thousands of photographs of people's faces, I wasn't surprised that some clever algorithm could "learn the distribution" and spit out another sample; I don't know the _details_, but it didn't seem like scary "understanding." DALL-E's ability to _combine_ concepts—responding to "an armchair in the shape of an avocado" as a novel text prompt, rather than already having thousands of avocado-chairs and just spitting out another one of those—viscerally seemed more like "real" creativity to me, something qualitatively new and scary.
-
-[As recently as 2020, I had been daydreaming about](/2020/Aug/memento-mori/#if-we-even-have-enough-time) working at an embryo selection company (if they needed programmers—but everyone needs programmers, these days), and having that be my altruistic[^eugenics-altruism] contribution to the great common task. Existing companies working on embryo selection [boringly](https://archive.is/tXNbU) [market](https://archive.is/HwokV) their services as being about promoting health, but [polygenic scores should work as well for maximizing IQ as they do for minimizing cancer risk](https://www.gwern.net/Embryo-selection).[^polygenic-score] Making smarter people would be a transhumanist good in its own right, and [having smarter biological humans around at the time of our civilization's AI transition](https://www.lesswrong.com/posts/2KNN9WPcyto7QH9pi/this-failing-earth) would give us a better shot at having it go well.[^ai-transition-go-well]
-
-[^eugenics-altruism]: If it seems odd to frame _eugenics_ as "altruistic", translate it as a term of art referring to the component of my actions dedicated to optimizing the world at large, as contrasted to "selfishly" optimizing my own experiences.
-
-[^polygenic-score]: Better, actually: [the heritability of IQ is around 0.65](https://en.wikipedia.org/wiki/Heritability_of_IQ), as contrasted to [about 0.33 for cancer risk](https://pubmed.ncbi.nlm.nih.gov/26746459/).
-
-[^ai-transition-go-well]: Natural selection eventually developed intelligent creatures, but evolution didn't know what it was doing and was not foresightfully steering the outcome in any particular direction. The more humans know what we're doing, the more our will determines the fate of the cosmos; the less we know what we're doing, the more our civilization is just another primordial soup for the next evolutionary transition.
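-
-As a concreteness check on the "should work as well for maximizing IQ" claim, here is a minimal Monte Carlo sketch of the naive version of the selection calculation that the linked Gwern analysis does far more carefully; every parameter value here is an illustrative assumption of mine, not a figure from that analysis.
-
-```python
-# Naive back-of-the-envelope for embryo selection: pick the best of N embryos
-# by a noisy polygenic score and see how much of the trait you gain on average.
-# Illustrative assumptions only; the linked analysis adds corrections
-# (within-family variance, embryo viability, cost) that shrink the gain.
-import random
-import statistics
-
-def selection_gain(n_embryos, variance_explained, trait_sd=15.0, trials=20_000):
-    """Average trait advantage (in trait units, e.g. IQ points) of picking the
-    embryo with the highest polygenic score versus picking one at random."""
-    gains = []
-    for _ in range(trials):
-        traits = [random.gauss(0, 1) for _ in range(n_embryos)]
-        # The score is correlated with the trait: it "explains" the stated
-        # fraction of trait variance; the rest of its variance is noise.
-        scores = [
-            (variance_explained ** 0.5) * t
-            + ((1 - variance_explained) ** 0.5) * random.gauss(0, 1)
-            for t in traits
-        ]
-        best = max(range(n_embryos), key=lambda i: scores[i])
-        gains.append(traits[best] * trait_sd)
-    return statistics.mean(gains)
-
-# e.g., ten embryos and a score explaining 10% of trait variance:
-print(round(selection_gain(10, 0.10), 1))
-```
-
-The same machinery runs whether `variance_explained` comes from an IQ score or a cancer-liability score; that interchangeability is the sense in which the method "works as well" for one trait as the other.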
-
-But pushing on embryo selection only makes sense as an intervention for optimizing the future if AI timelines are sufficiently long, and the breathtaking pace (or too-fast-to-even-take-a-breath pace) of the deep learning revolution, so much faster than the pace of human generations, is making it look unlikely that we'll get that much time. If our genetically uplifted children would need at least twenty years to grow up to be productive alignment researchers, but unaligned AI is on track to end the world in twenty years, we would need to start having those children _now_ in order for them to make any difference at all.
-
-[It's ironic that "longtermism" got some traction as the word for the "EA" cause of benefitting the far future](https://applieddivinitystudies.com/longtermism-irony/), because the decision-relevant beliefs of most of the people who think about the far future work out to extreme short-termism.
-
-Common-sense longtermism—a longtermism that assumes there's still going to be a world of recognizable humans in 2123—_would_ care about eugenics, and would be willing to absorb political costs today in order to fight for a saner future. The story of humanity would not have gone _better_ if Galileo had declined to publish his theories for fear of the Inquisition.
-
-But if you think the only hope for there _being_ a future flows through maintaining influence over what big state-backed corporations are doing, declining to contradict the state religion makes more sense—if you don't have _time_ to win a culture war, because you need to grab hold of the Singularity (or perform a [pivotal act](https://arbital.com/p/pivotal/) to prevent it) _now_.
-
-[...]
-
-> [_Perhaps_, replied the cold logic. _If the world were at stake._
->
-> _Perhaps_, echoed the other part of himself, _but that is not what was actually happening._](https://www.yudkowsky.net/other/fiction/the-sword-of-good)
-
-[TODO: social justice and defying threats
-
- * There's _no story_ in which misleading people about transgender is on Yudkowsky's critical path for shaping the intelligence explosion. _I'd_ prefer him to have free speech, but if he can't afford to be honest about things he already got right in 2009, he could just—not bring up the topic!
-
-https://twitter.com/esyudkowsky/status/1374161729073020937
-> Also: Having some things you say "no comment" to, is not at *all* the same phenomenon as being an organization that issues Pronouncements. There are a *lot* of good reasons to have "no comments" about things. Anybody who tells you otherwise has no life experience, or is lying.
-
- * I can totally cooperate with censorship that doesn't actively interfere with my battle! I agree that there are plenty of times in life where you need to say "No comment." But if that's the play you want to make, you have to actually _not comment_. "20% of the ones with penises" is no "No comment"! "You're not standing in defense of truth" is not "No comment"! "The simplest and best proposal" is not "No comment"!
-
- * I don't pick fights with Paul Christiano, because Paul Christiano doesn't take a shit on my Something to Protect, because Paul Christiano isn't trying to be a religious leader. If he has opinions about transgenderism, we don't know about them.
-
- * The cowardice is particularly puzzling in light of his timeless decision theory, which says to defy extortion.
-
- * Of course, there are a lot of naive misinterpretations of TDT that don't understand counterfactual dependence. There's a perspective that says, "We don't negotiate with terrorists, but we do appease bears", because the bear's response isn't calculated based on our response. /2019/Dec/political-science-epigrams/
-
- * You could imagine him mocking me for trying to reason this out, instead of just using honor. "That's right, I'm appealing to your honor, goddamn it!"
-
- * back in 'aught-nine, SingInst had made a point of prosecuting Tyler Emerson, citing decision theory
-
- * But the parsing of social justice as an agentic "threat" to be avoided rather than a rock to be dodged does seem to line up with the fact that people punish heretics more than infidels.
-
- * But it matters where you draw the zero point: is being excluded from the coalition a "punishment" to threaten you out of bad behavior, or is being included a "reward" for good behavior?
-
- * Curtis Yarvin has compared Yudkowsky to Sabbatai Zevi (/2020/Aug/yarvin-on-less-wrong/), and I've got to say the comparison is dead-on. Sabbatai Zevi was facing much harsher coercion: his choices were to convert to Islam or be impaled https://en.wikipedia.org/wiki/Sabbatai_Zevi#Conversion_to_Islam
-
-]
-
-I like to imagine that they have a saying out of dath ilan: once is happenstance; twice is coincidence; _three times is hostile optimization_.
-
-I could forgive him for taking a shit on d4 of my chessboard (["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228)).
-
-I could even forgive him for subsequently taking a shit on e4 of my chessboard (["you're not standing in defense of truth if you insist on a word [...]"](https://twitter.com/ESYudkowsky/status/1067198993485058048)) as long as he wiped most of the shit off afterwards (["you are being the bad guy if you try to shut down that conversation by saying that 'I can define the word "woman" any way I want'"](https://www.facebook.com/yudkowsky/posts/10158853851009228)), even though, really, I would have expected someone so smart to take a hint after the incident on d4.
-
-But if he's _then_ going to take a shit on c3 of my chessboard (["important things [...] would be all the things I've read [...] from human beings who are people—describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate", "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he"'"](https://www.facebook.com/yudkowsky/posts/10159421750419228)), the "playing on a different chessboard, no harm intended" excuse loses its credibility. The turd on c3 is a pretty big likelihood ratio! (That is, I'm more likely to observe a turd on c3 in worlds where Yudkowsky _is_ playing my chessboard and wants me to lose, than in worlds where he's playing on a different chessboard and just _happened_ to take a shit there, by coincidence.)
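-
-To spell out the odds arithmetic behind that parenthetical, here is a toy sketch in the odds form of Bayes's theorem; the prior and the individual likelihood ratios are numbers I made up purely for illustration, not an attempt to quantify anyone's actual behavior.
-
-```python
-# Odds-form Bayesian update for the chessboard argument: multiply the prior
-# odds of "hostile optimization" : "innocent coincidence" by each observation's
-# likelihood ratio. All numbers here are made-up illustrations.
-def posterior_odds(prior_odds, likelihood_ratios):
-    """prior_odds and the return value are both odds of hostile : coincidence."""
-    odds = prior_odds
-    for lr in likelihood_ratios:
-        odds *= lr  # lr = P(observation | hostile) / P(observation | coincidence)
-    return odds
-
-# Start out trusting him (1:9 against hostility), then observe d4, e4, and c3,
-# each only moderately diagnostic on its own:
-print(posterior_odds(1 / 9, [2.0, 3.0, 5.0]))  # ≈ 3.3, i.e., roughly 3:1 the other way
-```
-
-A single sufficiently diagnostic observation can dominate the product even when each earlier observation was individually excusable; that is the work a "pretty big likelihood ratio" does in this kind of argument.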
-
------
-
-In June 2021, MIRI Executive Director Nate Soares [wrote a Twitter thread arguing that](https://twitter.com/So8res/status/1401670792409014273) "[t]he definitional gynmastics required to believe that dolphins aren't fish are staggering", which [Yudkowsky retweeted](https://archive.is/Ecsca).[^not-endorsements]
-
-[^not-endorsements]: In general, retweets are not necessarily endorsements—sometimes people just want to draw attention to some content without further comment or implied approval—but I was inclined to read this instance as implying approval, partially because this doesn't seem like the kind of thing someone would retweet for attention-without-approval, and partially because of the working relationship between Soares and Yudkowsky.
-
-Soares's points seemed cribbed from part I of Scott Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), which post I had just dedicated _more than three years of my life_ to rebutting in [increasing](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) [technical](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [detail](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), _specifically using dolphins as my central example_—which Soares didn't necessarily have any reason to have known about, but Yudkowsky (who retweeted Soares) definitely did. (Soares's [specific reference to the Book of Jonah](https://twitter.com/So8res/status/1401670796997660675) made it seem particularly unlikely that he had invented the argument independently from Alexander.) [One of the replies (which Soares Liked) pointed out the similar _Slate Star Codex_ article](https://twitter.com/max_sixty/status/1401688892940509185), [as did](https://twitter.com/NisanVile/status/1401684128450367489) [a couple of](https://twitter.com/roblogic_/status/1401699930293432321) quote-Tweet discussions.
-
-The elephant in my brain took this as another occasion to _flip out_. I didn't _immediately_ see anything for me to overtly object to in the thread itself—[I readily conceded that](https://twitter.com/zackmdavis/status/1402073131276066821) there was nothing necessarily wrong with wanting to use the symbol "fish" to refer to the cluster of similarities induced by convergent evolution to the aquatic habitat rather than the cluster of similarities induced by phylogenetic relatedness—but in the context of our subculture's history, I read this as Soares and Yudkowsky implicitly lending more legitimacy to "... Not Man for the Categories", which was _hostile to my interests_. Was I paranoid to read this as a potential [dogwhistle](https://en.wikipedia.org/wiki/Dog_whistle_(politics))? It just seemed _implausible_ that Soares would be Tweeting that dolphins are fish in the counterfactual in which "... Not Man for the Categories" had never been published.
-
-After a little more thought, I decided the thread _was_ overtly objectionable, and [quickly wrote up a reply on _Less Wrong_](https://www.lesswrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins): Soares wasn't merely advocating for a "swimmy animals" sense of the word _fish_ to become more accepted usage, but specifically deriding phylogenetic definitions as unmotivated for everyday use ("definitional gynmastics [_sic_]"!), and _that_ was wrong. It's true that most language users don't directly care about evolutionary relatedness, but [words aren't identical with their definitions](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels). Genetics is at the root of the causal graph underlying all other features of an organism; creatures that are more closely evolutionarily related are more similar _in general_. Classifying things by evolutionary lineage isn't an arbitrary æsthetic whim by people who care about genealogy for no reason. We need the natural category of "mammals (including marine mammals)" to make sense of how dolphins are warm-blooded, breathe air, and nurse their live-born young, and the natural category of "finned cold-blooded vertebrate gill-breathing swimmy animals (which excludes marine mammals)" is also something that it's reasonable to have a word for.
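-
-Here is a minimal sketch of that "categories license predictions about unobserved features" point in runnable form; the feature probabilities are rough stipulations of mine for illustration, not measured frequencies.
-
-```python
-# A category earns its keep insofar as knowing something's membership lets you
-# predict its other features. Probabilities below are stipulated for illustration.
-CATEGORY_FEATURES = {
-    # P(feature | category)
-    "mammal (including marine mammals)": {
-        "warm_blooded": 0.99, "breathes_air": 0.99, "nurses_live_young": 0.95,
-    },
-    "finned cold-blooded gill-breathing swimmy animal": {
-        "warm_blooded": 0.02, "breathes_air": 0.02, "nurses_live_young": 0.01,
-    },
-}
-
-DOLPHIN = {"warm_blooded": True, "breathes_air": True, "nurses_live_young": True}
-
-def fit(category, observations):
-    """How well a category predicts the observed features (naive independence)."""
-    p = 1.0
-    for feature, present in observations.items():
-        prob = CATEGORY_FEATURES[category][feature]
-        p *= prob if present else (1 - prob)
-    return p
-
-for name in CATEGORY_FEATURES:
-    print(name, fit(name, DOLPHIN))
-# The mammal cluster predicts the dolphin's physiology orders of magnitude better;
-# the swimmy-animal cluster earns its keep by predicting a different bundle of
-# features (fins, gills, cold blood) for the creatures it does include.
-```
-
-(The naive-independence multiplication is the crudest possible cluster model, but it's enough to show that "which category?" is a question whose answer depends on which features you need to predict.)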
-
-(Somehow, it felt appropriate to use a quote from Arthur Jensen's ["How Much Can We Boost IQ and Scholastic Achievement?"](https://en.wikipedia.org/wiki/How_Much_Can_We_Boost_IQ_and_Scholastic_Achievement%3F) as an epigraph.)
-
-[TODO: dolphin war con'td
-
- * Nate conceded all of my points (https://twitter.com/So8res/status/1402888263593959433), said the thread was in jest ("shitposting"), and said he was open to arguments that he was making a mistake (https://twitter.com/So8res/status/1402889976438611968), but still seemed to think his shitposting was based
-
- * I got frustrated and lashed out; "open to arguments that he was making a mistake" felt fake to me; rats are good at paying lip service to humility, but I'd lost faith in getting them to change their behavior, like not sending PageRank to "... Not Man for the Categories"
-
- * Nate wrote a longer reply on Less Wrong the next morning
-
- * I pointed out that his followup thread lamented that people hadn't read "A Human's Guide to Words", but that Sequence _specifically_ used the example of dolphins. What changed?!?
-
- * [Summarize Nate's account of his story], phylogeny not having the courage of its convictions
-
- * Twitter exchange where he said he wasn't sure I would count his self-report as evidence; I said it totally counts
-
- * I overheated. This was an objectively dumb play. (If I had cooled down and just written up my reply, I might have gotten real engagement and a resolution, but I blew it.) I apologized a few days later.