-Because, I did, actually, trust him. Back in 'aught-nine when _Less Wrong_ was new, we had a thread of hyperbolic ["Eliezer Yudkowsky Facts"](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) (in the style of [Chuck Norris facts](https://en.wikipedia.org/wiki/Chuck_Norris_facts)). And of course, it was a joke, but the hero-worship that made the joke funny was real. (You wouldn't make those jokes about your community college physics teacher, even if he was a decent teacher.)
-
-["Never go in against Eliezer Yudkowsky when anything is on the line."](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts?commentId=Aq9eWJmK6Liivn8ND), said one of the facts—and back then, I didn't think I would _need_ to.
-
-[Yudkowsky writes](https://twitter.com/ESYudkowsky/status/1096769579362115584):
-
-> When an epistemic hero seems to believe something crazy, you are often better off questioning "seems to believe" before questioning "crazy", and both should be questioned before shaking your head sadly about the mortal frailty of your heroes.
-
-I notice that this advice leaves out a possibility: that the "seems to believe" is a deliberate show, rather than a misperception on your part. I am left in a [weighted average of](https://www.lesswrong.com/posts/y4bkJTtG3s5d6v36k/stupidity-and-dishonesty-explain-each-other-away) shaking my head sadly about the mortal frailty of my former hero, and shaking my head in disgust at his craven duplicity. If Eliezer Yudkowsky can't _unambiguously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_.
-
--------
-
-[TODO section existential stakes, cooperation
-
- * At least, that's what I would say, if there weren't—potentially mitigating factors—or at least, reasons why being a fraud might be a good idea (since I don't want the definition of "fraud" to change depending on whether the world is at stake)
-
- * So far, I've been writing from the perspective of caring about human rationality, [the common interest of many causes](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes). Wanting the world to make sense, and expecting the "rationalists" specifically to _especially_ make sense.
-
- * But Yudkowsky wrote the Sequences as a [recursive step](/2019/Jul/the-source-of-our-power/) for explaining the need for Friendly AI ("Value Is Fragile"); everything done with an ulterior motive has to be done with a pure heart, but the Singularity has always been his no. 1 priority
-
- * Fighting for good epistemology is a long battle, and makes more sense if you have _time_ for it to pay off.
-
- * Back in the 'aughts, it looked like we had time. We had abstract high-level arguments to worry about AI, and it seemed like it was going to happen this century, but it felt like the _second_ half of the 21st century.
-
- * Now it looks like we have—less time? (The 21st century is one-fifth over!) Yudkowsky flipped out about AlphaGo and AlphaZero, and at the time, a lot of people probably weren't worried (board games are a shallow domain), but now that it's happening for "language" (GPT) and "vision" (DALL-E), a lot of people including me are feeling much more spooked (https://twitter.com/zackmdavis/status/1536364192441040896)
-
- * [Include the joke about DALL-E being the most significant news event of that week in January 2021]
-
- * [As recently as 2020](/2020/Aug/memento-mori/#if-we-even-have-enough-time) I was daydreaming about working for an embryo selection company as part of the "altruistic" (about optimizing the future, rather than about my own experiences) component of my actions—I don't feel like there's time for that anymore
-
- * If you have short timelines, and want to maintain influence over what big state-backed corporations are doing, self-censoring about contradicting the state religion makes sense. There's no time to win a culture war; we need to grab hold of the Singularity now!!
-
- * So isn't there a story here where I'm the villain for not falling in line and accepting orders? Aren't I causing damage, losing dignity points for our Earth, by criticizing Yudkowsky so harshly, when actually the rest of the world should listen to him more about the coming robot apocalypse?
-
-]
-
-> [_Perhaps_, replied the cold logic. _If the world were at stake._
->
-> _Perhaps_, echoed the other part of himself, _but that is not what was actually happening._](https://www.yudkowsky.net/other/fiction/the-sword-of-good)
-
-[TODO: social justice and defying threats
-
- * There's _no story_ in which misleading people about transgender is on Yudkowsky's critical path for shaping the intelligence explosion. _I'd_ prefer him to have free speech, but if he can't afford to be honest about things he already got right in 2009, he could just—not bring up the topic!
-
- * I can totally cooperate with censorship that doesn't actively interfere with my battle! I agree that there are plenty of times in life where you need to say "No comment." But if that's the play you want to make, you have to actually _not comment_. "20% of the ones with penises" is no "No comment"! "You're not standing in defense of truth" is not "No comment"! "The simplest and best proposal" is not "No comment"!
-
-https://twitter.com/esyudkowsky/status/1374161729073020937
-> Also: Having some things you say "no comment" to, is not at *all* the same phenomenon as being an organization that issues Pronouncements. There are a *lot* of good reasons to have "no comments" about things. Anybody who tells you otherwise has no life experience, or is lying.
-
- * I don't pick fights with Paul Christiano, because Paul Christiano doesn't take a shit on my Something to Protect, because Paul Christiano isn't trying to be a religious leader. If he has opinions about transgenderism, we don't know about them.
-
- * The cowardice is particularly puzzling in light of his timeless decision theory, which says to defy extortion.
-
- * Of course, there's a lot of naive misinterpretations of TDT that don't understand counterfactual dependence. There's a perspective that says, "We don't negotiate with terrorists, but we do appease bears", because the bear's response isn't calculated based on our response. /2019/Dec/political-science-epigrams/
-
- * You could imagine him mocking me for trying to reason this out, instead of just using honor. "That's right, I'm appealing to your honor, goddamn it!"
-
- * back in 'aught-nine, SingInst had made a point of prosecuting Tyler Emerson, citing decision theory
-
- * But the parsing of social justice as an agentic "threat" to be defied rather than a rock to be dodged does seem to line up with the fact that people punish heretics more than infidels.
-
- * But it matters where you draw the zero point: is being excluded from the coalition a "punishment" to threaten you out of bad behavior, or is being included a "reward" for good behavior?
-
- * Curtis Yarvin has compared Yudkowsky to Sabbatai Zevi (/2020/Aug/yarvin-on-less-wrong/), and I've got to say the comparison is dead-on. Sabbatai Zevi was facing much harsher coercion: his choices were to convert to Islam or be impaled https://en.wikipedia.org/wiki/Sabbatai_Zevi#Conversion_to_Islam
-
-]
-
-I like to imagine that they have a saying out of dath ilan: once is happenstance; twice is coincidence; _three times is hostile optimization_.
-
-I could forgive him for taking a shit on d4 of my chessboard (["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228)).
-
-I could even forgive him for subsequently taking a shit on e4 of my chessboard (["you're not standing in defense of truth if you insist on a word [...]"](https://twitter.com/ESYudkowsky/status/1067198993485058048)) as long as he wiped most of the shit off afterwards (["you are being the bad guy if you try to shut down that conversation by saying that 'I can define the word "woman" any way I want'"](https://www.facebook.com/yudkowsky/posts/10158853851009228)), even though, really, I would have expected someone so smart to take a hint after the incident on d4.
-
-But if he's _then_ going to take a shit on c3 of my chessboard (["important things [...] would be all the things I've read [...] from human beings who are people—describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate", "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he"'"](https://www.facebook.com/yudkowsky/posts/10159421750419228)), the "playing on a different chessboard, no harm intended" excuse loses its credibility. The turd on c3 is a pretty big likelihood ratio! (That is, I'm more likely to observe a turd on c3 in worlds where Yudkowsky _is_ playing my chessboard and wants me to lose, than in worlds where he's playing on a different chessboard and just _happened_ to take a shit there, by coincidence.)
-
------
-
-In June 2021, MIRI Executive Director Nate Soares [wrote a Twitter thread arguing that](https://twitter.com/So8res/status/1401670792409014273) "[t]he definitional gynmastics required to believe that dolphins aren't fish are staggering", which [Yudkowsky retweeted](https://archive.is/Ecsca).[^not-endorsements]
-
-[^not-endorsements]: In general, retweets are not necessarily endorsements—sometimes people just want to draw attention to some content without further comment or implied approval—but I was inclined to read this instance as implying approval, partially because this doesn't seem like the kind of thing someone would retweet for attention-without-approval, and partially because of the working relationship between Soares and Yudkowsky.
-
-Soares's points seemed cribbed from part I of Scott Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), which post I had just dedicated _more than three years of my life_ to rebutting in [increasing](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) [technical](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [detail](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), _specifically using dolphins as my central example_—which Soares didn't necessarily have any reason to have known about, but Yudkowsky (who retweeted Soares) definitely did. (Soares's [specific reference to the Book of Jonah](https://twitter.com/So8res/status/1401670796997660675) made it seem particularly unlikely that he had invented the argument independently from Alexander.) [One of the replies (which Soares Liked) pointed out the similar _Slate Star Codex_ article](https://twitter.com/max_sixty/status/1401688892940509185), [as did](https://twitter.com/NisanVile/status/1401684128450367489) [a couple of](https://twitter.com/roblogic_/status/1401699930293432321) quote-Tweet discussions.
-
-The elephant in my brain took this as another occasion to _flip out_. I didn't _immediately_ see anything for me to overtly object to in the thread itself—[I readily conceded that](https://twitter.com/zackmdavis/status/1402073131276066821) there was nothing necessarily wrong with wanting to use the symbol "fish" to refer to the cluster of similarities induced by convergent evolution to the aquatic habitat rather than the cluster of similarities induced by phylogenetic relatedness—but in the context of our subculture's history, I read this as Soares and Yudkowsky implicitly lending more legitimacy to "... Not Man for the Categories", which was _hostile to my interests_. Was I paranoid to read this as a potential [dogwhistle](https://en.wikipedia.org/wiki/Dog_whistle_(politics))? It just seemed _implausible_ that Soares would be Tweeting that dolphins are fish in the counterfactual in which "... Not Man for the Categories" had never been published.
-
-After a little more thought, I decided the thread _was_ overtly objectionable, and [quickly wrote up a reply on _Less Wrong_](https://www.lesswrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins): Soares wasn't merely advocating for a "swimmy animals" sense of the word _fish_ to become more accepted usage, but specifically deriding phylogenetic definitions as unmotivated for everyday use ("definitional gynmastics [_sic_]"!), and _that_ was wrong. It's true that most language users don't directly care about evolutionary relatedness, but [words aren't identical with their definitions](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels). Genetics is at the root of the causal graph underlying all other features of an organism; creatures that are more closely evolutionarily related are more similar _in general_. Classifying things by evolutionary lineage isn't an arbitrary æsthetic whim by people who care about genealogy for no reason. We need the natural category of "mammals (including marine mammals)" to make sense of how dolphins are warm-blooded, breathe air, and nurse their live-born young, and the natural category of "finned cold-blooded vertebrate gill-breathing swimmy animals (which excludes marine mammals)" is also something that it's reasonable to have a word for.