That wasn't the end of the story, and the rest of it does not have such a relatively happy ending.
### The _New York Times_'s Other Shoe Drops (February 2021)

On 13 February 2021, ["Silicon Valley's Safe Space"](https://archive.ph/zW6oX), the _New York Times_ piece on _Slate Star Codex_, came out. It was ... pretty lame? (_Just_ lame, not a masterfully vicious hit piece.) Cade Metz did a mediocre job of explaining what our robot cult is about, while [pushing hard on the subtext](https://scottaaronson.blog/?p=5310) to make us look racist and sexist, occasionally resorting to odd constructions that were surprising to read from someone who had been a professional writer for decades. ("It was nominally a blog", Metz wrote of _Slate Star Codex_. ["Nominally"](https://en.wiktionary.org/wiki/nominally)?) The article's claim that Alexander "wrote in a wordy, often roundabout way that left many wondering what he really believed" seemed more like a critique of the many's reading comprehension than of Alexander's writing.
Although that poor reading comprehension may have served a protective function for Scott. A mob that attacks over things that look bad when quoted out of context can't attack you over the meaning of "wordy, often roundabout" text that they can't read. The _Times_ article included this sleazy guilt-by-association attempt:
[^murray-caveat]: For example, the introductory summary for Ch. 13 of _The Bell Curve_, "Ethnic Differences in Cognitive Ability", states: "Even if the differences between races were entirely genetic (which they surely are not), it should make no practical difference in how individuals deal with each other."
### The Politics of the Apolitical

Why do I [keep](/2023/Nov/if-clarity-seems-like-death-to-them/#tragedy-of-recursive-silencing) [bringing](/2023/Nov/if-clarity-seems-like-death-to-them/#literally-a-white-supremacist) up the claim that "rationalist" leaders almost certainly believe in cognitive race differences (even if it's hard to get them to publicly admit it in a form that's easy to selectively quote in front of _New York Times_ readers)?
It's because one of the things I noticed, while trying to make sense of why my entire social circle suddenly decided in 2016 that guys like me could become women by saying so, is that in the conflict between the "rationalists" and mainstream progressives, the defensive strategy of the "rationalists" is one of deception.
I view this conflict as entirely incidental, something that [would happen in some form in any place and time](https://www.lesswrong.com/posts/cKrgy7hLdszkse2pq/archimedes-s-chronophone), rather than being specific to American politics or "the left". In a Christian theocracy, our analogues would get in trouble for beliefs about evolution; in the old Soviet Union, our analogues would get in trouble for [thinking about market economics](https://slatestarcodex.com/2014/09/24/book-review-red-plenty/) (as a positive [technical](https://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics#Proof_of_the_first_fundamental_theorem) [discipline](https://www.lesswrong.com/posts/Gk8Dvynrr9FWBztD4/what-s-a-market) adjacent to game theory, not yoked to a particular normative agenda).[^logical-induction]
[^logical-induction]: I've wondered how hard it would have been to come up with MIRI's [logical induction result](https://arxiv.org/abs/1609.03543) (which describes an asymptotic algorithm for estimating the probabilities of mathematical truths in terms of a betting market composed of increasingly complex traders) in the Soviet Union.
Incidental or not, the conflict is real, and everyone smart knows it—even if it's not easy to _prove_ that everyone smart knows it, because everyone smart is very careful about what they say in public. (I am not smart.)
It works surprisingly well. I fear my love of Truth is not so great that if I didn't have Something to Protect, I would have happily participated in the cover-up.
As it happens, in our world, the defensive cover-up consists of _throwing me under the bus_. Facing censure from the progressive egregore for being insufficiently progressive, we can't defend ourselves ideologically. (We think we're egalitarians, but progressives won't buy that because we like markets too much.) We can't point to our racial diversity. (Mostly white if not Jewish, with a handful of East and South Asians, exactly as you'd expect from chapters 13 and 14 of _The Bell Curve_.) [Subjectively](https://en.wikipedia.org/wiki/Availability_heuristic), I felt like the sex balance got a little better after we hybridized with Tumblr and Effective Altruism (as [contrasted with the old days](/2017/Dec/a-common-misunderstanding-or-the-spirit-of-the-staircase-24-january-2009/)) but survey data doesn't unambiguously back this up.[^survey-data]
[^survey-data]: We go from 89.2% male in the [2011 _Less Wrong_ survey](https://www.lesswrong.com/posts/HAEPbGaMygJq8L59k/2011-survey-results) to a virtually unchanged 88.7% male on the [2020 _Slate Star Codex_ survey](https://slatestarcodex.com/2020/01/20/ssc-survey-results-2020/)—although the [2020 EA survey](https://forum.effectivealtruism.org/posts/ThdR8FzcfA8wckTJi/ea-survey-2020-demographics) says only 71% male, so it depends on how you draw the category boundaries of "we."
But this being the case, _I have no reason to participate in the cover-up_. What's in it for me? Why should I defend my native subculture from external attack, if the defense preparations themselves have already rendered it uninhabitable to me?
### A Leaked Email Non-Scandal (February 2021)

On 17 February 2021, Topher Brennan, disapproving of the community's deceptive defense against the _Times_, [claimed that](https://web.archive.org/web/20210217195335/https://twitter.com/tophertbrennan/status/1362108632070905857) Scott Alexander "isn't being honest about his history with the far-right", and published [an email he had received from Scott in February 2014](https://emilkirkegaard.dk/en/2021/02/backstabber-brennan-knifes-scott-alexander-with-2014-email/) on what Scott thought some neoreactionaries were getting importantly right.

I think that to people who have read _and understood_ Alexander's work, there is nothing surprising or scandalous about the contents of the email. He said that biologically mediated group differences are probably real and that neoreactionaries were the only people discussing the object-level hypotheses or the meta-level question of why our Society's intelligentsia is obfuscating the matter. He said that reactionaries as a whole generate a lot of garbage but that he trusted himself to sift through the noise and extract the novel insights. The email contains some details that Alexander hadn't blogged about—most notably the section headed "My behavior is the most appropriate response to these facts", explaining his social strategizing _vis-à-vis_ the neoreactionaries and his own popularity. But again, none of it is surprising if you know Scott from his writing.
I think the main reason someone _would_ consider the email a scandalous revelation is if they hadn't read _Slate Star Codex_ that deeply—if their picture of Scott Alexander as a political writer was "that guy who's so committed to charitable discourse that he [wrote up an explanation of what _reactionaries_ (of all people) believe](https://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/)—and then [turned around and wrote up the definitive explanation of why they're totally wrong and you shouldn't pay them any attention](https://slatestarcodex.com/2013/10/20/the-anti-reactionary-faq/)." As a first approximation, it's not a terrible picture. But what it misses—what _Scott_ knows—is that charity isn't about putting on a show of superficially respecting your ideological opponent before concluding (of course) that they're wrong. Charity is about seeing what the other guy is getting _right_.
(Why!? Why reopen the conversation, from the perspective of his chessboard? Wouldn't it be easier to just stop digging? Did my highly-Liked Facebook comment and Twitter barb about him lying by implicature temporarily bring my concerns to the top of his attention, despite the fact that I'm generally not that important?)
### Reasons Someone Does Not Like to Be Tossed Into a Male Bucket or Female Bucket

I eventually explained what was wrong with Yudkowsky's new arguments at the length of 12,000 words in March 2022's ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/),[^challenges-title] but that post focused on the object-level arguments; I have more to say here (that I decided to cut from "Challenges") about the meta-level political context. The February 2021 post on pronouns is a fascinating document, in its own way—a penetrating case study on the effects of politics on a formerly great mind.
[^challenges-title]: The title is an allusion to Yudkowsky's ["Challenges to Christiano's Capability Amplification Proposal"](https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal).
I agree that a language convention in which pronouns map to hair color doesn't seem great. The people in this world should probably coordinate on switching to a better convention, if they can figure out how.
But taking this convention as given, a demand to be referred to as having a hair color _that one does not have_ seems outrageous to me!
It makes sense to object to the convention forcing a binary choice in the "halfway between two central points" case. That's an example of genuine nuance brought on by a genuine complication to a system that _falsely_ assumes discrete hair colors.
But extending that to the "would get hair surgery if it were safer" case is absurd. No one treats _conditional plans assuming speculative future advances in medical technology_ the same as actual plans. I don't think this case calls for any complicated, nuanced position, and I don't see why Eliezer Yudkowsky would suggest that it would, unless the real motive is obfuscation—unless, at some level, Eliezer Yudkowsky doesn't expect his followers to deal with facts?
### It Matters Whether People's Beliefs About Themselves Are Actually True

Maybe the problem is easier to see in the context of a non-gender example. My _previous_ [hopeless ideological war](/2020/Feb/if-in-some-smothering-dreams-you-too-could-pace/) was [against the conflation of _schooling_ and _education_](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/): I hated being tossed into the Student Bucket, as it would be assigned by my school course transcript, or perhaps at all.
I sometimes describe myself as mildly "gender dysphoric", because our culture doesn't have better widely understood vocabulary for my [beautiful pure sacred self-identity thing](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#beautiful-pure-sacred-self-identity). But if we're talking about suffering and emotional distress, my "student dysphoria" was vastly worse than any "gender dysphoria" I've ever felt.
It would seem that in the current year, that culture is dead—or if it has any remaining practitioners, they do not include Eliezer Yudkowsky.
### A Filter Affecting Your Evidence

At this point, some readers might protest that I'm being too uncharitable in harping on the "not liking to be tossed into a [...] Bucket" paragraph. The same post also explicitly says that "[i]t's not that no truth-bearing propositions about these issues can possibly exist." I agree that there are some interpretations of "not lik[ing] to be tossed into a Male Bucket or Female Bucket" that make sense, even though biological sex denialism does not make sense. Given that the author is Eliezer Yudkowsky, should I not give him the benefit of the doubt and assume that he meant to communicate the reading that does make sense, rather than the reading that doesn't make sense?
I reply: _given that the author is Eliezer Yudkowsky_—no, obviously not. I have been ["trained in a theory of social deception that says that people can arrange reasons, excuses, for anything"](https://www.glowfic.com/replies/1820866#reply-1820866), such that it's informative ["to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited."](http://www.hpmor.com/chapter/47) Yudkowsky is just too talented a writer for me to excuse his words as accidentally unclear writing. Where the text is ambiguous about whether biological sex is a real thing that people should be able to talk about despite someone's "not lik[ing] to be tossed into a Male Bucket or Female Bucket", I think it's deliberately ambiguous.
If Yudkowsky actually possessed (and felt motivated to use) the "ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking", it would be obvious to him that "Gendered Pronouns for Everyone and Asking To Leave the System Is Lying" isn't the hill anyone would care about dying on if it weren't a Schelling point. A lot of TERF-adjacent folk would be overjoyed to concede the (boring, insubstantial) matter of pronouns as a trivial courtesy if it meant getting to address their real concerns of "Biological Sex Actually Exists" and ["Biological Sex Cannot Be Changed With Existing or Foreseeable Technology"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) and "Biological Sex Is Sometimes More Relevant Than Subjective Gender Identity." The reason so many of them are inclined to stand their ground and not even offer the trivial courtesy of pronouns is because they suspect, correctly, that pronouns are being used as a rhetorical wedge to keep people from talking or thinking about sex.
### The Stated Reasons Not Being the Real Reasons Is a Form of Community Harm

I've analyzed the ways in which Yudkowsky is playing dumb here, but it's still not entirely clear why. Presumably he cares about maintaining his credibility as an insightful and fair-minded thinker. Why tarnish that by putting on this haughty performance?
Of course, presumably he doesn't think he's tarnishing it—but why not? [He graciously explains in the Facebook comments](/images/yudkowsky-personally_prudent_and_not_community-harmful.png):
[I tend to be hesitant to use the term "bad faith"](https://www.lesswrong.com/posts/e4GBj6jxRZcsHFSvP/assume-bad-faith), because I see it thrown around more than I think people know what it means, but it fits here. "Bad faith" doesn't mean "with ill intent", and it's more specific than "dishonest": it's [adopting the surface appearance of being moved by one set of motivations, while acting from another](https://en.wikipedia.org/wiki/Bad_faith).
For example, an [insurance adjuster](https://en.wikipedia.org/wiki/Claims_adjuster) who goes through the motions of investigating your claim while privately intending to deny it might never consciously tell an explicit "lie", but is acting in bad faith: they're asking you questions, demanding evidence, _&c._ to make it look like you'll get paid if you prove the loss occurred—whereas in reality, you're just not going to be paid. Your responses to the claim inspector aren't causally inert: if you can make an extremely strong case that the loss occurred as you say, then the claim inspector might need to put effort into coming up with an ingenious excuse to deny your claim, in ways that exhibit general claim-inspection principles. But ultimately, the inspector is going to say what they need to say in order to protect the company's loss ratio, as is sometimes personally prudent.
With this understanding of bad faith, we can read Yudkowsky's "it is sometimes personally prudent [...]" comment as admitting that his behavior on politically charged topics is in bad faith—where "bad faith" isn't a meaningless dismissal, but [literally refers](http://benjaminrosshoffman.com/can-crimes-be-discussed-literally/) to the behavior of pretending to be moved by one set of motivations while actually acting from another, such that accusations of bad faith can be true or false. Yudkowsky will [take care not to consciously tell an explicit "lie"](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases), while going through the motions to make it look like he's genuinely engaging with questions where I need the right answers in order to make extremely impactful social and medical decisions—whereas in reality, he's only going to address a selected subset of the relevant evidence and arguments that won't get him in trouble with progressives.
In contrast, the "it is sometimes personally prudent [...] to post your agreement with Stalin" gambit is the exact reverse: it's _introducing_ a distortion into the discussion in the hopes of correcting people's beliefs about the speaker's ideological alignment. (Yudkowsky is not a right-wing Bad Guy, but people would tar him as one if he ever said anything negative about trans people.) This doesn't improve our collective beliefs about the topic; it's a _pure_ ass-covering move.
Yudkowsky names the alleged fact that "people do _know_ they're living in a half-Stalinist environment" as a mitigating factor. Zvi Mowshowitz has [written about how the false assertion that "everybody knows" something](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) is used to justify deception: if "everybody knows" that we can't talk about biological sex, then no one is being deceived when our allegedly truthseeking discussion carefully steers clear of any reference to the reality of biological sex even when it's extremely relevant.
But if everybody knew, then what would be the point of the censorship? It's not coherent to claim that no one is being harmed by censorship because everyone knows about it: the appeal of censorship to dictators like Stalin is precisely that _not_ everybody knows and that someone with power wants to keep it that way.
For the savvy people in the know, it would certainly be convenient if everyone secretly knew: then the savvy people wouldn't have to face the tough choice between acceding to Power's demands (at the cost of deceiving their readers) and informing their readers (at the cost of incurring Power's wrath).
But if Stalin is committed to convincing gender-dysphoric males that they need to cut their dicks off, and you're committed to not disagreeing with Stalin, you _shouldn't_ mostly believe it when gender-dysphoric males thank you for providing the final piece of evidence they needed to realize that they need to cut their dicks off, for the same reason a self-aware Republican shill shouldn't uncritically believe it when people thank him for warning them against Democrat treachery. We know—he's told us very clearly—that Yudkowsky isn't trying to provide gender-dysphoric people with the full state of information that they would need to decide on the optimal quality-of-life interventions. He's playing on a different chessboard.
### A Fire of Inner Life

"[P]eople do _know_ they're living in a half-Stalinist environment," Yudkowsky claims. "I think people are better off at the end of that," he says. But who are "people", specifically? One of the problems with utilitarianism is that it doesn't interact well with game theory. If a policy makes most people better off, at the cost of throwing a few others under the bus, is enacting that policy the right thing to do?
Depending on the details, maybe—but you probably shouldn't expect the victims to meekly go under the wheels without a fight. That's why I've [been](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/) [telling](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/) [you](/2023/Dec/if-clarity-seems-like-death-to-them/) this 100,000-word sob story about how _I_ didn't know, and _I'm_ not better off.
And if that's too much to expect of the general public—
And if it's too much to expect garden-variety "rationalists" to figure out on their own without prompting from their betters—
Then I would have at least expected Eliezer Yudkowsky to take actions _in favor of_ rather than _against_ his faithful students having these basic capabilities for reflection, self-observation, and ... speech? I would have expected Eliezer Yudkowsky to not _actively exert optimization pressure in the direction of transforming me into a Jane Austen character_.
### This Isn't About Subjective Intent

This is the part where Yudkowsky or his flunkies accuse me of being uncharitable, of [failing at perspective-taking](https://twitter.com/ESYudkowsky/status/1435617576495714304) and [embracing conspiracy theories](https://twitter.com/ESYudkowsky/status/1708587781424046242). Obviously, Yudkowsky doesn't _think of himself_ as trying to transform his faithful students into Jane Austen characters. Perhaps, then, I have failed to understand his position? [As Yudkowsky put it](https://twitter.com/ESYudkowsky/status/1435618825198731270):
> The Other's theory of themselves usually does not make them look terrible. And you will not have much luck just yelling at them about how they must really be doing `terrible_thing` instead.
Let's recap.
### Recap of Yudkowsky's History of Public Statements on Transgender Identity

In January 2009, Yudkowsky published ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), essentially a revision of [a 2004 mailing list post responding to a man who said that after the Singularity, he'd like to make a female but "otherwise identical" copy of himself](https://archive.is/En6qW). "Changing Emotions" insightfully points out [the deep technical reasons why](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard) men who sexually fantasize about being women can't achieve their dream with foreseeable technology—and not only that, but that the dream itself is conceptually confused: a man's fantasy about it being fun to be a woman isn't part of the female distribution; there's a sense in which it _can't_ be fulfilled.
It was a good post! Yudkowsky was merely using the sex change example to illustrate [a more general point about the difficulties of applied transhumanism](https://www.lesswrong.com/posts/EQkELCGiGQwvrrp3L/growing-up-is-hard), but "Changing Emotions" was hugely influential on me; I count myself much better off for having understood the argument.
End recap.
### An Adversarial Game

At this point, the nature of the game is clear. Yudkowsky wants to make sure he's on peaceful terms with the progressive _zeitgeist_, subject to the constraint of [not writing any sentences he knows to be false](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases#2__The_law_of_no_literal_falsehood_). Meanwhile, I want to make sense of what's actually going on in the world as regards sex and gender, because _I need the correct answer to decide whether or not to cut my dick off_.
On "his turn", he comes up with some pompous proclamation that's obviously optimized to make the "pro-trans" faction look smart and good and the "anti-trans" faction look dumb and bad, "in ways that exhibit generally rationalist principles."
But I don't think that everybody knows. And I'm not giving up that easily. Not on an entire subculture full of people.
### The Battle That Matters

Yudkowsky [defended his behavior in February 2021](https://twitter.com/ESYudkowsky/status/1356812143849394176):
> I think that some people model civilization as being in the middle of a great battle in which this tweet, even if true, is giving comfort to the Wrong Side, where I would not have been as willing to tweet a truth helping the Right Side. From my perspective, this battle just isn't that close to the top of my priority list. I rated nudging the cognition of the people-I-usually-respect, closer to sanity, as more important; who knows, those people might matter for AGI someday. And the Wrong Side part isn't as clear to me either.