+[^sloppy]: It was unevenly sloppy of the _Times_ to link the first post, ["Three Great Articles On Poverty, And Why I Disagree With All Of Them"](https://slatestarcodex.com/2016/05/23/three-great-articles-on-poverty-and-why-i-disagree-with-all-of-them/), but not the second, ["Against Murderism"](https://slatestarcodex.com/2017/06/21/against-murderism/)—especially since "Against Murderism" is specifically about Alexander's skepticism of _racism_ as an explanatory concept and therefore contains objectively more damning sentences to quote out of context than a passing reference to Charles Murray. Apparently, the _Times_ couldn't even be bothered to smear Scott with misconstruals of his actual ideas, if guilt by association did the trick with less effort on the part of both journalist and reader.
+
+But Alexander only "aligned himself with Murray" in ["Three Great Articles On Poverty, And Why I Disagree With All Of Them"](https://slatestarcodex.com/2016/05/23/three-great-articles-on-poverty-and-why-i-disagree-with-all-of-them/) in the context of a simplified taxonomy of views on the etiology of poverty. This doesn't imply agreement with Murray's views on heredity! (A couple of years earlier, Alexander had written that ["Society Is Fixed, Biology Is Mutable"](https://slatestarcodex.com/2014/09/10/society-is-fixed-biology-is-mutable/): pessimism about our Society's ability to intervene to alleviate poverty does not amount to the claim that poverty is "genetic.")
+
+[Alexander's reply statement](https://astralcodexten.substack.com/p/statement-on-new-york-times-article) pointed out the _Times_'s obvious chicanery, but (I claim) introduced a distortion of its own—
+
+> The Times points out that I agreed with Murray that poverty was bad, and that also at some other point in my life noted that Murray had offensive views on race, and heavily implies this means I agree with Murray's offensive views on race. This seems like a weirdly brazen type of falsehood for a major newspaper.
+
+It _is_ a weirdly brazen invalid inference. But by calling it a "falsehood", Alexander heavily implies he disagrees with Murray's offensive views on race: in invalidating the _Times_'s charge of guilt by association with Murray, Alexander validates Murray's guilt.
+
+But anyone who's read _and understood_ Alexander's work should be able to infer that Scott probably finds it plausible that there exist genetically mediated differences in socially relevant traits between ancestry groups (as a value-free matter of empirical science with no particular normative implications). For example, his [review of Judith Rich Harris](https://archive.ph/Zy3EL) indicates that he accepts the evidence from [twin studies](/2020/Apr/book-review-human-diversity/#twin-studies) for individual behavioral differences having a large genetic component, and section III of his ["The Atomic Bomb Considered As Hungarian High School Science Fair Project"](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/) indicates that he accepts genetics as an explanation for group differences in the particular case of Ashkenazi Jewish intelligence.[^murray-alignment]
+
+[^murray-alignment]: As far as aligning himself with Murray more generally, it's notable that Alexander had tapped Murray for Welfare Czar in [a hypothetical "If I were president" Tumblr post](https://archive.vn/xu7PX).
+
+There are a lot of standard caveats that go here which Scott would no doubt scrupulously address if he ever chose to tackle the subject of genetically mediated group differences in general: [the mere existence of a group difference in a "heritable" trait doesn't itself imply a genetic cause of the group difference (because the groups' environments could also be different)](/2020/Apr/book-review-human-diversity/#heritability-caveats). It is entirely conceivable that the Ashkenazi IQ advantage is real and genetic, but the black–white IQ gap is fake and environmental.[^bet] Moreover, group averages are just that—averages. They don't imply anything about individuals and don't justify discrimination against individuals.
+
+[^bet]: It's just—how much do you want to bet on that? How much do you think _Scott_ wants to bet?
+
+But anyone who's read _and understood_ Charles Murray's work knows that [Murray also includes the standard caveats](/2020/Apr/book-review-human-diversity/#individuals-should-not-be-judged-by-the-average)![^murray-caveat] (Even though the one about group differences not implying anything about individuals is [actually wrong](/2022/Jun/comment-on-a-scene-from-planecrash-crisis-of-faith/).) The _Times_'s insinuation that Scott Alexander is a racist _like Charles Murray_ seems like a "[Gettier](https://en.wikipedia.org/wiki/Gettier_problem) attack": the charge is essentially correct, even though the evidence used to prosecute the charge before a jury of distracted _New York Times_ readers is completely bogus.
+
+[^murray-caveat]: For example, the introductory summary for Ch. 13 of _The Bell Curve_, "Ethnic Differences in Cognitive Ability", states: "Even if the differences between races were entirely genetic (which they surely are not), it should make no practical difference in how individuals deal with each other."
+
+Why do I [keep](/2023/Nov/if-clarity-seems-like-death-to-them/#tragedy-of-recursive-silencing) [bringing](/2023/Nov/if-clarity-seems-like-death-to-them/#literally-a-white-supremacist) up the claim that "rationalist" leaders almost certainly believe in cognitive race differences (even if it's hard to get them to publicly admit it in a form that's easy to selectively quote in front of _New York Times_ readers)?
+
+It's because one of the things I noticed, while trying to make sense of why my entire social circle suddenly decided in 2016 that guys like me could become women by saying so, is that in the conflict between the "rationalists" and mainstream progressives, the defensive strategy of the "rationalists" is one of deception.
+
+In this particular historical moment, we end up facing pressure from progressives, because—whatever our object-level beliefs about (say) [sex, race, and class differences](/2020/Apr/book-review-human-diversity/), and however much most of us would prefer not to talk about them—on the _meta_ level, our creed requires us to admit it's an empirical question, not a moral one—and that [empirical questions have no privileged reason to admit convenient answers](https://www.lesswrong.com/posts/sYgv4eYH82JEsTD34/beyond-the-reach-of-god).
+
+I view this conflict as entirely incidental, something that [would happen in some form in any place and time](https://www.lesswrong.com/posts/cKrgy7hLdszkse2pq/archimedes-s-chronophone), rather than being specific to American politics or "the left". In a Christian theocracy, our analogues would get in trouble for beliefs about evolution; in the old Soviet Union, our analogues would get in trouble for [thinking about market economics](https://slatestarcodex.com/2014/09/24/book-review-red-plenty/) (as a positive [technical](https://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics#Proof_of_the_first_fundamental_theorem) [discipline](https://www.lesswrong.com/posts/Gk8Dvynrr9FWBztD4/what-s-a-market) adjacent to game theory, not yoked to a particular normative agenda).[^logical-induction]
+
+[^logical-induction]: I wonder how hard it would have been to come up with MIRI's [logical induction result](https://arxiv.org/abs/1609.03543) (which describes an asymptotic algorithm for estimating the probabilities of mathematical truths in terms of a betting market composed of increasingly complex traders) in the Soviet Union.
+
+Incidental or not, the conflict is real, and everyone smart knows it—even if it's not easy to _prove_ that everyone smart knows it, because everyone smart is very careful about what they say in public. (I am not smart.)
+
+So the _New York Times_ implicitly accuses us of being racists, like Charles Murray, and instead of pointing out that being a racist _like Charles Murray_ is the obviously correct position that sensible people will tend to reach in the course of being sensible, we disingenuously deny everything.[^deny-everything]
+
+[^deny-everything]: In January 2023, when Nick Bostrom [preemptively apologized for a 26-year-old email to the Extropians mailing list](https://nickbostrom.com/oldemail.pdf) that referenced the IQ gap and mentioned a slur, he had [some](https://forum.effectivealtruism.org/posts/Riqg9zDhnsxnFrdXH/nick-bostrom-should-step-down-as-director-of-fhi) [detractors](https://forum.effectivealtruism.org/posts/8zLwD862MRGZTzs8k/a-personal-response-to-nick-bostrom-s-apology-for-an-old) and a [few](https://ea.greaterwrong.com/posts/Riqg9zDhnsxnFrdXH/nick-bostrom-should-step-down-as-director-of-fhi/comment/h9gdA4snagQf7bPDv) [defenders](https://forum.effectivealtruism.org/posts/NniTsDNQQo58hnxkr/my-thoughts-on-bostrom-s-apology-for-an-old-email), but I don't recall seeing anyone defending the 1996 email itself.
+
+ But if you're [familiar with the literature](/2020/Apr/book-review-human-diversity/#the-reason-everyone-and-her-dog-is-still-mad) and understand the [use–mention distinction](https://en.wikipedia.org/wiki/Use%E2%80%93mention_distinction), the literal claims in [the original email](https://nickbostrom.com/oldemail.pdf) are entirely reasonable. (There are additional things one could say about [what prosocial functions are being served by](/2020/Apr/book-review-human-diversity/#schelling-point-for-preventing-group-conflicts) the taboos against what the younger Bostrom called "the provocativeness of unabashed objectivity", which would make for fine mailing-list replies, but the original email can't be abhorrent simply for failing to anticipate all possible counterarguments.)
+
+ I didn't speak up at the time of the old-email scandal, either. I had other things to do with my attention and Overton budget.
+
+It works surprisingly well. I fear my love of Truth is not so great that if I didn't have Something to Protect, I would have happily participated in the cover-up.
+
+As it happens, in our world, the defensive cover-up consists of _throwing me under the bus_. Facing censure from the progressive egregore for being insufficiently progressive, we can't defend ourselves ideologically. (We think we're egalitarians, but progressives won't buy that because we like markets too much.) We can't point to our racial diversity. (Mostly white if not Jewish, with a handful of East and South Asians, exactly as you'd expect from chapters 13 and 14 of _The Bell Curve_.) [Subjectively](https://en.wikipedia.org/wiki/Availability_heuristic), I felt like the sex balance got a little better after we hybridized with Tumblr and Effective Altruism (as [contrasted with the old days](/2017/Dec/a-common-misunderstanding-or-the-spirit-of-the-staircase-24-january-2009/)), but survey data doesn't unambiguously back this up.[^survey-data]
+
+[^survey-data]: We go from 89.2% male in the [2011 _Less Wrong_ survey](https://www.lesswrong.com/posts/HAEPbGaMygJq8L59k/2011-survey-results) to a virtually unchanged 88.7% male on the [2020 _Slate Star Codex_ survey](https://slatestarcodex.com/2020/01/20/ssc-survey-results-2020/)—although the [2020 EA survey](https://forum.effectivealtruism.org/posts/ThdR8FzcfA8wckTJi/ea-survey-2020-demographics) says only 71% male, so it depends on how you draw the category boundaries of "we."
+
+But _trans!_ We have plenty of those! In [the same blog post in which Scott Alexander characterized rationalism as the belief that Eliezer Yudkowsky is the rightful caliph](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/), he also named "don't misgender trans people" as one of the group's distinguishing norms. Two years later, he joked that ["We are solving the gender ratio issue one transition at a time"](https://slatestarscratchpad.tumblr.com/post/142995164286/i-was-at-a-slate-star-codex-meetup).
+
+The benefit of having plenty of trans people is that high-ranking members of the [progressive stack](https://en.wikipedia.org/wiki/Progressive_stack) can be trotted out as a shield to prove that we're not counterrevolutionary right-wing Bad Guys. Thus, [Jacob Falkovich noted](https://twitter.com/yashkaf/status/1275524303430262790) (on 23 June 2020, just after _Slate Star Codex_ went down), "The two demographics most over-represented in the SlateStarCodex readership according to the surveys are transgender people and Ph.D. holders", and Scott Aaronson [noted (in commentary on the February 2021 _Times_ article) that](https://www.scottaaronson.com/blog/?p=5310) "the rationalist community's legendary openness to alternative gender identities and sexualities" should have "complicated the picture" of our portrayal as anti-feminist.
+
+Even the haters grudgingly give Alexander credit for ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/): ["I strongly disagree that one good article about accepting transness means you get to walk away from writing that is somewhat white supremacist and quite fascist without at least acknowledging you were wrong"](https://archive.is/SlJo1), wrote one.
+
+<a id="dump-stats"></a>Under these circumstances, dethroning the supremacy of gender identity ideology is politically impossible. All our [Overton margin](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) is already being spent somewhere else; sanity on this topic is our [dump stat](https://tvtropes.org/pmwiki/pmwiki.php/Main/DumpStat).
+
+But this being the case, _I have no reason to participate in the cover-up_. What's in it for me? Why should I defend my native subculture from external attack, if the defense preparations themselves have already rendered it uninhabitable to me?
+
+On 17 February 2021, Topher Brennan [claimed that](https://web.archive.org/web/20210217195335/https://twitter.com/tophertbrennan/status/1362108632070905857) Scott Alexander "isn't being honest about his history with the far-right", and published [an email he had received from Scott in February 2014](https://emilkirkegaard.dk/en/2021/02/backstabber-brennan-knifes-scott-alexander-with-2014-email/) on what Scott thought some neoreactionaries were getting importantly right.
+
+I think that to people who have read _and understood_ Alexander's work, there is nothing surprising or scandalous about the contents of the email. He said that biologically mediated group differences are probably real and that neoreactionaries were the only people discussing the object-level hypotheses or the meta-level question of why our Society's intelligentsia is obfuscating the matter. He said that reactionaries as a whole generate a lot of garbage but that he trusted himself to sift through the noise and extract the novel insights. The email contains some details that Alexander hadn't already blogged about—most notably the section headed "My behavior is the most appropriate response to these facts", explaining his social strategizing _vis-à-vis_ the neoreactionaries and his own popularity. But again, none of it is surprising if you know Scott from his writing.
+
+I think the main reason someone _would_ consider the email a scandalous revelation is if they hadn't read _Slate Star Codex_ that deeply—if their picture of Scott Alexander as a political writer was "that guy who's so committed to charitable discourse that he [wrote up an explanation of what _reactionaries_ (of all people) believe](https://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/)—and then [turned around and wrote up the definitive explanation of why they're totally wrong and you shouldn't pay them any attention](https://slatestarcodex.com/2013/10/20/the-anti-reactionary-faq/)." As a first approximation, it's not a terrible picture. But what it misses—what _Scott_ knows—is that charity isn't about putting on a show of superficially respecting your ideological opponent before concluding (of course) that they're wrong. Charity is about seeing what the other guy is getting _right_.
+
+The same day, Yudkowsky published [a Facebook post](https://www.facebook.com/yudkowsky/posts/pfbid02ZoAPjap94KgiDg4CNi1GhhhZeQs3TeTc312SMvoCrNep4smg41S3G874saF2ZRSQl) that said[^brennan-condemnation-edits]:
+
+> I feel like it should have been obvious to anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside and has no ethics and will hurt you if you trust them, but in case it wasn't obvious consider the point made explicitly. (Subtext: Topher Brennan. Do not provide any link in comments to Topher's publication of private emails, explicitly marked as private, from Scott Alexander.)
+
+[^brennan-condemnation-edits]: The post was subsequently edited a number of times in ways that I don't think are relevant to my discussion here.
+
+I was annoyed at how the discussion seemed to be ignoring the obvious political angle, and the next day, 18 February 2021, I wrote [a widely Liked comment](/images/davis-why_they_say_they_hate_us.png): I agreed that there was a grain of truth to the claim that our detractors hate us because they're evil bullies, but stopping the analysis there seemed incredibly shallow and transparently self-serving.
+
+If you listened to why _they_ said they hated us, it was because we were racist, sexist, transphobic fascists. The party-line response seemed to be trending toward, "That's obviously false—Scott voted for Warren, look at all the social democrats on the _Less Wrong_/_Slate Star Codex_ surveys, _&c._ They're just using that as a convenient smear because they like bullying nerds."
+
+But if "sexism" included "It's an empirical question whether innate statistical psychological sex differences of some magnitude exist, it empirically looks like they do, and this has implications about our social world" (as articulated in, for example, Alexander's ["Contra Grant on Exaggerated Differences"](https://slatestarcodex.com/2017/08/07/contra-grant-on-exaggerated-differences/)), then the "_Slate Star Codex_ _et al._ are crypto-sexists" charge was absolutely correct. (Crypto-racist, crypto-fascist, _&c._ left as an exercise for the reader.)
+
+You could plead, "That's a bad definition of sexism," but that's only convincing if you've been trained in using empiricism and open discussion to discover policies with utilitarian-desirable outcomes. People whose education came from California public schools plus Tumblr didn't already know that. ([I didn't know that](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism) at age 18 back in 'aught-six, and we didn't even have Tumblr then.) In that light, you could see why someone might find "blow the whistle on people who are claiming to be innocent but are actually guilty (of thinking bad thoughts)" to be a more compelling ethical consideration than "respect confidentiality requests".
+
+Indeed, it seems important to note (though I didn't at the time of my comment) that Brennan didn't break any promises. In [Brennan's account](https://web.archive.org/web/20210217195335/https://twitter.com/tophertbrennan/status/1362108632070905857), Alexander "did not first say 'can I tell you something in confidence?' or anything like that." Scott unilaterally said in the email, "I will appreciate if you NEVER TELL ANYONE I SAID THIS, not even in confidence. And by 'appreciate', I mean that if you ever do, I'll probably either leave the Internet forever or seek some sort of horrible revenge", but we have no evidence that Topher agreed.
+
+To see why the lack of a promise is significant, imagine if someone were guilty of a serious crime (like murder or [stealing billions of dollars of their customers' money](https://www.vox.com/future-perfect/23462333/sam-bankman-fried-ftx-cryptocurrency-effective-altruism-crypto-bahamas-philanthropy)) and unilaterally confessed to an acquaintance but added, "Never tell anyone I said this, or I'll seek some sort of horrible revenge." In that case, I think more people's moral intuitions would side with the whistleblower and against "privacy".
+
+Here, I don't think Scott has anything to be ashamed of—but that's because I don't think learning from right-wingers is a crime. If our actual problem was "Genuinely consistent rationalism is realistically always going to be an enemy of the state, because [the map that fully reflects the territory is going to include facts that powerful coalitions would prefer to censor, no matter what specific ideology happens to be on top in a particular place and time](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting)", but we thought our problem was "We need to figure out how to exclude evil bullies", then we were in trouble!
+
+Yudkowsky [replied that](/images/yudkowsky-we_need_to_exclude_evil_bullies.png) everyone had a problem of figuring out how to exclude evil bullies. We also had an inevitable [Kolmogorov complicity](https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) problem, but that shouldn't be confused with the evil bullies issue, even if bullies attack via Kolmogorov issues.
+
+I'll agree that the problems shouldn't be confused. Psychology is complicated, and people have more than one reason for doing things: I can easily believe that Brennan was largely driven by bully-like motives even if he told himself a story about being a valiant whistleblower defending Cade Metz's honor against Scott's deception.
+
+But I think it's important to notice both problems, instead of pretending that the only problem was Brennan's disregard for Alexander's privacy. It's one thing to believe that people should keep promises that they, themselves, explicitly made. But instructing commenters not to link to the email seems to imply not just that Brennan should keep _his_ promises, but that everyone else is obligated to participate in a conspiracy to conceal information that Alexander would prefer concealed. I can see an ethical case for it, analogous to returning stolen property after it's already been sold and expecting buyers not to buy items that they know have been stolen. (If Brennan had obeyed Alexander's confidentiality demand, we wouldn't have an email to link to, so if we wish Brennan had obeyed, we can just act as if we don't have an email to link to.)
+
+But there's also a non-evil-bully case for wanting to reveal information, rather than participate in a cover-up to protect the image of the "rationalists" as non-threatening to the progressive egregore. If the orchestrators of the cover-up can't even acknowledge to themselves that they're orchestrating a cover-up, they're liable to be confusing themselves about other things, too.
+
+As it happened, I had another social media interaction with Yudkowsky that same day, 18 February 2021. Concerning the psychology of people who hate on "rationalists" for alleged sins that don't particularly resemble anything we do or believe, [he wrote](https://twitter.com/ESYudkowsky/status/1362514650089156608):
+
+> Hypothesis: People to whom self-awareness and introspection come naturally, put way too much moral exculpatory weight on "But what if they don't know they're lying?" They don't know a lot of their internals! And don't want to know! That's just how they roll.
+
+In reply, Michael Vassar tagged me. "Michael, I thought you weren't talking to me [(after my failures of 18–19 December)](/2023/Dec/if-clarity-seems-like-death-to-them/#a-dramatic-episode-that-would-fit-here-chronologically)?" [I said](https://twitter.com/zackmdavis/status/1362549606538641413). "But yeah, I wrote a couple blog posts about this thing", linking to ["Maybe Lying Doesn't Exist"](https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist) and ["Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle"](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie).
+
+After a few moments, I decided it was better if I [explained the significance of Michael tagging me](https://twitter.com/zackmdavis/status/1362555980232282113):
+
+> Oh, maybe it's relevant to note that those posts were specifically part of my 21-month rage–grief campaign of being furious at Eliezer all day every day for lying-by-implicature about the philosophy of language? But, I don't want to seem petty by pointing that out! I'm over it!
+
+And I think I _would_ have been over it ...
+
+—except that Yudkowsky reopened the conversation four days later, on 22 February 2021, with [a new Facebook post](https://www.facebook.com/yudkowsky/posts/10159421750419228) explaining the origins of his intuitions about pronoun conventions. It concludes that "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he", with a default for those-who-haven't-asked that goes by gamete size' and to say that this just _is_ the normative definition. Because it is _logically rude_, not just socially rude, to try to bake any other more complicated and controversial definition _into the very language protocol we are using to communicate_."
+
+(Why!? Why reopen the conversation, from the perspective of his chessboard? Wouldn't it be easier to just stop digging? Did my highly-Liked Facebook comment and Twitter barb about him lying by implicature temporarily bring my concerns to the top of his attention, despite the fact that I'm generally not that important?)
+
+I eventually explained what was wrong with Yudkowsky's new arguments at the length of 12,000 words in March 2022's ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/),[^challenges-title] but that post focused on the object-level arguments; I have more to say here (that I decided to cut from "Challenges") about the meta-level political context. The February 2021 post on pronouns is a fascinating document, in its own way—a penetrating case study on the effects of politics on a formerly great mind.
+
+[^challenges-title]: The title is an allusion to Yudkowsky's ["Challenges to Christiano's Capability Amplification Proposal"](https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal).