From 1e8cfc73ebf38341e97f19c61e98b8c9da760545 Mon Sep 17 00:00:00 2001 From: "Zack M. Davis" Date: Fri, 12 Jan 2024 20:26:33 -0800 Subject: [PATCH] memoir: pt. 4 additional edit pass Said Achmiz said he'd have detailed comments for me, but I wanted to make another pass first. --- ...xhibit-generally-rationalist-principles.md | 102 +++++++----------- 1 file changed, 38 insertions(+), 64 deletions(-) diff --git a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md index ed15e5e..3d5c063 100644 --- a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md +++ b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md @@ -9,7 +9,7 @@ Status: draft > > —_Atlas Shrugged_ by Ayn Rand -Quickly recapping my Whole Dumb Story so far: [ever since puberty, I've had this obsessive sexual fantasy about being magically transformed into a woman, which got contextualized by these life-changing Sequences of blog posts by Eliezer Yudkowsky that taught me (amongst many, many other things) how fundamentally disconnected from reality my fantasy was.](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/) [So it came as a huge surprise when, around 2016, the "rationalist" community that had formed around the Sequences seemingly unanimously decided that guys like me might actually be women in some unspecified metaphysical sense.](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/) [A couple years later, having strenuously argued against the popular misconception that the matter could be resolved by simply redefining the word _woman_ (on the grounds that you can define the word any way you like), I flipped out when Yudkowsky prevaricated about how his own philosophy of language says that you can't define a word any way you like, prompting me to join with allies to persuade him to clarify.](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/) [When that failed, my attempts to cope with the "rationalists" being fake led to a series of small misadventures culminating in Yudkowsky eventually clarifying the philosophy-of-language issue after I ran out of patience and yelled at him over email.](/2023/Dec/if-clarity-seems-like-death-to-them/) +Quickly recapping my Whole Dumb Story so far: [ever since puberty, I've had this obsessive sexual fantasy about being magically transformed into a woman, which got contextualized by these life-changing Sequences of blog posts by Eliezer Yudkowsky that taught me (amongst many other things) how fundamentally disconnected from reality my fantasy was.](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/) [So it came as a huge surprise when, around 2016, the "rationalist" community that had formed around the Sequences seemingly unanimously decided that guys like me might actually be women in some unspecified metaphysical sense.](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/) [A couple years later, having strenuously argued against the popular misconception that the matter could be resolved by simply redefining the word _woman_ (on the grounds that you can define the word any way you like), I flipped out when Yudkowsky prevaricated about how his own philosophy of language says that you can't define a word any way you like, prompting me to join with allies to persuade him 
to clarify.](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/) [When that failed, my attempts to cope with the "rationalists" being fake led to a series of small misadventures culminating in Yudkowsky eventually clarifying the philosophy-of-language issue after I ran out of patience and yelled at him over email.](/2023/Dec/if-clarity-seems-like-death-to-them/) Really, that should have been the end of the story—with a relatively happy ending, too: that it's possible to correct straightforward philosophical errors, at the cost of almost two years of desperate effort by someone with [Something to Protect](https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect). @@ -19,7 +19,7 @@ That wasn't the end of the story, which does not have such a relatively happy en ### The _New York Times_'s Other Shoe Drops (February 2021) -On 13 February 2021, ["Silicon Valley's Safe Space"](https://archive.ph/zW6oX), the _New York Times_ piece on _Slate Star Codex_, came out. It was ... pretty lame? (_Just_ lame, not a masterfully vicious hit piece.) Cade Metz did a mediocre job of explaining what our robot cult is about, while [pushing hard on the subtext](https://scottaaronson.blog/?p=5310) to make us look racist and sexist, occasionally resorting to odd constructions that were surprising to read from someone who had been a professional writer for decades. ("It was nominally a blog", Metz wrote of _Slate Star Codex_. ["Nominally"](https://en.wiktionary.org/wiki/nominally)?) The article's claim that Alexander "wrote in a wordy, often roundabout way that left many wondering what he really believed" seemed more like a critique of the many's reading comprehension than of Alexander's writing. +On 13 February 2021, ["Silicon Valley's Safe Space"](https://archive.ph/zW6oX), the [anticipated](/2023/Dec/if-clarity-seems-like-death-to-them/#the-new-york-times-pounces-june-2020) _New York Times_ piece on _Slate Star Codex_, came out. It was ... pretty lame? (_Just_ lame, not a masterfully vicious hit piece.) Cade Metz did a mediocre job of explaining what our robot cult is about, while [pushing hard on the subtext](https://scottaaronson.blog/?p=5310) to make us look racist and sexist, occasionally resorting to odd constructions that were surprising to read from someone who had been a professional writer for decades. ("It was nominally a blog", Metz wrote of _Slate Star Codex_. ["Nominally"](https://en.wiktionary.org/wiki/nominally)?) The article's claim that Alexander "wrote in a wordy, often roundabout way that left many wondering what he really believed" seemed more like a critique of the many's reading comprehension than of Alexander's writing. That poor reading comprehension may have served a protective function for Scott, though. A mob that attacks over things that look bad when quoted out of context can't attack you over the meaning of "wordy, often roundabout" text that they can't read. The _Times_ article included this sleazy guilt-by-association attempt: @@ -35,11 +35,11 @@ But Alexander only "aligned himself with Murray" in ["Three Great Articles On Po It _is_ a weirdly brazen invalid inference. But by calling it a "falsehood", Alexander heavily implies he disagrees with Murray's offensive views on race: in invalidating the _Times_'s charge of guilt by association with Murray, Alexander validates Murray's guilt.
-But anyone who's read _and understood_ Alexander's work should be able to infer that Scott probably finds it plausible that there exist genetically mediated differences in socially relevant traits between ancestry groups (as a value-free matter of empirical science with no particular normative implications). For example, his [review of Judith Rich Harris](https://archive.ph/Zy3EL) indicates that he accepts the evidence from [twin studies](/2020/Apr/book-review-human-diversity/#twin-studies) for individual behavioral differences having a large genetic component, and section III. of his ["The Atomic Bomb Considered As Hungarian High School Science Fair Project"](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/) indicates that he accepts genetics as an explanation for group differences in the particular case of Ashkenazi Jewish intelligence.[^murray-alignment] +But anyone who's read _and understood_ Alexander's work should be able to infer that Scott probably finds it plausible that there exist genetically mediated differences in socially relevant traits between ancestry groups (as a value-free matter of empirical science with no particular normative implications). For example, his [review of Judith Rich Harris on his old LiveJournal](https://archive.ph/Zy3EL) indicates that he accepts the evidence from [twin studies](/2020/Apr/book-review-human-diversity/#twin-studies) for individual behavioral differences having a large genetic component, and section III of his ["The Atomic Bomb Considered As Hungarian High School Science Fair Project"](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/) indicates that he accepts genetics as an explanation for group differences in the particular case of Ashkenazi Jewish intelligence.[^murray-alignment] [^murray-alignment]: As far as aligning himself with Murray more generally, it's notable that Alexander had tapped Murray for Welfare Czar in [a hypothetical "If I were president" Tumblr post](https://archive.vn/xu7PX). -There are a lot of standard caveats that go here which Scott would no doubt scrupulously address if he ever chose to tackle the subject of genetically-mediated group differences in general: [the mere existence of a group difference in a "heritable" trait doesn't itself imply a genetic cause of the group difference (because the groups' environments could also be different)](/2020/Apr/book-review-human-diversity/#heritability-caveats). It is entirely conceivable that the Ashkenazi IQ advantage is real and genetic, but black–white IQ gap is fake and environmental.[^bet] Moreover, group averages are just that—averages. They don't imply anything about individuals and don't justify discrimination against individuals. +There are a lot of standard caveats that go here which Alexander would no doubt scrupulously address if he ever chose to tackle the subject of genetically mediated group differences in general: [the mere existence of a group difference in a "heritable" trait doesn't imply a genetic cause of the group difference (because the groups' environments could also be different)](/2020/Apr/book-review-human-diversity/#heritability-caveats). It is entirely conceivable that the Ashkenazi IQ advantage is real and genetic, but the black–white IQ gap is fake and environmental.[^bet] Moreover, group averages are just that—averages. They don't imply anything about individuals and don't justify discrimination against individuals.
[^bet]: It's just—how much do you want to bet on that? How much do you think _Scott_ wants to bet? @@ -63,7 +63,7 @@ Incidental or not, the conflict is real, and everyone smart knows it—even if i So the _New York Times_ implicitly accuses us of being racists, like Charles Murray, and instead of pointing out that being a racist _like Charles Murray_ is the obviously correct position that sensible people will tend to reach in the course of being sensible, we disingenuously deny everything.[^deny-everything] -[^deny-everything]: In January 2023, when Nick Bostrom [preemptively apologized for a 26-year-old email to the Extropians mailing list](https://nickbostrom.com/oldemail.pdf) that referenced the IQ gap and mentioned a slur, he had [some](https://forum.effectivealtruism.org/posts/Riqg9zDhnsxnFrdXH/nick-bostrom-should-step-down-as-director-of-fhi) [detractors](https://forum.effectivealtruism.org/posts/8zLwD862MRGZTzs8k/a-personal-response-to-nick-bostrom-s-apology-for-an-old) and a [few](https://ea.greaterwrong.com/posts/Riqg9zDhnsxnFrdXH/nick-bostrom-should-step-down-as-director-of-fhi/comment/h9gdA4snagQf7bPDv) [defenders](https://forum.effectivealtruism.org/posts/NniTsDNQQo58hnxkr/my-thoughts-on-bostrom-s-apology-for-an-old-email), but I don't recall seeing anyone defending the 1996 email itself. +[^deny-everything]: In January 2023, when Nick Bostrom [preemptively apologized for a 26-year-old email to the Extropians mailing list](https://nickbostrom.com/oldemail.pdf) that referenced the IQ gap and mentioned a slur, he had [some](https://forum.effectivealtruism.org/posts/Riqg9zDhnsxnFrdXH/nick-bostrom-should-step-down-as-director-of-fhi) [detractors](https://forum.effectivealtruism.org/posts/8zLwD862MRGZTzs8k/a-personal-response-to-nick-bostrom-s-apology-for-an-old) and a [few](https://ea.greaterwrong.com/posts/Riqg9zDhnsxnFrdXH/nick-bostrom-should-step-down-as-director-of-fhi/comment/h9gdA4snagQf7bPDv) [defenders](https://forum.effectivealtruism.org/posts/NniTsDNQQo58hnxkr/my-thoughts-on-bostrom-s-apology-for-an-old-email), but I don't recall seeing much defense of the 1996 email itself. But if you're [familiar with the literature](/2020/Apr/book-review-human-diversity/#the-reason-everyone-and-her-dog-is-still-mad) and understand the [use–mention distinction](https://en.wikipedia.org/wiki/Use%E2%80%93mention_distinction), the literal claims in [the original email](https://nickbostrom.com/oldemail.pdf) are entirely reasonable. (There are additional things one could say about [what prosocial functions are being served by](/2020/Apr/book-review-human-diversity/#schelling-point-for-preventing-group-conflicts) the taboos against what the younger Bostrom called "the provocativeness of unabashed objectivity", which would make for fine mailing-list replies, but the original email can't be abhorrent simply for failing to anticipate all possible counterarguments.) @@ -77,7 +77,7 @@ As it happens, in our world, the defensive cover-up consists of _throwing me und But _trans!_ We have plenty of those! In [the same blog post in which Scott Alexander characterized rationalism as the belief that Eliezer Yudkowsky is the rightful caliph](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/), he also named "don't misgender trans people" as one of the group's distinguishing norms. Two years later, he joked that ["We are solving the gender ratio issue one transition at a time"](https://slatestarscratchpad.tumblr.com/post/142995164286/i-was-at-a-slate-star-codex-meetup). 
-The benefit of having plenty of trans people is that high-ranking members of the [progressive stack](https://en.wikipedia.org/wiki/Progressive_stack) can be trotted out as a shield to prove that we're not counterrevolutionary right-wing Bad Guys. Thus, [Jacob Falkovich noted](https://twitter.com/yashkaf/status/1275524303430262790) (on 23 June 2020, just after _Slate Star Codex_ went down), "The two demographics most over-represented in the SlateStarCodex readership according to the surveys are transgender people and Ph.D. holders", and Scott Aaronson [noted (in commentary on the February 2021 _Times_ article) that](https://www.scottaaronson.com/blog/?p=5310) "the rationalist community's legendary openness to alternative gender identities and sexualities" should have "complicated the picture" of our portrayal as anti-feminist. +The benefit of having plenty of trans people is that high-ranking members of the [progressive stack](https://en.wikipedia.org/wiki/Progressive_stack) can be trotted out as a shield to prove that we're not counterrevolutionary right-wing Bad Guys. Thus, [Jacob Falkovich noted](https://twitter.com/yashkaf/status/1275524303430262790) (on 23 June 2020, just after _Slate Star Codex_ went down), "The two demographics most over-represented in the SlateStarCodex readership according to the surveys are transgender people and Ph.D. holders", and Scott Aaronson [noted (in commentary on the February 2021 _New York Times_ article) that](https://www.scottaaronson.com/blog/?p=5310) "the rationalist community's legendary openness to alternative gender identities and sexualities" should have "complicated the picture" of our portrayal as anti-feminist. Even the haters grudgingly give Alexander credit for ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/): "I strongly disagree that one good article about accepting transness means you get to walk away from writing that is somewhat white supremacist and quite fascist without at least acknowledging you were wrong", [wrote one](https://archive.is/SlJo1). @@ -101,7 +101,7 @@ The same day, Yudkowsky published [a Facebook post](https://www.facebook.com/yud I was annoyed at how the discussion seemed to be ignoring the obvious political angle, and the next day, 18 February 2021, I wrote [a widely Liked comment](/images/davis-why_they_say_they_hate_us.png): I agreed that there was a grain of truth to the claim that our detractors hate us because they're evil bullies, but stopping the analysis there seemed incredibly shallow and transparently self-serving. -If you listened to why _they_ said they hated us, it was because we were racist, sexist, transphobic fascists. The party-line response seemed to be trending toward, "That's obviously false—Scott voted for Warren, look at all the social democrats on the _Less Wrong_/_Slate Star Codex_ surveys, _&c._ They're just using that as a convenient smear because they like bullying nerds." +If you listened to why _they_ said they hated us, it was because we were racist, sexist, transphobic fascists. The party-line response seemed to be trending toward, "That's obviously false—Scott voted for Elizabeth Warren, look at all the social democrats on the _Less Wrong_/_Slate Star Codex_ surveys, _&c._ They're just using that as a convenient smear because they like bullying nerds." 
But if "sexism" included "It's an empirical question whether innate statistical psychological sex differences of some magnitude exist, it empirically looks like they do, and this has implications about our social world" (as articulated in, for example, Alexander's ["Contra Grant on Exaggerated Differences"](https://slatestarcodex.com/2017/08/07/contra-grant-on-exaggerated-differences/)), then the "_Slate Star Codex_ _et al._ are crypto-sexists" charge was absolutely correct. (Crypto-racist, crypto-fascist, _&c._ left as an exercise for the reader.) @@ -115,9 +115,9 @@ Here, I don't think Scott has anything to be ashamed of—but that's because I d Yudkowsky [replied that](/images/yudkowsky-we_need_to_exclude_evil_bullies.png) everyone had a problem of figuring out how to exclude evil bullies. We also had an inevitable [Kolmogorov complicity](https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) problem, but that shouldn't be confused with the evil bullies issue, even if bullies attack via Kolmogorov issues. -I'll agree that the problems shouldn't be confused. Psychology is complicated, and people have more than one reason for doing things: I can easily believe that Brennan was largely driven by bully-like motives even if he told himself a story about being a valiant whistleblower defending Cade Metz's honor against Scott's deception. +I'll agree that the problems shouldn't be confused. I can easily believe that Brennan was largely driven by bully-like motives even if he told himself a story about being a valiant whistleblower defending Cade Metz's honor against Scott's deception. -But I think it's important to notice both problems, instead of pretending that the only problem was Brennan's disregard for Alexander's privacy. It's one thing to believe that people should keep promises that they, themselves, explicitly made. But instructing commenters not to link to the email seems to suggest not just that Brennan should keep _his_ promises, but that everyone else should to participate in a conspiracy to conceal information that Alexander would prefer concealed. I can see an ethical case for it, analogous to returning stolen property after it's already been sold and expecting buyers not to buy items that they know have been stolen. (If Brennan had obeyed Alexander's confidentiality demand, we wouldn't have an email to link to, so if we wish Brennan had obeyed, we can just act as if we don't have an email to link to.) +But I think it's important to notice both problems, instead of pretending that the only problem was Brennan's disregard for Alexander's privacy. It's one thing to believe that people should keep promises that they, themselves, explicitly made. But instructing commenters not to link to the email seems to suggest not just that Brennan should keep _his_ promises, but that everyone else should participate in a conspiracy to conceal information that Alexander would prefer concealed. I can see an ethical case for it, analogous to returning stolen property after it's already been sold and expecting buyers not to buy items that they know have been stolen. (If Brennan had obeyed Alexander's confidentiality demand, we wouldn't have an email to link to, so if we wish Brennan had obeyed, we can just act as if we don't have an email to link to.) But there's also a non-evil-bully case for wanting to reveal information, rather than participate in a cover-up to protect the image of the "rationalists" as non-threatening to the progressive egregore. 
If the orchestrators of the cover-up can't even acknowledge to themselves that they're orchestrating a cover-up, they're liable to be confusing themselves about other things, too. @@ -125,7 +125,7 @@ As it happened, I had another social media interaction with Yudkowsky that same > Hypothesis: People to whom self-awareness and introspection come naturally, put way too much moral exculpatory weight on "But what if they don't know they're lying?" They don't know a lot of their internals! And don't want to know! That's just how they roll. -In reply, Michael Vassar tagged me. "Michael, I thought you weren't talking to me [(after my failures of 18–19 December)](/2023/Dec/if-clarity-seems-like-death-to-them/#a-dramatic-episode-that-would-fit-here-chronologically)?" [I said](https://twitter.com/zackmdavis/status/1362549606538641413). "But yeah, I wrote a couple blog posts about this thing", linking to ["Maybe Lying Doesn't Exist"](https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist) and ["Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle"](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie) +In reply, Michael Vassar tagged me. "Michael, I thought you weren't talking to me [(after my failures of 18–19 December)](/2023/Dec/if-clarity-seems-like-death-to-them/#a-private-catastrophe-december-2020)?" [I said](https://twitter.com/zackmdavis/status/1362549606538641413). "But yeah, I wrote a couple blog posts about this thing", linking to ["Maybe Lying Doesn't Exist"](https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist) and ["Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle"](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie). After a few moments, I decided it was better if I [explained the significance of Michael tagging me](https://twitter.com/zackmdavis/status/1362555980232282113): @@ -137,7 +137,7 @@ And I think I _would_ have been over it ... (Why!? Why reopen the conversation, from the perspective of his chessboard? Wouldn't it be easier to just stop digging? Did my highly Liked Facebook comment and Twitter barb about him lying by implicature temporarily bring my concerns to the top of his attention, despite the fact that I'm generally not that important?) -### Reasons Someone Does Not Like to Be Tossed Into a Male Bucket or Female Bucket +### Yudkowsky Doubles Down (February 2021) I eventually explained what was wrong with Yudkowsky's new arguments at the length of 12,000 words in March 2022's ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/),[^challenges-title] but that post focused on the object-level arguments; I have more to say here (that I decided to cut from "Challenges") about the meta-level political context. The February 2021 post on pronouns is a fascinating document, in its own way—a penetrating case study on the effects of politics on a formerly great mind. @@ -177,7 +177,7 @@ When piously appealing to the feelings of people describing reasons they do not I agree that a language convention in which pronouns map to hair color doesn't seem great. The people in this world should probably coordinate on switching to a better convention, if they can figure out how. -But taking this convention as given, a demand to be referred to as having a hair color _that one does not have_ seems outrageous to me!
+But taking the convention as given, a demand to be referred to as having a hair color _that one does not have_ seems outrageous to me! It makes sense to object to the convention forcing a binary choice in the "halfway between two central points" case. That's an example of genuine nuance brought on by a genuine complication to a system that _falsely_ assumes discrete hair colors. @@ -207,9 +207,9 @@ I knew better than to behave like that. My failure didn't mean I had been wrong Someone who uncritically validated my dislike of the Student Bucket, rather than assessing my reasons, would be hurting me, not helping me—because in order to navigate the real world, I need a map that reflects the territory, not a map that reflects my narcissistic fantasies. I'm a better person for straightforwardly facing the shame of getting a _C_ in community college differential equations, rather than denying it or claiming that it didn't mean anything. Part of updating myself incrementally was that I would get _other_ chances to prove that my autodidacticism could match the standard set by schools, even if it hadn't that time. (My professional and open-source programming career obviously does not owe itself to the two Java courses I took at community college. When I audited honors analysis at UC Berkeley "for fun" in 2017, I did fine on the midterm. When I interviewed for a new dayjob in 2018, the interviewer, noting my lack of a degree, said he was going to give a version of the interview without a computer science theory question. I insisted on the "college" version of the interview, solved a dynamic programming problem, and got the job. And so on.) -If you can see why uncritically affirming people's current self-image isn't the solution to "student dysphoria", it should be obvious why the same applies to gender dysphoria. There's a general underlying principle: it matters whether that self-image is true. +If you can see why uncritically affirming people's current self-image isn't the solution to "student dysphoria", it should be clear why the same applies to gender dysphoria. There's a general underlying principle: it matters whether that self-image is true. -In an article titled ["Actually, I Was Just Crazy the Whole Time"](https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole), FtMtF detransitioner Michelle Alleva contrasts her current beliefs with those when she decided to transition. While transitioning, she accounted for many pieces of evidence about herself ("dislikes attention as a female", "obsessive thinking about gender", "doesn't fit in with the girls", _&c_.) in terms of the theory "It's because I'm trans." But now, Alleva writes, she thinks she has a variety of better explanations that, all together, cover the original list: "It's because I'm autistic," "It's because I have unresolved trauma," "It's because women are often treated poorly" ... including "That wasn't entirely true" (!!). +In an article titled ["Actually, I Was Just Crazy the Whole Time"](https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole), FtMtF detransitioner Michelle Alleva contrasts her current beliefs with those she held when she decided to transition. While transitioning, she accounted for many pieces of evidence about herself ("dislikes attention as a female", "obsessive thinking about gender", "doesn't fit in with the girls", _&c_.) in terms of the theory "It's because I'm trans."
But now, Alleva writes, she thinks she has a variety of better explanations that, all together, cover the original list: "It's because I'm autistic," "It's because I have unresolved trauma," "It's because women are often treated poorly" ... including "That wasn't entirely true" (!). This is a rationality skill. Alleva had a theory about herself, which she revised upon further consideration of the evidence. Beliefs about oneself aren't special and can—must—be updated using the _same_ methods that you would use to reason about anything else—[just as a recursively self-improving AI would reason the same about transistors "inside" the AI and transistors in "the environment."](https://www.lesswrong.com/posts/TynBiYt6zg42StRbb/my-kind-of-reflection)[^the-form-of-the-inference] @@ -231,9 +231,9 @@ It would seem that in the current year, that culture is dead—or if it has any At this point, some readers might protest that I'm being too uncharitable in harping on the "not liking to be tossed into a [...] Bucket" paragraph. The same post also explicitly says that "[i]t's not that no truth-bearing propositions about these issues can possibly exist." I agree that there are some interpretations of "not lik[ing] to be tossed into a Male Bucket or Female Bucket" that make sense, even though biological sex denialism does not make sense. Given that the author is Eliezer Yudkowsky, should I not give him the benefit of the doubt and assume that he meant to communicate the reading that does make sense, rather than the reading that doesn't make sense? -I reply: _given that the author is Eliezer Yudkowsky_—no, obviously not. I have been ["trained in a theory of social deception that says that people can arrange reasons, excuses, for anything"](https://www.glowfic.com/replies/1820866#reply-1820866), such that it's informative ["to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited."](http://www.hpmor.com/chapter/47) Yudkowsky is just too talented a writer for me to excuse his words as accidentally unclear writing. Where the text is ambiguous about whether biological sex is a real thing that people should be able to talk about despite someone's "not lik[ing] to be tossed into a Male Bucket or Female Bucket", I think it's deliberately ambiguous. +I reply: _given that the author is Eliezer Yudkowsky_—no, obviously not. I have been ["trained in a theory of social deception that says that people can arrange reasons, excuses, for anything"](https://www.glowfic.com/replies/1820866#reply-1820866), such that it's informative ["to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited."](http://www.hpmor.com/chapter/47) Yudkowsky is just too talented a writer for me to excuse his words as accidentally unclear writing. Where the text is ambiguous about whether biological sex is a real thing that people should be able to talk about despite someone's "not lik[ing] to be tossed into a Male Bucket or Female Bucket", I think it's ambiguous for a reason. -When smart people act dumb, it's often wise to conjecture that their behavior represents [_optimized_ stupidity](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie)—apparent "stupidity" that achieves a goal through some channel other than their words straightforwardly reflecting reality. 
Someone who was actually stupid wouldn't be able to generate text so carefully fine-tuned to reach a gender-politically convenient conclusion without explicitly invoking any controversial gender-political reasoning. I think the point is to pander to the biological sex denialists in his robot cult without technically saying anything unambiguously false that someone could call out as a "lie." +When smart people act dumb, it's often wise to conjecture that their behavior represents [_optimized_ stupidity](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie)—apparent "stupidity" that achieves a goal through some channel other than their words straightforwardly reflecting reality. Someone who was actually stupid wouldn't be able to generate text so carefully fine-tuned to reach a gender-politically convenient conclusion without explicitly invoking any controversial gender-political reasoning. I think the point is to pander to biological sex denialists without technically saying anything unambiguously false that someone could call out as a "lie." On a close reading of the comment section, we see hints that Yudkowsky does not obviously disagree with this interpretation of his behavior? First, we get [a disclaimer comment](/images/yudkowsky-the_disclaimer.png): > @@ -245,17 +245,17 @@ On a close reading of the comment section, we see hints that Yudkowsky does not > But the existence of a wide social filter like that should be kept in mind; to whatever quantitative extent you don't trust your ability plus my ability to think of valid counterarguments that might exist, as a Bayesian you should proportionally update in the direction of the unknown arguments you speculate might have been filtered out. -The explanation of [the problem of political censorship filtering evidence](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) here is great, but the part where Yudkowsky claims "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter" is laughable. My point (articulated at length in ["Challenges"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/)) is obvious (that _she_ and _he_ have existing meanings that you can't just ignore, given that the existing meanings are what motivate people to ask for new pronouns in the first place). +The explanation of [the problem of political censorship filtering evidence](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) here is great, but the part where Yudkowsky claims "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter" is laughable. The point I articulated at length in ["Challenges"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/) (that _she_ and _he_ have existing meanings that you can't just ignore, given that the existing meanings are what motivate people to ask for new pronouns in the first place) is obvious. -Really, it would be less embarrassing for Yudkowsky if he were lying about having tried to think of counterarguments. The original post isn't that bad if you assume that Yudkowsky was writing off the cuff, that he just didn't put any effort into thinking about why someone might disagree.
I don't have a problem with selective argumentation that's clearly labeled as such: there's no shame in being an honest specialist who says, "I've mostly thought about these issues though the lens of ideology _X_, and therefore can't claim to be comprehensive or even-handed; if you want other perspectives, you'll have to read other authors and think it through for yourself." +It would arguably be less embarrassing for Yudkowsky if he were lying about having tried to think of counterarguments. The original post isn't that bad if you assume that Yudkowsky was writing off the cuff, that he just didn't put any effort into thinking about why someone might disagree. I don't have a problem with selective argumentation that's clearly labeled as such: there's no shame in being an honest specialist who says, "I've mostly thought about these issues through the lens of ideology _X_, and therefore can't claim to be comprehensive or even-handed; if you want other perspectives, you'll have to read other authors and think it through for yourself." -But if he _did_ put in the effort to aspire to even-handedness—enough that he felt comfortable bragging about his ability to see the other side of the argument—and still ended up proclaiming his "simplest and best protocol" without mentioning any of its obvious costs, that's discrediting. If Yudkowsky's ability to explore the space of arguments is that bad, why would you trust his opinion about anything? +But if he _did_ put in the effort to aspire to [the virtue of evenness](https://www.readthesequences.com/The-Twelve-Virtues-Of-Rationality)—enough that he felt comfortable bragging about his ability to see the other side of the argument—and still ended up proclaiming his "simplest and best protocol" without mentioning any of its obvious costs, that's discrediting. If Yudkowsky's ability to explore the space of arguments is that bad, why would you trust his opinion about anything? -Furthermore, the claim that only I "would have said anything where you could hear it" is also discrediting of the community. Transitioning or not is a _major life decision_ for many of the people in this community. People in this community _need the goddamned right answers_ to the questions I've been asking in order to make that kind of life decision sanely [(whatever the sane decisions turn out to be)](/2021/Sep/i-dont-do-policy/). If the community is so bad at exploring the space of arguments that I'm the only one who can talk about the obvious decision-relevant considerations that code as "anti-trans" when you project into the one-dimensional subspace corresponding to our Society's usual Culture War, why would you pay attention to the community _at all_? Insofar as the community is successfully marketing itself to promising young minds as the uniquely best place in the world for reasoning and sensemaking, then "the community" is _fraudulent_ (misleading people about what it has to offer in a way that's optimized to move resources to itself). It needs to either rebrand—or failing that, disband—or failing that, _be destroyed_. +Furthermore, the claim that only I "would have said anything where you could hear it" is also discrediting of the community. Transitioning or not is a _major life decision_ for many of the people in this community. People in this community _need the goddamned right answers_ to the questions I've been asking in order to make that kind of life decision sanely [(whatever the sane decisions turn out to be)](/2021/Sep/i-dont-do-policy/).
If the community is so bad at exploring the space of arguments that I'm the only one who can talk about the obvious decision-relevant considerations that code as "anti-trans" when you project into the one-dimensional subspace corresponding to our Society's usual culture war, why would you pay attention to the community _at all_? Insofar as the community is successfully marketing itself to promising young minds as the uniquely best place in the world for reasoning and sensemaking, then "the community" is _fraudulent_ (misleading people about what it has to offer in a way that's optimized to move resources to itself). It needs to either rebrand—or failing that, disband—or failing that, _be destroyed_. The "where you could hear it" clause is particularly bizarre—as if Yudkowsky assumes that people in "the community" _don't read widely_. It's gratifying to be acknowledged by my caliph—or it would be, if he were still my caliph—but I don't think the points I've been making, about the relevance of autogynephilia to male-to-female transsexualism, and the reality of biological sex (!), are particularly novel. -I think I _am_ unusual in the amount of analytical rigor I can bring to bear on these topics. Similar points are often made by authors such as [Kathleen Stock](https://en.wikipedia.org/wiki/Kathleen_Stock) or [Corinna Cohn](https://corinnacohn.substack.com/) or [Aaron Terrell](https://aaronterrell.substack.com/)—or, for that matter, [Steve Sailer](https://www.unz.com/isteve/dont-mention-the-autogynephilia/)—but those authors don't have the background to formulate it [in the language of probabilistic graphical models](/2022/Jul/the-two-type-taxonomy-is-a-useful-approximation-for-a-more-detailed-causal-model/) the way I do. _That_ part is a genuine value-add of the "rationalist" memeplex—something I wouldn't have been able to do without [the influence of Yudkowsky's Sequences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/), and all the math books I studied afterwards because the vibe of the _Overcoming Bias_ comment section in 2008 made that seem like an important and high-status thing to do. +I think I _am_ unusual in the amount of analytical rigor I can bring to bear on these topics. Similar points are often made by authors such as [Kathleen Stock](https://en.wikipedia.org/wiki/Kathleen_Stock) or [Corinna Cohn](https://corinnacohn.substack.com/) or [Aaron Terrell](https://aaronterrell.substack.com/)—or for that matter [Steve Sailer](https://www.unz.com/isteve/dont-mention-the-autogynephilia/)—but those authors don't have the background to formulate them [in the language of probabilistic graphical models](/2022/Jul/the-two-type-taxonomy-is-a-useful-approximation-for-a-more-detailed-causal-model/) the way I do. _That_ part is a genuine value-add of the "rationalist" memeplex—something I wouldn't have been able to do without [the influence of Yudkowsky's Sequences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/), and all the math books I studied afterwards because the vibe of the _Overcoming Bias_ comment section in 2008 made that seem like an important and high-status thing to do. But the promise of the Sequences was in offering a discipline of thought that could be applied to everything you would have read and thought about anyway.
This notion that if someone in the community didn't say something, then Yudkowsky's faithful students wouldn't be able to hear it, would have been rightfully seen as absurd: _Overcoming Bias_ was a gem of the blogosphere, not a substitute for the rest of it. (Nor was the blogosphere a substitute for the University library, which escaped the autodidact's [resentment of the tyranny of schools](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/) by [selling borrowing privileges to the public for $100 a year](https://www.lib.berkeley.edu/about/access-library-collections-by-external-users).) To the extent that the Yudkowsky of the current year takes for granted that his faithful students _don't read Steve Sailer_, he should notice that he's running a cult or a fandom rather than an intellectual community. @@ -281,11 +281,11 @@ But if I'm right that (a′) and (b′) should be live hypotheses and that Yudko > > Trying to pack all of that into the pronouns you'd have to use in step 1 is the wrong place to pack it. -Sure, if we were designing a constructed language from scratch under current social conditions, in which a person's "gender" is understood as a contested social construct rather than their sex being an objective and undisputed fact, then yeah: in that situation _which we are not in_, you definitely wouldn't want to pack sex or gender into pronouns. But it's a disingenuous derailing tactic to grandstand about how people need to alter the semantics of their existing native language so that we can discuss the real issues under an allegedly superior pronoun convention when by your own admission, you have _no intention whatsoever of discussing the real issues!_ +Sure, if we were designing a constructed language from scratch with the understanding that a person's "gender" is a contested social construct rather than their sex being an objective and undisputed fact, then yes: in that situation _which we are not in_, you definitely wouldn't want to pack sex or gender into pronouns. But it's a disingenuous derailing tactic to grandstand about how people need to alter the semantics of their existing native language so that we can discuss the real issues under an allegedly superior pronoun convention when, by your own admission, you have _no intention whatsoever of discussing the real issues!_ (Lest the "by your own admission" clause seem too accusatory, I should note that given constant behavior, admitting it is much better than not admitting it, so huge thanks to Yudkowsky for the transparency on this point!) -Again, [as discussed in "Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/#t-v-distinction), there's an instructive comparison to languages that have formality-based second person pronouns, like [_tú_ and _usted_ in Spanish](https://en.wikipedia.org/wiki/Spanish_personal_pronouns#T%C3%BA/vos_and_usted). It's one thing to advocate for collapsing the distinction and just settling on one second-person singular pronoun for the Spanish language. That's principled. +[As discussed in "Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/#t-v-distinction), there's an instructive comparison to languages that have formality-based second-person pronouns, like [_tú_ and _usted_ in Spanish](https://en.wikipedia.org/wiki/Spanish_personal_pronouns#T%C3%BA/vos_and_usted).
It's one thing to advocate for collapsing the distinction and just settling on one second-person singular pronoun for the Spanish language. That's principled. It's another thing altogether to try to prevent a speaker from using _tú_ to indicate disrespect towards a social superior (on the stated rationale that the _tú_/_usted_ distinction is dumb and shouldn't exist) while also refusing to entertain the speaker's arguments for why their interlocutor is unworthy of the deference that would be implied by _usted_ (because such arguments are "unspeakable" for political reasons). @@ -325,7 +325,7 @@ The campaign manager is in possession of a survey of mayoral candidates on which The post then briefly discusses the idea of a "logical" argument, one whose conclusions follow from its premises. "All rectangles are quadrilaterals; all squares are quadrilaterals; therefore, all squares are rectangles" is given as an example of an illogical argument, even though both premises are true (all rectangles and squares are in fact quadrilaterals) and the conclusion is true (all squares are in fact rectangles). The problem is that the conclusion doesn't follow from the premises; the reason all squares are rectangles isn't _because_ they're both quadrilaterals. If we accepted arguments of the general form "all A are C; all B are C; therefore all A are B", we would end up believing nonsense (for instance, "all cats are mammals; all dogs are mammals; therefore all cats are dogs"). -Yudkowsky's conception of a "rational" argument—at least, Yudkowsky's conception in 2007, which the Yudkowsky of the current year seems to disagree with—has a similar flavor: the stated reasons should be the actual reasons. The post concludes: +Yudkowsky's conception of a "rational" argument—at least, Yudkowsky's conception in 2007, which the Yudkowsky of the current year seems to disagree with—has a similar flavor: the stated reasons should be the real reasons. The post concludes: > If you really want to present an honest, rational argument _for your candidate_, in a political campaign, there is only one way to do it: > @@ -350,47 +350,27 @@ I just—would have hoped that abandoning the intellectual legacy of his Sequenc Michael Vassar [said](https://twitter.com/HiFromMichaelV/status/1221771020534788098), "Rationalism starts with the belief that arguments aren't soldiers, and ends with the belief that soldiers are arguments." By accepting that soldiers are arguments ("I don't see what the alternative is besides getting shot"), Yudkowsky is accepting the end of rationalism in this sense. If the price you put on the intellectual integrity of your so-called "rationalist" community is similar to that of the Snodgrass for Mayor campaign, you shouldn't be surprised if intelligent, discerning people accord similar levels of credibility to the two groups' output. -[I tend to be hesitant to use the term "bad faith"](https://www.lesswrong.com/posts/e4GBj6jxRZcsHFSvP/assume-bad-faith), because I see it thrown around more than I think people know what it means, but it fits here. "Bad faith" doesn't mean "with ill intent", and it's more specific than "dishonest": it's [adopting the surface appearance of being moved by one set of motivations, while acting from another](https://en.wikipedia.org/wiki/Bad_faith).
- -For example, an [insurance adjuster](https://en.wikipedia.org/wiki/Claims_adjuster) who goes through the motions of investigating your claim while privately intending to deny it might never consciously tell an explicit "lie", but is acting in bad faith: they're asking you questions, demanding evidence, _&c._ to make it look like you'll get paid if you prove the loss occurred—whereas in reality, you're just not going to be paid. Your responses to the claim inspector aren't casually inert: if you can make an extremely strong case that the loss occurred as you say, then the claim inspector might need to put effort into coming up with an ingenious excuse to deny your claim, in ways that exhibit general claim-inspection principles. But ultimately, the inspector is going to say what they need to say in order to protect the company's loss ratio, as is sometimes personally prudent. - -With this understanding of bad faith, we can read Yudkowsky's "it is sometimes personally prudent [...]" comment as admitting that his behavior on politically charged topics is in bad faith—where "bad faith" isn't a meaningless dismissal, but [literally refers](http://benjaminrosshoffman.com/can-crimes-be-discussed-literally/) to the behavior of pretending to different motivations than one does, such that accusations of bad faith can be true or false. Yudkowsky will [take care not to consciously tell an explicit "lie"](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases), while going through the motions to make it look like he's genuinely engaging with questions where I need the right answers in order to make extremely impactful social and medical decisions—whereas in reality, he's only going to address a selected subset of the relevant evidence and arguments that won't get him in trouble with progressives. - -To his credit, he will admit that he's only willing to address a selected subset of arguments—but while doing so, he claims an absurd "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking" while blatantly mischaracterizing his opponents' beliefs! ("Gendered Pronouns for Everyone and Asking To Leave the System Is Lying" doesn't pass anyone's [ideological Turing test](https://www.econlib.org/archives/2011/06/the_ideological.html).) - -Counterarguments aren't completely causally inert: if you can make an extremely strong case that Biological Sex Is Sometimes More Relevant Than Subjective Gender Identity (Such That Some People Perceive an Interest in Using Language Accordingly), Yudkowsky will put some effort into coming up with some ingenious excuse to dodge your claim, in ways that exhibit generally rationalist principles. Ultimately, Yudkowsky is going to say what he needs to say in order to protect his reputation with progressives, as is sometimes personally prudent. - -Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that he's making the wrong decision. Again, "bad faith" is meant as a literal description that makes predictions about behavior—maybe there are circumstances in which engaging some amount of bad faith is the right thing to do, given the constraints one faces! 
For example, when talking to people on Twitter with a very different ideological background from mine, I sometimes anticipate that if my interlocutor knew what I was thinking, they wouldn't want to talk to me, so I word my replies so that I [seem more ideologically aligned with them than I actually am](https://geekfeminism.fandom.com/wiki/Concern_troll). (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) I think of this as the minimal amount of strategic bad faith needed to keep the conversation going—to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy. I'm willing to defend this behavior. There _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that in the service of correcting the distortion where I don't think my interlocutor _should_ be paying attention to my alignment. -That is, my bad faith concern-trolling gambit of misrepresenting my ideological alignment to improve the discussion seems beneficial to the accuracy of our collective beliefs about the topic. (And the topic is presumably of greater collective interest than which "side" I personally happen to be on.) -In contrast, the "it is sometimes personally prudent [...] to post your agreement with Stalin" gambit is the exact reverse: it's _introducing_ a distortion into the discussion in the hopes of correcting people's beliefs about the speaker's ideological alignment. (Yudkowsky is not a right-wing Bad Guy, but people would tar him as one if he ever said anything negative about trans people.) This doesn't improve our collective beliefs about the topic; it's a _pure_ ass-covering move. -Yudkowsky names the alleged fact that "people do _know_ they're living in a half-Stalinist environment" as a mitigating factor. Zvi Mowshowitz has [written about how the false assertion that "everybody knows" something](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) is used to justify deception: if "everybody knows" that we can't talk about biological sex, then no one is being deceived when our allegedly truthseeking discussion carefully steers clear of any reference to the reality of biological sex even when it's extremely relevant. +Yudkowsky names the alleged fact that "people do _know_ they're living in a half-Stalinist environment" as a mitigating factor. But [as Zvi Mowshowitz points out, the false assertion that "everybody knows" something](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) is typically used to justify deception: if "everybody knows" that we can't talk about biological sex, then no one is being deceived when our allegedly truthseeking discussion carefully steers clear of any reference to the reality of biological sex even when it's extremely relevant. But if everybody knew, then what would be the point of the censorship? It's not coherent to claim that no one is being harmed by censorship because everyone knows about it: the appeal of censorship to dictators like Stalin is precisely that _not_ everybody knows and that someone with power wants to keep it that way.
For the savvy people in the know, it would certainly be convenient if everyone secretly knew: then the savvy people wouldn't have to face the tough choice between acceding to Power's demands (at the cost of deceiving their readers) and informing their readers (at the cost of incurring Power's wrath). -[Policy debates should not appear one-sided.](https://www.lesswrong.com/posts/PeSzc9JTBxhaYRp9b/policy-debates-should-not-appear-one-sided) Faced with this dilemma, I can't say that defying Power is necessarily the right choice: if there really were no options besides deceiving your readers with a bad-faith performance and incurring Power's wrath, and Power's wrath would be too terrible to bear, then maybe the bad-faith performance is the right thing to do. +[Policy debates should not appear one-sided.](https://www.lesswrong.com/posts/PeSzc9JTBxhaYRp9b/policy-debates-should-not-appear-one-sided) Faced with this dilemma, I can't say that defying Power is necessarily the right choice: if there really were no options besides deceiving your readers and incurring Power's wrath, and Power's wrath would be too terrible to bear, then maybe deceiving your readers is the right thing to do. But if you cared about not deceiving your readers, you would want to be sure that those _really were_ the only two options. You'd [spend five minutes by the clock looking for third alternatives](https://www.lesswrong.com/posts/erGipespbbzdG5zYb/the-third-alternative)—including, possibly, not issuing proclamations on your honor as leader of the so-called "rationalist" community on topics where you _explicitly intend to ignore politically unfavorable counterarguments_. Yudkowsky rejects this alternative on the grounds that it allegedly implies "utter silence about everything Stalin has expressed an opinion on including '2 + 2 = 4' because if that logically counterfactually were wrong you would not be able to express an opposing opinion". I think he's playing dumb here. In other contexts, he's written about ["attack[s] performed by selectively reporting true information"](https://twitter.com/ESYudkowsky/status/1634338145016909824) and ["[s]tatements which are technically true but which deceive the listener into forming further beliefs which are false"](https://hpmor.com/chapter/97). He's undoubtedly familiar with the motte-and-bailey doctrine as [described by Nicholas Shackel](https://philpapers.org/archive/SHATVO-2.pdf) and [popularized by Scott Alexander](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/). I think that if he wanted to, Eliezer Yudkowsky could think of some relevant differences between "2 + 2 = 4" and "the simplest and best protocol is, '_He_ refers to the set of people who have asked us to use _he_'". -If you think it's "sometimes personally prudent and not community-harmful" to go out of your way to say positive things about Republican candidates and never, ever say positive things about Democratic candidates (because you live in a red state and "don't see what the alternative is besides getting shot"), you can see why people might regard you as a Republican shill, even if all the things you said were true. If you tried to defend yourself against the charge of being a Republican shill by pointing out that you've never told any specific individual, "You should vote Republican," that's a nice motte, but you shouldn't expect devoted rationalists to fall for it.
+If you think it's "sometimes personally prudent and not community-harmful" to go out of your way to say positive things about Republican candidates and never, ever say positive things about Democratic candidates (because you live in a red state and "don't see what the alternative is besides getting shot"), you can see why people might regard you as a Republican shill, even if each sentence you said was true. If you tried to defend yourself against the charge of being a Republican shill by pointing out that you've never told any specific individual, "You should vote Republican," that's a nice motte, but you shouldn't expect devoted rationalists to fall for it. Similarly, when Yudkowsky [wrote in June 2021](https://twitter.com/ESYudkowsky/status/1404697716689489921), "I have never in my own life tried to persuade anyone to go trans (or not go trans)—I don't imagine myself to understand others that much", it was a great motte. I don't doubt the literal motte stated literally. -And yet it seems worth noticing that shortly after proclaiming in March 2016 that he was "over 50% probability at this point that at least 20% of the ones with penises are actually women", he made [a followup post celebrating having caused someone's transition](https://www.facebook.com/yudkowsky/posts/10154110278349228): +And yet it seems worth noting that shortly after proclaiming in March 2016 that he was "over 50% probability at this point that at least 20% of the ones with penises are actually women", he made [a followup post celebrating having caused someone's transition](https://www.facebook.com/yudkowsky/posts/10154110278349228): > Just checked my filtered messages on Facebook and saw, "Your post last night was kind of the final thing I needed to realize that I'm a girl." > ==DOES ALL OF THE HAPPY DANCE FOREVER== -In the comments, he added: - -> Atheists: 1000+ Anorgasmia: 2 Trans: 1 - He [later clarified on Twitter](https://twitter.com/ESYudkowsky/status/1404821285276774403), "It is not trans-specific. When people tell me I helped them, I mostly believe them and am happy." But if Stalin is committed to convincing gender-dysphoric males that they need to cut their dicks off, and you're committed to not disagreeing with Stalin, you _shouldn't_ mostly believe it when gender-dysphoric males thank you for providing the final piece of evidence they needed to realize that they need to cut their dicks off, for the same reason a self-aware Republican shill shouldn't uncritically believe it when people thank him for warning them against Democrat treachery. We know—he's told us very clearly—that Yudkowsky isn't trying to be a neutral purveyor of decision-relevant information on this topic. He's playing on a different chessboard. @@ -419,7 +399,7 @@ It is genuinely sad that the author of those Tweets didn't get perceived in the _It was a compliment!_ That receptionist was almost certainly thinking of someone like [David Bowie](https://en.wikipedia.org/wiki/David_Bowie) or [Eddie Izzard](https://en.wikipedia.org/wiki/Eddie_Izzard), rather than being hateful. The author should have graciously accepted the compliment and _done something to pass better next time_. The horror of trans culture is that it's impossible to imagine any of these people doing that—noticing that they're behaving like a TERF's [hostile](/2019/Dec/the-strategy-of-stigmatization/) [stereotype](/2022/Feb/link-never-smile-at-an-autogynephile/) of a narcissistic, gaslighting trans-identified man and snapping out of it. 
-I want a shared cultural understanding that the way to ameliorate the sadness of people who aren't being perceived how they prefer is through things like better and cheaper facial feminization surgery, not [emotionally blackmailing](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) people out of their ability to report what they see. I don't _want_ to relinquish [my ability to notice what women's faces look like](/papers/bruce_et_al-sex_discrimination_how_do_we_tell.pdf), even if that means noticing that mine isn't one. I can endure being sad about that if the alternative is forcing everyone to doublethink around their perceptions of me.
+In a sane world, people would understand that the way to ameliorate the sadness of people who aren't being perceived the way they prefer is through things like better and cheaper facial feminization surgery, not [emotionally blackmailing](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) people out of their ability to report what they see. I don't _want_ to relinquish [my ability to notice what women's faces look like](/papers/bruce_et_al-sex_discrimination_how_do_we_tell.pdf), even if that means noticing that mine isn't one. I can endure being sad about that if the alternative is forcing everyone to doublethink around their perceptions of me.

In a world where surgery is expensive, but some people desperately want to change sex and other people want to be nice to them, there are incentives to relocate our shared concept of "gender" onto things like [ornamental clothing](http://web.archive.org/web/20210513192331/http://thetranswidow.com/2021/02/18/womens-clothing-is-always-drag-even-on-women/) that are easier to change than secondary sex characteristics.

@@ -453,9 +433,9 @@ It was a good post! Yudkowsky was merely using the sex change example to illustr

But seven years later, in a March 2016 Facebook post, Yudkowsky [proclaimed that](https://www.facebook.com/yudkowsky/posts/10154078468809228) "for people roughly similar to the Bay Area / European mix, I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women."

-This seemed like a huge and surprising reversal from the position articulated in "Changing Emotions". The two posts weren't _necessarily_ inconsistent, if you assumed gender identity is a real property synonymous with "brain sex", and that the harsh (almost mocking) skepticism of the idea of true male-to-female sex change in "Changing Emotions" was directed at the erotic sex-change fantasies of cis men (with a male gender-identity/brain-sex), whereas the 2016 Facebook post was about trans women (with a female gender-identity/brain-sex), which are a different thing.
+This seemed like a huge and surprising reversal from the position articulated in "Changing Emotions". The two posts weren't _necessarily_ inconsistent, if you assumed gender identity is a real property synonymous with "brain sex", and that the harsh (almost mocking) skepticism of the idea of true male-to-female sex change in "Changing Emotions" was directed at the sex-change fantasies of cis men (with a male gender-identity/brain-sex), whereas the 2016 Facebook post was about trans women (with a female gender-identity/brain-sex), which are a different thing.

-But this potential unification seemed dubious to me, especially if trans women were purported to be "at least 20% of the ones with penises" (!!) in some population.
After it's been pointed out, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female but 'otherwise identical' copy of himself" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_. So in October 2016, [I wrote to Yudkowsky noting the apparent reversal and asking to talk about it](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price). Because of the privacy rules I'm adhering to in telling this Whole Dumb Story, I can't confirm or deny whether any such conversation occurred. +But this potential unification seemed dubious to me, especially if trans women were purported to be "at least 20% of the ones with penises" (!) in some population. After it's been pointed out, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female but 'otherwise identical' copy of himself" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_. So in October 2016, [I wrote to Yudkowsky noting the apparent reversal and asking to talk about it](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price). Because of the privacy rules I'm adhering to in telling this Whole Dumb Story, I can't confirm or deny whether any such conversation occurred. Then, in November 2018, while criticizing people who refuse to use trans people's preferred pronouns, Yudkowsky proclaimed that "Using language in a way _you_ dislike, openly and explicitly and with public focus on the language and its meaning, is not lying" and that "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning". But _that_ seemed like a huge and surprising reversal from the position articulated in ["37 Ways Words Can Be Wrong"](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong). After attempts to clarify via email failed, I eventually wrote ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) to explain the relevant error in general terms, and Yudkowsky eventually [clarified his position in September 2020](https://www.facebook.com/yudkowsky/posts/10158853851009228). @@ -471,7 +451,7 @@ On "his turn", he comes up with some pompous proclamation that's obviously optim On "my turn", I put in an absurd amount of effort explaining in exhaustive, _exhaustive_ detail why Yudkowsky's pompous proclamation, while [not technically making any unambiguously false atomic statements](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly), was substantively misleading compared to what any serious person would say if they were trying to make sense of the world without worrying what progressive activists would think of them. -At the start, I never expected to end up arguing about the minutiae of pronoun conventions, which no one would care about if contingencies of the English language hadn't made them a Schelling point for things people do care about. The conversation only ended up here after a series of derailings. At the start, I was trying to say something substantive about the psychology of straight men who wish they were women. 
+At the start, I never expected to end up arguing about the minutiæ of pronoun conventions, which no one would care about if contingencies of the English language hadn't made them a Schelling point for things people do care about. The conversation only ended up here after a series of derailings. Originally, I was trying to say something substantive about the psychology of straight men who wish they were women.

In the context of AI alignment theory, Yudkowsky has written about a "nearest unblocked strategy" phenomenon: if you prevent an agent from accomplishing a goal via some plan that you find undesirable, the agent will search for ways to route around that restriction, and probably find some plan that you find similarly undesirable for similar reasons.

@@ -481,9 +461,7 @@ It's the same thing with Yudkowsky's political risk minimization subject to the

Accusing one's interlocutor of bad faith is frowned upon for a reason. We would prefer to live in a world where we have intellectually fruitful object-level discussions under the assumption of good faith, rather than risk our fora degenerating into accusations and name-calling, which is unpleasant and (more importantly) doesn't make any intellectual progress.

-Accordingly, I tried the object-level good-faith argument thing first. I tried it for _years_. But at some point, I should be allowed to notice the nearest-unblocked-strategy game which is obviously happening. I think there's some number of years and some number of thousands of words[^wordcounts] of litigating the object level (about gender) and the meta level (about the philosophy of categorization) after which there's nothing left to do but jump up to the meta-meta level of politics and explain, to anyone capable of hearing it, why I think I've accumulated enough evidence for the assumption of good faith to have been empirically falsified.[^symmetrically-not-assuming-good-faith]
-
-[^wordcounts]: ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (2018), ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) (2019), and ["Unnatural Categories Are Optimized for Deception"](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception) (2021) total to over 20,000 words.
+Accordingly, I tried the object-level good-faith argument thing first. I tried it for _years_. But at some point, I should be allowed to notice the nearest-unblocked-strategy game which is obviously happening. I think there's some number of years and some number of thousands of words of litigating the object level (about gender) and the meta level (about the philosophy of categorization) after which there's nothing left to do but jump up to the meta-meta level of politics and explain, to anyone capable of hearing it, why I think I've accumulated enough evidence for the assumption of good faith to have been empirically falsified.[^symmetrically-not-assuming-good-faith]

[^symmetrically-not-assuming-good-faith]: Obviously, if we're abandoning the norm of assuming good faith, it needs to be abandoned symmetrically. I _think_ I'm adhering to standards of intellectual conduct and being transparent about my motivations, but I'm not perfect, and, unlike Yudkowsky, I'm not so absurdly mendaciously arrogant as to claim "confidence in my own ability to independently invent everything important" (!) about my topics of interest.
If Yudkowsky or anyone else thinks they have a case that _I'm_ being culpably intellectually dishonest, they of course have my blessing and encouragement to post it for the audience to evaluate.

@@ -527,7 +505,7 @@ Scott Alexander chose Feelings, but I can't hold that against him, because Scott

[^hexaco]: The authors of the [HEXACO personality model](https://en.wikipedia.org/wiki/HEXACO_model_of_personality_structure) may have gotten something importantly right in [grouping "honesty" and "humility" as a single factor](https://en.wikipedia.org/wiki/Honesty-humility_factor_of_the_HEXACO_model_of_personality).

-Eliezer Yudkowsky did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he knows to be unambiguously false. And the reason I can hold it against _him_ is because Eliezer Yudkowsky does not identify as just some guy with a blog. Eliezer Yudkowsky is _absolutely_ trying to be a religious leader. He markets himself as a master of the hidden Bayesian structure of cognition, who ["aspires to make sure [his] departures from perfection aren't noticeable to others"](https://twitter.com/ESYudkowsky/status/1384671335146692608), who [complains that "too many people think it's unvirtuous to shut up and listen to [him]"](https://twitter.com/ESYudkowsky/status/1509944888376188929).
+Eliezer Yudkowsky did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he knows to be unambiguously false. And the reason I can hold it against _him_ is because Eliezer Yudkowsky does not identify as just some guy with a blog. Eliezer Yudkowsky is _absolutely_ trying to be a religious leader. He markets himself as a rationality master so superior to mere Earthlings that he might as well be from dath ilan, who ["aspires to make sure [his] departures from perfection aren't noticeable to others"](https://twitter.com/ESYudkowsky/status/1384671335146692608). He [complains that "too many people think it's unvirtuous to shut up and listen to [him]"](https://twitter.com/ESYudkowsky/status/1509944888376188929).

In making such boasts, I think Yudkowsky is opting in to being held to higher standards than other mortals. If Scott Alexander gets something wrong when I was trusting him to be right, that's disappointing, but I'm not the victim of false advertising, because Scott Alexander doesn't claim to be anything more than some guy with a blog. If I trusted him more than that, that's on me.

@@ -537,15 +515,11 @@ Because, I did, actually, trust him. Back in 2009 when _Less Wrong_ was new, we

Part of what made him so trustworthy back then was that he wasn't asking for trust. He clearly _did_ think it was [unvirtuous to just shut up and listen to him](https://www.lesswrong.com/posts/t6Fe2PsEwb3HhcBEr/the-litany-against-gurus): "I'm not sure that human beings realistically _can_ trust and think at the same time," [he wrote](https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science). He was always arrogant, but it was tempered by the expectation of being held to account by arguments rather than being deferred to as a social superior.
"I try in general to avoid sending my brain signals which tell it that I am high-status, just in case that causes my brain to decide it is no longer necessary," [he wrote](https://www.lesswrong.com/posts/cgrvvp9QzjiFuYwLi/high-status-and-stupidity-why). -He visibly [cared about other people being in touch with reality](https://www.lesswrong.com/posts/anCubLdggTWjnEvBS/your-rationality-is-my-business). "I've informed a number of male college students that they have large, clearly detectable body odors. In every single case so far, they say nobody has ever told them that before," [he wrote](https://www.greaterwrong.com/posts/kLR5H4pbaBjzZxLv6/polyhacking/comment/rYKwptdgLgD2dBnHY). (I can testify that this is true: while sharing a car ride with Anna Salamon in 2011, he told me I had B.O.) - -Telling people about their body odor represents an above-and-beyond devotion to truth-telling: it's an area where people would benefit from feedback (if you know, you can invest in deodorant) but aren't getting that feedback by default (because no one wants to be so rude as to tell people they smell bad). - -Really, a lot of the epistemic heroism here is just in [noticing](https://www.lesswrong.com/posts/SA79JMXKWke32A3hG/original-seeing) the conflict between Feelings and Truth, between Politeness and Truth, rather than necessarily acting on it. If telling a person they smell bad would predictably meet harsh social punishment, I couldn't blame someone for consciously choosing silence and safety over telling the truth. +He visibly [cared about other people being in touch with reality](https://www.lesswrong.com/posts/anCubLdggTWjnEvBS/your-rationality-is-my-business). "I've informed a number of male college students that they have large, clearly detectable body odors. In every single case so far, they say nobody has ever told them that before," [he wrote](https://www.greaterwrong.com/posts/kLR5H4pbaBjzZxLv6/polyhacking/comment/rYKwptdgLgD2dBnHY). (I can testify that this is true: while sharing a car ride with Anna Salamon in 2011, he told me I had B.O.)[^bo-heroism] -What I can and do blame someone for is actively fighting for Feelings while misrepresenting himself as the rightful caliph of epistemic rationality. There are a lot of trans people who would benefit from feedback that they don't pass but aren't getting that feedback by default. I wouldn't necessarily expect Yudkowsky to provide it. (I don't, either.) +[^bo-heroism]: A lot of the epistemic heroism here is just in [noticing](https://www.lesswrong.com/posts/SA79JMXKWke32A3hG/original-seeing) the conflict between Feelings and Truth, between Politeness and Truth, rather than necessarily acting on it. If telling a person they smell bad would predictably meet harsh social punishment, I couldn't blame someone for consciously choosing silence and safety over telling the truth. -I _would_ expect the person who wrote the Sequences not to publicly proclaim that the important thing is the feelings of people describing reasons someone does not like to be tossed into a Smells Bad bucket which don't bear on the factual question of whether someone smells bad. + What I can and do blame someone for is actively fighting for Feelings while misrepresenting himself as the rightful caliph of epistemic rationality. There are a lot of trans people who would benefit from feedback that they don't pass but aren't getting that feedback by default. I wouldn't necessarily expect Yudkowsky to provide it. (I don't, either.) 
I _would_ expect the person who wrote the Sequences not to publicly proclaim that the important thing is the feelings of people describing reasons someone does not like to be tossed into a Smells Bad bucket which don't bear on the factual question of whether someone smells bad. That person is dead now, even if his body is still breathing. @@ -571,7 +545,7 @@ The modern Yudkowsky [writes](https://twitter.com/ESYudkowsky/status/10967695793 I notice that this advice fails to highlight the possibility that the "seems to believe" is a deliberate show (judged to be personally prudent and not community-harmful), rather than a misperception on your part. I am left shaking my head in a [weighted average of](https://www.lesswrong.com/posts/y4bkJTtG3s5d6v36k/stupidity-and-dishonesty-explain-each-other-away) sadness about the mortal frailty of my former hero, and disgust at his duplicity. **If Eliezer Yudkowsky can't _unambiguously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_.** -A few clarifications are in order here. First, as with "bad faith", this usage of "fraud" isn't a meaningless [boo light](https://www.lesswrong.com/posts/dLbkrPu5STNCBLRjr/applause-lights). I specifically and literally mean it in [_Merriam-Webster_'s sense 2.a., "a person who is not what he or she pretends to be"](https://www.merriam-webster.com/dictionary/fraud)—and I think I've made my case. Someone who disagrees with my assessment needs to argue that I've gotten some specific thing wrong, [rather than objecting to character attacks on procedural grounds](https://www.lesswrong.com/posts/pkaagE6LAsGummWNv/contra-yudkowsky-on-epistemic-conduct-for-author-criticism). +A few clarifications are in order here. First, this usage of "fraud" isn't a meaningless [boo light](https://www.lesswrong.com/posts/dLbkrPu5STNCBLRjr/applause-lights). I specifically and literally mean it in [_Merriam-Webster_'s sense 2.a., "a person who is not what he or she pretends to be"](https://www.merriam-webster.com/dictionary/fraud)—and I think I've made my case. Someone who disagrees with my assessment needs to argue that I've gotten some specific thing wrong, [rather than objecting to character attacks on procedural grounds](https://www.lesswrong.com/posts/pkaagE6LAsGummWNv/contra-yudkowsky-on-epistemic-conduct-for-author-criticism). Second, it's a conditional: _if_ Yudkowsky can't unambiguously choose Truth over Feelings, _then_ he's a fraud. If he wanted to come clean—if he decided after all that he wanted it to be common knowledge in his Caliphate that gender-dysphoric people can stand what is true, because we are already enduring it—he could do so at any time. -- 2.17.1