From 7fb8668bed5cb8806a57e8b45cd38c915a7136d6 Mon Sep 17 00:00:00 2001
From: "Zack M. Davis"
Date: Fri, 22 Dec 2023 17:19:45 -0800
Subject: [PATCH] memoir: pt. 4 red team edits

---
 ...-that-exhibit-generally-rationalist-principles.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
index 12e3bc1..73d8212 100644
--- a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
+++ b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
@@ -41,7 +41,7 @@ There are a lot of standard caveats that go here which Scott would no doubt scru

[^bet]: It's just—how much do you want to bet on that? How much do you think _Scott_ wants to bet?

-But anyone who's read _and understood_ Charles Murray's work, knows that [Murray also includes the standard caveats](/2020/Apr/book-review-human-diversity/#individuals-should-not-be-judged-by-the-average)![^murray-caveat] (Even though the one about group differences not implying anything about individuals is [actually wrong](/2022/Jun/comment-on-a-scene-from-planecrash-crisis-of-faith/).) The _Times_'s insinuation that Scott Alexander is a racist _like Charles Murray_ seems like a "[Gettier](https://en.wikipedia.org/wiki/Gettier_problem) attack": the charge is essentially correct, even though the evidence used to prosecute the charge before a jury of distracted _New York Times_ readers is completely bogus.
+But anyone who's read _and understood_ Charles Murray's work, knows that [Murray also includes the standard caveats](/2020/Apr/book-review-human-diversity/#individuals-should-not-be-judged-by-the-average)![^murray-caveat] (Even though the one about group differences not implying anything about individuals is [technically wrong](/2022/Jun/comment-on-a-scene-from-planecrash-crisis-of-faith/).) The _Times_'s insinuation that Scott Alexander is a racist _like Charles Murray_ seems like a "[Gettier](https://en.wikipedia.org/wiki/Gettier_problem) attack": the charge is essentially correct, even though the evidence used to prosecute the charge before a jury of distracted _New York Times_ readers is completely bogus.

[^murray-caveat]: For example, the introductory summary for Ch. 13 of _The Bell Curve_, "Ethnic Differences in Cognitive Ability", states: "Even if the differences between races were entirely genetic (which they surely are not), it should make no practical difference in how individuals deal with each other."

@@ -115,7 +115,7 @@ Yudkowsky [replied that](/images/yudkowsky-we_need_to_exclude_evil_bullies.png)

I'll agree that the problems shouldn't be confused. Psychology is complicated, and people have more than one reason for doing things: I can easily believe that Brennan was largely driven by bully-like motives even if he told himself a story about being a valiant whistleblower defending Cade Metz's honor against Scott's deception.

-But I think it's important to notice both problems, instead of pretending that the only problem was Brennan's disregard for Alexander's privacy. It's one thing to believe that people should keep promises that they, themselves, explicitly made.
But instructing commenters not to link to the email seems to imply not just that Brennan should keep _his_ promises, but that everyone else is obligated to participate in a conspiracy to conceal information that Alexander would prefer concealed. I can see an ethical case for it, analogous to returning stolen property after it's already been sold and expecting buyers not to buy items that they know have been stolen. (If Brennan had obeyed Alexander's confidentiality demand, we wouldn't have an email to link to, so if we wish Brennan had obeyed, we can just act as if we don't have an email to link to.)
+But I think it's important to notice both problems, instead of pretending that the only problem was Brennan's disregard for Alexander's privacy. It's one thing to believe that people should keep promises that they, themselves, explicitly made. But instructing commenters not to link to the email seems to suggest not just that Brennan should keep _his_ promises, but that everyone else should participate in a conspiracy to conceal information that Alexander would prefer concealed. I can see an ethical case for it, analogous to returning stolen property after it's already been sold and expecting buyers not to buy items that they know have been stolen. (If Brennan had obeyed Alexander's confidentiality demand, we wouldn't have an email to link to, so if we wish Brennan had obeyed, we can just act as if we don't have an email to link to.)

But there's also a non-evil-bully case for wanting to reveal information, rather than participate in a cover-up to protect the image of the "rationalists" as non-threatening to the progressive egregore. If the orchestrators of the cover-up can't even acknowledge to themselves that they're orchestrating a cover-up, they're liable to be confusing themselves about other things, too.

@@ -356,9 +356,9 @@ With this understanding of bad faith, we can read Yudkowsky's "it is sometimes p

To his credit, he will admit that he's only willing to address a selected subset of arguments—but while doing so, he claims an absurd "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking" while blatantly mischaracterizing his opponents' beliefs! ("Gendered Pronouns for Everyone and Asking To Leave the System Is Lying" doesn't pass anyone's [ideological Turing test](https://www.econlib.org/archives/2011/06/the_ideological.html).)

-Counterarguments aren't completely causally inert: if you can make an extremely strong case that Biological Sex Is Sometimes More Relevant Than Subjective Gender Identity (Such That Some People Perceive an Interest in Using Language Accordingly), Yudkowsky will put some effort into coming up with some ingenious excuse for why he _technically_ never said otherwise, in ways that exhibit generally rationalist principles. But ultimately, Yudkowsky is going to say what he needs to say in order to protect his reputation with progressives, as is sometimes personally prudent.
+Counterarguments aren't completely causally inert: if you can make an extremely strong case that Biological Sex Is Sometimes More Relevant Than Subjective Gender Identity (Such That Some People Perceive an Interest in Using Language Accordingly), Yudkowsky will put some effort into coming up with some ingenious excuse to dodge your claim, in ways that exhibit generally rationalist principles.
Ultimately, Yudkowsky is going to say what he needs to say in order to protect his reputation with progressives, as is sometimes personally prudent.

-Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that Yudkowsky is making the wrong decision. Again, "bad faith" is meant as a literal description that makes predictions about behavior—maybe there are circumstances in which engaging some amount of bad faith is the right thing to do, given the constraints one faces! For example, when talking to people on Twitter with a very different ideological background from mine, I sometimes anticipate that if my interlocutor knew what I was thinking, they wouldn't want to talk to me, so I word my replies so that I [seem more ideologically aligned with them than I actually am](https://geekfeminism.fandom.com/wiki/Concern_troll). (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) I think of this as the minimal amount of strategic bad faith needed to keep the conversation going—to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy. I'm willing to defend this behavior. There _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that in the service of correcting the distortion where I don't think my interlocutor _should_ be paying attention to my alignment.
+Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that he's making the wrong decision. Again, "bad faith" is meant as a literal description that makes predictions about behavior—maybe there are circumstances in which engaging some amount of bad faith is the right thing to do, given the constraints one faces! For example, when talking to people on Twitter with a very different ideological background from mine, I sometimes anticipate that if my interlocutor knew what I was thinking, they wouldn't want to talk to me, so I word my replies so that I [seem more ideologically aligned with them than I actually am](https://geekfeminism.fandom.com/wiki/Concern_troll). (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) I think of this as the minimal amount of strategic bad faith needed to keep the conversation going—to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy. I'm willing to defend this behavior. There _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that in the service of correcting the distortion where I don't think my interlocutor _should_ be paying attention to my alignment.

That is, my bad faith concern-trolling gambit of misrepresenting my ideological alignment to improve the discussion seems beneficial to the accuracy of our collective beliefs about the topic. (And the topic is presumably of greater collective interest than which "side" I personally happen to be on.)
@@ -391,7 +391,7 @@ In the comments, he added:

He [later clarified on Twitter](https://twitter.com/ESYudkowsky/status/1404821285276774403), "It is not trans-specific. When people tell me I helped them, I mostly believe them and am happy."

-But if Stalin is committed to convincing gender-dysphoric males that they need to cut their dicks off, and you're committed to not disagreeing with Stalin, you _shouldn't_ mostly believe it when gender-dysphoric males thank you for providing the final piece of evidence they needed to realize that they need to cut their dicks off, for the same reason a self-aware Republican shill shouldn't uncritically believe it when people thank him for warning them against Democrat treachery. We know—he's told us very clearly—that Yudkowsky isn't trying to provide gender-dysphoric people with the full state of information that they would need to decide on the optimal quality-of-life interventions. He's playing on a different chessboard.
+But if Stalin is committed to convincing gender-dysphoric males that they need to cut their dicks off, and you're committed to not disagreeing with Stalin, you _shouldn't_ mostly believe it when gender-dysphoric males thank you for providing the final piece of evidence they needed to realize that they need to cut their dicks off, for the same reason a self-aware Republican shill shouldn't uncritically believe it when people thank him for warning them against Democrat treachery. We know—he's told us very clearly—that Yudkowsky isn't trying to be a neutral purveyor of decision-relevant information on this topic. He's playing on a different chessboard.

### A Fire of Inner Life

@@ -475,7 +475,7 @@ In the context of AI alignment theory, Yudkowsky has written about a "nearest un

Suppose you developed an AI to [maximize human happiness subject to the constraint of obeying explicit orders](https://arbital.greaterwrong.com/p/nearest_unblocked#exampleproducinghappiness). It might first try forcibly administering heroin to humans. When you order it not to, it might switch to administering cocaine. When you order it not to forcibly administer any kind of drug, it might switch to forcibly implanting electrodes in humans' brains, or just _paying_ the humans to take heroin, _&c._

-It's the same thing with Yudkowsky's political risk minimization subject to the constraint of not saying anything he knows to be false. First he comes out with ["I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) (March 2016). When you point out that his own pre–[Great Awokening](https://www.vox.com/2019/3/22/18259865/great-awokening-white-liberals-race-polling-trump-2020) writings [explain why that's not true](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), then the next time he revisits the subject, he switches to ["you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning"](https://archive.is/Iy8Lq) (November 2018). When you point out that his earlier writings also explain why [_that's_ not true either](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong), he switches to "It is Shenanigans to try to bake your stance on how clustered things are [...] _into the pronoun system of a language and interpretation convention that you insist everybody use_" (February 2021).
When you point out that [that's not what's going on](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/), he switches to ... I don't know, but he's a smart guy; in the unlikely event that he sees fit to respond to this post, I'm sure he'll be able to think of something—but at this point, _I have no reason to care_. Talking to Yudkowsky on topics where getting the right answer would involve acknowledging facts that would make you unpopular in Berkeley is a waste of everyone's time; he has a [bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line) that doesn't involve trying to inform you.
+It's the same thing with Yudkowsky's political risk minimization subject to the constraint of not saying anything he knows to be false. First he comes out with ["I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) (March 2016). When you point out that his own writings from fifteen years ago [explain why that's not true](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), then the next time he revisits the subject, he switches to ["you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning"](https://archive.is/Iy8Lq) (November 2018). When you point out that his earlier writings also explain why [_that's_ not true either](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong), he switches to "It is Shenanigans to try to bake your stance on how clustered things are [...] _into the pronoun system of a language and interpretation convention that you insist everybody use_" (February 2021). When you point out that [that's not what's going on](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/), he switches to ... I don't know, but he's a smart guy; in the unlikely event that he sees fit to respond to this post, I'm sure he'll be able to think of something—but at this point, _I have no reason to care_. Talking to Yudkowsky on topics where getting the right answer would involve acknowledging facts that would make you unpopular in Berkeley is a waste of everyone's time; he has a [bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line) that doesn't involve trying to inform you.

Accusing one's interlocutor of bad faith is frowned upon for a reason. We would prefer to live in a world where we have intellectually fruitful object-level discussions under the assumption of good faith, rather than risk our fora degenerating into accusations and name-calling, which is unpleasant and (more importantly) doesn't make any intellectual progress.

--
2.17.1