From bba25ffa61c1dde92abdd620ab139cda1303a150 Mon Sep 17 00:00:00 2001
From: "M. Taylor Saotome-Westlake"
Date: Fri, 21 Oct 2022 22:55:11 -0700
Subject: [PATCH] =?utf8?q?memoir:=20can't=20leave=20well=20enough=20alone?=
 =?utf8?q?=20(to=20=C2=A7=20end)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=utf8
Content-Transfer-Encoding: 8bit

---
 ...-hill-of-validity-in-defense-of-meaning.md | 38 +++++++++++++++++--
 notes/a-hill-of-validity-sections.md          |  8 ++--
 2 files changed, 38 insertions(+), 8 deletions(-)

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 4b0b535..a54054f 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -516,7 +516,7 @@ But _selectively_ creating clarity down but not up power gradients just reinforc
 
 Somewhat apologetically, I replied that the distinction between truthfully, publicly criticizing group identities and _named individuals_ still seemed very significant to me?—and that avoiding leaking info from private conversations seemed like an important obligation, too. I would be way more comfortable writing [a scathing blog post about the behavior of "rationalists"](/2017/Jan/im-sick-of-being-lied-to/), than about a specific person not adhering to good discourse norms in an email conversation that they had good reason to expect to be private. I thought I was consistent about this: contrast my writing to the way that some anti-trans writers name-and-shame particular individuals. (The closest I had come was [mentioning Danielle Muscato as someone who doesn't pass](/2018/Dec/untitled-metablogging-26-december-2018/#photo-of-danielle-muscato)—and even there, I admitted it was "unclassy" and done in desperation of other ways to make the point having failed.) I had to acknowledge that criticism of non-exclusively-androphilic trans women in general _implied_ criticism of Jessica, and criticism of "rationalists" in general _implied_ criticism of Yudkowsky and Alexander and me, but the extra inferential step and "fog of probability" seemed useful for making the speech act less of an attack? Was I wrong?
 
-Michael said this was importantly backwards: less precise targeting is more violent. If someone said, "Michael Vassar is a terrible person", he would try to be curious, but if they don't have an argument, he would tend to worry more "for" them and less "about" them, whereas if someone said, "The Jews are terrible people", he saw that as a more serious threat to his safety. (And rationalists and trans women are exactly the sort of people that get targeted by the same people who target Jews.)
+Michael said this was importantly backwards: less precise targeting is more violent. If someone said, "Michael Vassar is a terrible person", he would try to be curious, but if they don't have an argument, he would tend to worry more "for" them and less "about" them, whereas if someone said, "The Jews are terrible people", he saw that as a more serious threat to his safety. (And rationalists and trans women are exactly the sort of people that get targeted by the same people who target Jews.)
 
 -----
 
@@ -815,9 +815,41 @@ The paragraph from "Kolmogorov Complicity" that I was thinking of was (bolding m
 
 I perceived a pattern where people who are in trouble with the orthodoxy feel an incentive to buy their own safety by denouncing _other_ heretics: not just _disagreeing_ with the other heretics _because those other heresies are in fact mistaken_, which would be right and proper Discourse, but denouncing them ("actively hostile to") as a way of paying Danegeld.
 
-Suppose there are five true heresies, but anyone who's on the record believing more than one gets burned as a witch. Then it's impossible to have a unified rationalist community, because people who want to talk about one heresy can't let themselves be seen in the company of people who believe another. That's why Scott Alexander couldn't get the philosophy-of-categorization right in full generality (even though he's [written](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world) [exhaustively](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/) about the correct way, and he and I have a common enemy in the social-justice egregore): _he couldn't afford to_. He'd already spent his Overton budget on anti-feminism.
+Suppose there are five true heresies, but anyone who's on the record believing more than one gets burned as a witch. Then it's impossible to have a unified rationalist community, because people who want to talk about one heresy can't let themselves be seen in the company of people who believe another. That's why Scott Alexander couldn't get the philosophy-of-categorization right in full generality (even though he'd [written](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world) [exhaustively](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/) about the correct way, and he and I have a common enemy in the social-justice egregore): _he couldn't afford to_. He'd already spent his Overton budget on anti-feminism.
 
-[TODO: ... remainder of "can't leave well enough alone" conversation]
+Scott (and Yudkowsky and Anna and the rest of the Caliphate) seemed to accept this as an inevitable background fact of existence, like the weather. But I saw a Schelling point off in the distance where us witches stick together for Free Speech, and it was _awfully_ tempting to try to jump there. (Of course, it would be _better_ if there was a way to organize just the good witches, and exclude all the Actually Bad witches, but the [Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) on witch Badness made that hard to organize without falling back to the one-heresy-per-thinker equilibrium.)
+
+Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" _facts_.) I agreed that conflating political positions with facts would be bad (and that it would be bad if I were doing that without "intending" to). I wasn't interested in defending the "alt-right" (whatever that means) broadly. But I had _learned stuff_ from reading far-right authors (most notably Moldbug), and from talking with my very smart neoreactionary (and former _Less Wrong_-er) friend. I was starting to appreciate [what Michael had said about "Less precise is more violent" back in April](#less-precise-is-more-violent) (when I was talking about criticizing "rationalists").
+
+Jessica asked if my opinion would change depending on whether Yudkowsky thought neoreaction was intellectually worth engaging with. (Yudkowsky [had said years ago](https://www.lesswrong.com/posts/6qPextf9KyWLFJ53j/why-is-mencius-moldbug-so-popular-on-less-wrong-answer-he-s?commentId=TcLhiMk8BTp4vN3Zs) that Moldbug was low quality.)
+
+I did believe that Yudkowsky believed that neoreaction was not worth engaging with. I would never fault anyone for saying "I vehemently disagree with what little I've read and/or heard of this-and-such author." I wasn't accusing Yudkowsky of being insincere.
+
+What I _did_ think was that the need to keep up appearances of not-being-a-right-wing-Bad-Guy was a pretty serious distortion on people's beliefs, because there are at least a few questions-of-fact where believing the correct answer can, in today's political environment, be used to paint one as a right-wing Bad Guy. I would have hoped for Yudkowsky to _notice that this is a rationality problem_, and to _not actively make the problem worse_, and I was counting "I do not welcome support from those quarters" as making the problem worse insofar as it would seem to imply that the extent to which I thought I'd learned valuable things from Moldbug made me less welcome in Yudkowsky's fiefdom.
+
+Yudkowsky certainly wouldn't endorse "Even learning things from these people makes you unwelcome" _as stated_, but "I do not welcome support from those quarters" still seemed like a _pointlessly_ partisan silencing/shunning attempt, when one could just as easily say, "I'm not a neoreactionary, and if some people who read me are, that's _obviously not my fault_."
+
+Jessica asked if Yudkowsky denouncing neoreaction and the alt-right would still seem harmful if he were _also_ to acknowledge, _e.g._, racial IQ differences?
+
+I agreed that it would be helpful, but realistically, I didn't see why Yudkowsky should want to poke the race-differences hornet's nest. This was the tragedy of recursive silencing: if you can't afford to engage with heterodox ideas, you either become an [evidence-filtering clever arguer](https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence), or you're not allowed to talk about anything except math. (Not even the relationship between math and human natural language, as we had found out recently.)
+
+It was as if there was a "Say Everything" attractor, and a "Say Nothing" attractor, and _my_ incentives were pushing me towards the "Say Everything" attractor—but that was only because I had [Something to Protect](/2019/Jul/the-source-of-our-power/) in the forbidden zone and I was a good programmer (who could therefore expect to be employable somewhere, just as [James Damore eventually found another job](https://twitter.com/JamesADamore/status/1034623633174478849)). Anyone in less extreme circumstances would find themselves being pushed to the "Say Nothing" attractor.
+
+It was instructive to compare this new disavowal of neoreaction with one from 2013 (quoted by [Moldbug](https://www.unqualified-reservations.org/2013/11/mr-jones-is-rather-concerned/) and [others](https://medium.com/@2045singularity/white-supremacist-futurism-81be3fa7020d)[^linkrot]), in response to a _TechCrunch_ article citing former MIRI employee Michael Anissimov's neoreactionary blog _More Right_:
+
+[^linkrot]: The original _TechCrunch_ comment would seem to have succumbed to [linkrot](https://www.gwern.net/Archiving-URLs#link-rot).
+ +> "More Right" is not any kind of acknowledged offspring of Less Wrong nor is it so much as linked to by the Less Wrong site. We are not part of a neoreactionary conspiracy. We are and have been explicitly pro-Enlightenment, as such, under that name. Should it be the case that any neoreactionary is citing me as a supporter of their ideas, I was never asked and never gave my consent. [...] +> +> Also to be clear: I try not to dismiss ideas out of hand due to fear of public unpopularity. However I found Scott Alexander's takedown of neoreaction convincing and thus I shrugged and didn't bother to investigate further. + +My "negotiating with terrorists" criticism did _not_ apply to the 2013 statement. "More Right" _was_ brand encroachment on Anissimov's part that Yudkowsky had a legitimate interest in policing, _and_ the "I try not to dismiss ideas out of hand" disclaimer importantly avoided legitimizing the McCarthyist persecution. + +The question was, what had specifically happened in the last six years to shift Eliezer's opinion on neoreaction from (paraphrased) "Scott says it's wrong, so I stopped reading" to (verbatim) "actively hostile"? Note especially the inversion from (both paraphrased) "I don't support neoreaction" (fine, of course) to "I don't even want _them_ supporting _me_" [(_?!?!_)](https://twitter.com/zackmdavis/status/1164329446314135552).[^them-supporting-me] + +[^them-supporting-me]: Humans with very different views on politics nevertheless have a common interest in not being transformed into paperclips! + +Did Yudkowsky get new information about neoreaction's hidden Badness parameter, or did moral coercion on him from the left intensify (because Trump and [because Berkeley](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/))? My bet was on the latter. ----- diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md index 18142a1..115047c 100644 --- a/notes/a-hill-of-validity-sections.md +++ b/notes/a-hill-of-validity-sections.md @@ -1,9 +1,6 @@ -on deck— -_ 2019 event summary -_ giving up on him, don't want to waste any more time, last email - - With internet available— +_ "McCarthyist" should link to Moldbug on the "Brown Scare" +_ check on wording of Yudkowsky's anti-Moldbug comment _ my public objection to Yudkowsky's denunciation of NRx https://twitter.com/zackmdavis/status/1164259164819845120 _ except when it's net bad to have concluded Y: https://www.greaterwrong.com/posts/BgEG9RZBtQMLGuqm7/[Error%20communicating%20with%20LW2%20server]/comment/LgLx6AD94c2odFxs4 _ my history of sniping in Yudkowsky's mentions @@ -43,6 +40,7 @@ _ quote one more "Hill of Meaning" Tweet emphasizing fact/policy distinction _ conversation with Ben about physical injuries (this is important because it explains where the "cut my dick off rhetoric" came from) _ context of his claim to not be taking a stand _ clarify "Merlin didn't like Vassar" example about Mike's name +_ being friends with Anna desipite being political enemies (~May 2019) _ rephrase "gamete size" discussion to make it clearer that Yudkowsky's proposal also implicitly requires people to be agree about the clustering thing _ smoother transition between "deliberately ambiguous" and "was playing dumb"; I'm not being paranoid for attributing political motives to him, because he told us that he's doing it _ when I'm too close to verbatim-quoting someone's email, actually use a verbatim quote and put it in quotes -- 2.17.1