From cb7d9d1ef258b74985fd17c6a0ce0cf88abdc4f7 Mon Sep 17 00:00:00 2001 From: "M. Taylor Saotome-Westlake" Date: Sat, 23 Jul 2022 19:46:46 -0700 Subject: [PATCH] long confrontation 15: should I be satisfied? The scratcher ends with "COIN". (No prize.) This has been a pretty productive day! If I can just keep doing this on all of my non-dayjob days, I can finish this, and be at peace. And maybe let go of my anger at Yudkowsky, once I'm done processing it? As I mention in my marketing notes, I do (in my wiser moments) think this document actually hits better if it's not an angry attack. It's a memoir that explains why I want to attack. But ultimately, attacks are boring: to the extent that my main message is "Yudkowsky Dishonest and Bad", no one will or should want to read this. But if I manage to present the socio-psychological insights of our Vassarite coordination group, that's actually interesting. --- ...-hill-of-validity-in-defense-of-meaning.md | 35 ++++++++++++------- notes/a-hill-marketing.txt | 4 +++ 2 files changed, 27 insertions(+), 12 deletions(-) diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index 73639b6..31fc549 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -182,20 +182,17 @@ As for the attempt to intervene on Yudkowsky—well, [again](/2022/TODO/blanchar > [TODO: "not ontologically confused" concession https://twitter.com/ESYudkowsky/status/1068071036732694529 ] -Look at that! The great _Eliezer Yudkowsky_ said that my position is not ontologically confused. That's _probably_ high praise coming from him! You might think that should be the end of the matter. Yudkowsky denounced a particular philosophical confusion; I already had a related objection written up; and he acknowledged my objection as not being the confusion he was trying to police. I should be satisfied, right? 
It would be _greedy_ of me to expect more, right? +Look at that! The great _Eliezer Yudkowsky_ said that my position is not ontologically confused. That's _probably_ high praise coming from him! You might think that should be the end of the matter. Yudkowsky denounced a particular philosophical confusion; I already had a related objection written up; and he acknowledged my objection as not being the confusion he was trying to police. I _should_ be satisfied, right? -[TODO: but this little "not ontologically confused" at the bottom of the thread was much less visible and loud than the bold, arrogant top-level pronouncement insinuating that GCs are philosophically confused. Was I greedy to want something louder? +I wasn't, in fact, satisfied. This little "not ontologically confused" concession buried in the replies was _much less visible_ than the bombastic, arrogant top-level pronouncement insinuating that resistance to gender-identity claims _was_ confused. I expected that the typical reader who had gotten the impression from the initial thread that gender-identity skeptics didn't have a leg to stand on (according to the great Eliezer Yudkowsky) would not, actually, be disabused of the impression by this little follow-up. Was it greedy of me to want something _louder_? -Greedy or not, I wasn't satisfied. - - -On 1 December, I wrote to Scott Alexander, asking if there was any chance of an _explicit_ and _loud_ clarification or partial-retraction of ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (Subject: "super-presumptuous mail about categorization and the influence graph"). _Forget_ my boring whining about the autogynephilia/two-types thing, I said—that's a complicated empirical claim, and _not_ the key issue. +Greedy or not, I wasn't done flipping out.
On 1 December, I wrote to Scott Alexander, asking if there was any chance of an _explicit_ and _loud_ clarification or partial-retraction of ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (Subject: "super-presumptuous mail about categorization and the influence graph"). _Forget_ my boring whining about the autogynephilia/two-types thing, I said—that's a complicated empirical claim, and _not_ the key issue. The _issue_ is that category boundaries are not arbitrary (if you care about intelligence being useful): you want to [draw your category boundaries such that](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) things in the same category are similar in the respects that you care about predicting/controlling, and you want to spend your [information-theoretically limited budget](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes) of short words on the simplest and most wide-rangingly useful categories. -It's true that the reason _I_ was continuing to freak out about this to the extent of sending you him this obnoxious email telling him what to write (seriously, what kind of asshole does that?!) has to with transgender stuff, but that's not the reason _Scott_ should care. Rather, it's like [his parable about whether thunder or lightning comes first](http://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/): there aren't many direct non-rationalizable-around consequences of the sacred dogmas that thunder comes before lightning or that biological sex somehow isn't real; the problem is that the need to _defend_ the sacred dogma _destroys everyone's ability to think_. 
If our vaunted rationality techniques result in me having to spend dozens of hours patiently explaining why I don't think that I'm a woman and that [the person in this photograph](https://daniellemuscato.startlogic.com/uploads/3/4/9/3/34938114/2249042_orig.jpg) isn't a woman, either (where "isn't a woman" is a convenient rhetorical shorthand for a much longer statement about [naïve Bayes models](https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes) and [high-dimensional configuration spaces](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace) and [defensible Schelling points for social norms](https://www.lesswrong.com/posts/Kbm6QnJv9dgWsPHQP/schelling-fences-on-slippery-slopes)), then our techniques are _worse than useless_. If Galileo ever muttered "And yet it moves", there's a long and nuanced conversation you could have about the consequences of using the word "moves" in Galileo's preferred sense or some other sense that happens to result in the theory needing more epicycles. It may not have been obvious in 2014, but in retrospect, _maybe_ it was a _bad_ idea to build a [memetic superweapon](https://archive.is/VEeqX) that says the number of epicycles _doesn't matter_. +It's true that the reason _I_ was continuing to freak out about this to the extent of sending him this obnoxious email telling him what to write (seriously, what kind of asshole does that?!) had to do with transgender stuff, but that's not the reason _Scott_ should care. Rather, it's like [his parable about whether thunder or lightning comes first](http://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/): there aren't many direct non-rationalizable-around consequences of the sacred dogmas that thunder comes before lightning or that biological sex somehow isn't real; the problem is that the need to _defend_ the sacred dogma _destroys everyone's ability to think_.
If our vaunted rationality techniques result in me having to spend dozens of hours patiently explaining why I don't think that I'm a woman and that [the person in this photograph](https://daniellemuscato.startlogic.com/uploads/3/4/9/3/34938114/2249042_orig.jpg) isn't a woman, either (where "isn't a woman" is a convenient rhetorical shorthand for a much longer statement about [naïve Bayes models](https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes) and [high-dimensional configuration spaces](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace) and [defensible Schelling points for social norms](https://www.lesswrong.com/posts/Kbm6QnJv9dgWsPHQP/schelling-fences-on-slippery-slopes)), then our techniques are _worse than useless_. If Galileo ever muttered "And yet it moves", there's a long and nuanced conversation you could have about the consequences of using the word "moves" in Galileo's preferred sense or some other sense that happens to result in the theory needing more epicycles. It may not have been obvious in 2014, but in retrospect, _maybe_ it was a _bad_ idea to build a [memetic superweapon](https://archive.is/VEeqX) that says the number of epicycles _doesn't matter_. -And the reason to write this is a desperate email plea to Scott Alexander when I could be working on my own blog, was that I was afraid that marketing is a more powerful force than argument. Rather than good arguments propagating through the population of so-called "rationalists" no matter where they arise, what actually happens is that people like Eliezer and you rise to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else sort-of absorbs most of their worldview (plus noise and [conformity with the local environment](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/)). 
So for people who didn't [win the talent lottery](http://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) but think they see a flaw in the Zeitgeist, the winning move is "persuade Scott Alexander". +And the reason to write this as a desperate email plea to Scott Alexander when I could be working on my own blog, was that I was afraid that marketing is a more powerful force than argument. Rather than good arguments propagating through the population of so-called "rationalists" no matter where they arise, what actually happens is that people like him and Yudkowsky rise to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else sort-of absorbs most of their worldview (plus noise and [conformity with the local environment](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/)). So for people who didn't [win the talent lottery](http://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) but think they see a flaw in the Zeitgeist, the winning move is "persuade Scott Alexander". So, what do you say, Scott? Back in 2010, the rationalist community had a shared understanding that the function of language is to describe reality. Now, we don't. If you don't want to cite my creepy blog about my creepy fetish, that's fine. I like getting credit, but the important thing is that this "No, the Emperor isn't naked—oh, well, we're not claiming that he's wearing any garments—it would be pretty weird if we were claiming that!—it's just that utilitarianism implies that the social property of clothedness should be defined this way because to do otherwise would be really mean to people who don't have anything to wear" gaslighting maneuver needs to _die_. You alone can kill it.
@@ -212,9 +209,9 @@ Scott says, "It seems to me pretty obvious that the mental health benefits to tr [TODO: connecting with Aurora 8 December, maybe not important] -Anna told me that my "You have to pass my litmus test or I lose all respect for you as a rationalist" attitude was psychologically coercive. I agreed—I was even willing to go up to "violent"—in the sense that it's [trying to apply social incentives towards an outcome rather than merely exchanging information](http://zackmdavis.net/blog/2017/03/an-intuition-on-the-bayes-structural-justification-for-free-speech-norms/). But sometimes you need to use violence in defense of self or property, even if violence is generally bad. If we think of the "rationalist" label as intellectual property, maybe it's property worth defending, and if so, then "I can define a word any way I want" isn't obviously a terrible time to start shooting at the bandits? What makes my "... or I lose all respect for you as a rationalist" moves worthy of your mild reproach, but "You're not allowed to call this obviously biologically-female person a woman, or I lose all respect for you as not-an-asshole" merely a puzzling sociological phenomenon that might be adaptive in some not-yet-understood way? Isn't the violence-structure basically the same? Is there any room in civilization for self-defense? +Anna told me that my "You have to pass my litmus test or I lose all respect for you as a rationalist" attitude was psychologically coercive. I agreed—I was even willing to go up to "violent"—in the sense that it's [trying to apply social incentives towards an outcome rather than merely exchanging information](http://zackmdavis.net/blog/2017/03/an-intuition-on-the-bayes-structural-justification-for-free-speech-norms/). But sometimes you need to use violence in defense of self or property, even if violence is generally bad. 
If we think of the "rationalist" label as intellectual property, maybe it's property worth defending, and if so, then "I can define a word any way I want" isn't obviously a terrible time to start shooting at the bandits? What makes my "... or I lose all respect for you as a rationalist" moves worthy of your mild reproach, but "You're not allowed to call this obviously biologically-female person a woman, or I lose all respect for you as not-an-asshole" merely a puzzling sociological phenomenon that might be adaptive in some not-yet-understood way? Isn't the violence-structure basically the same? Is there any room in civilization for self-defense? -When I told Michael about this, he said that I was ethically or 'legally' in the right here, and the rationalist equivalent of a lawyer matters more for your claims than the equivalent of a scientist, and that Ben Hoffman (who I had already shared the thread with Scott with) would be helpful in solidifying my claims to IP defense. I said that I didn't _feel_ like I'm in the right, even if I can't point to a superior counterargument that I want to yield to, just because I'm getting fatigued from all the social-aggression I've been doing. (If someone tries to take your property and you shoot at them, you could be said to be the "aggressor" in the sense that you fired the first shot, even if you hope that the courts will uphold your property claim later.) +When I told Michael about this, he said that I was ethically or 'legally' in the right here, and the rationalist equivalent of a lawyer mattered more for my claims than the equivalent of a scientist, and that Ben Hoffman (with whom I had already shared the Scott thread) would be helpful in solidifying my claims to IP defense. I said that I didn't _feel_ like I'm in the right, even if I can't point to a superior counterargument that I want to yield to, just because I'm getting fatigued from all the social-aggression I've been doing.
(If someone tries to take your property and you shoot at them, you could be said to be the "aggressor" in the sense that you fired the first shot, even if you hope that the courts will uphold your property claim later.) [TODO: re Ben's involvement—I shared Scott thread with Ben and Katie; Michael said "could you share this with Ben? I think he is ready to try & help." on 17 December 19 December @@ -255,9 +252,23 @@ Sarah shying away, my rallying cry— > In phrasing it that way, you're not saying that composites are bad; it's just that it makes sense to use language to asymmetrically distinguish between the natural thing that already existed, and the synthetic thing that has been deliberately engineered to resemble the original thing as much as possible. -[TODO: 4 January plea to Yudkowsky again—this is the part that I should base my thread analysis off of] +[TODO: 4 January plea to Yudkowsky again] + +[TODO: Ben— +> I am pretty worried that if I actually point out the physical injuries sustained by some of the smartest, clearest-thinking, and kindest people I know in the Rationalist community as a result of this sort of thing, I'll be dismissed as a mean person who wants to make other people feel bad. + +I assumed he was talking about Katie's institutionalization, not SRS + +> where gaslighting escalated into more tangible harm in a way that people wouldn't know about by default. In contrast, people already know that bottom surgery is a thing; you just have reasons to think it's Actually Bad—reasons that your friends can't engage with if we don't know what you're talking about. It's already bad enough that Eliezer is being so pointlessly cagey; if everyone does it, then we're really doomed. + +Ben's actual worry— +> "saying the wrong keywords causes people in this conversation to start talking about me using the wrong keywords in ways that cause me illegible, hard-to-trace damage." 
+ +> marginalization happens through processes designed to be hard for the "rider" in the horse-rider system to see. This makes the threat model hard to communicate, which puts the rider in a double-bind with respect to things like courage, because it's dependent on the horse for most of its information. + +] -[TODO: ... email analysis goes up to 4 January] +[TODO: ... email analysis goes up to 14 January] [TODO: proton concession] diff --git a/notes/a-hill-marketing.txt b/notes/a-hill-marketing.txt index aae7476..ee59346 100644 --- a/notes/a-hill-marketing.txt +++ b/notes/a-hill-marketing.txt @@ -1,3 +1,7 @@ +Or maybe the summary comment would actually make things worse? It would turn the memoir into an instrument of war. But maybe the Whole Dumb Story speaks for itself. (In the process of writing, I realize that Yudkowsky _did_ give me a _lot_ of concessions—so many, that I shouldn't want to make war against him, because war is the wrong framing: I want to explain, to anyone who can still listen, about how it's terrible that our incentives make people take for granted that speech doesn't work.) + +------ + I could use some politics/etiquette advice I want to publish (as a comment on the _Less Wrong_ linkpost for my forthcoming memoir) a summary of Why I Don't Trust Eliezer Yudkowsky's Intellectual Honesty, and promote the comment on social media -- 2.17.1