From: M. Taylor Saotome-Westlake Date: Wed, 29 Mar 2023 05:00:02 +0000 (-0700) Subject: memoir: progress towards discussing the real thing X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=068fdcc76e6d3c8ee32c333672a69d4e7798cf48;p=Ultimately_Untrue_Thought.git memoir: progress towards discussing the real thing --- diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md index 704e417..ee0847c 100644 --- a/content/drafts/if-clarity-seems-like-death-to-them.md +++ b/content/drafts/if-clarity-seems-like-death-to-them.md @@ -201,7 +201,7 @@ The meta-discussion on _Less Wrong_ started to get heated. Ruby claimed: [I replied to Ruby that](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=v3zh3KhKNTdMXWkJH) you could just directly respond to your interlocutor's arguments. Whether or not you respect them as a thinker is _off-topic_. "You said X, but this is wrong because of Y" isn't a personal attack! -Jessica said that there's no point in getting mad at MOPs. I said I was a _little_ bit mad, because I specialized in discourse strategies that were susceptible to getting trolled like this. I thought it was ironic that this happened on a post that was _explicitly_ about causal _vs._ social reality; it's possible that I wouldn't be inclined to be such a hardass about "whether or not I respect you is off-topic" if it weren't for that prompt. +Jessica said that there's no point in getting mad at [MOPs](http://benjaminrosshoffman.com/construction-beacons/). I said I was a _little_ bit mad, because I specialized in discourse strategies that were susceptible to getting trolled like this. I thought it was ironic that this happened on a post that was _explicitly_ about causal _vs._ social reality; it's possible that I wouldn't be inclined to be such a hardass about "whether or not I respect you is off-topic" if it weren't for that prompt. 
Jessica ended up writing a post, ["Self-Consciousness Wants Everything to Be About Itself"](https://www.lesswrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself), arguing that tone arguments are mainly about people silencing discussion of actual problems in order to protect their feelings, using as a central example a case study of a college official crying and saying that she "felt attacked" in response to complaints about her office being insufficiently supportive of a racial community. @@ -217,9 +217,11 @@ Secret posse member said that the amount of social-justice talk in the post rose On 4 July 2019, Scott Alexander published ["Some Clarifications on Rationalist Blogging"](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/), disclaiming any authority as a "rationalist" leader. ("I don't want to claim this blog is doing any kind of special 'rationality' work beyond showing people interesting problems [...] Insofar as [_Slate Star Codex_] makes any pretensions to being 'rationalist', it's a rationalist picnic and not a rationalist monastery.") I assumed this was inspired by Ben's request back in March that Scott "alter the beacon" so as to not confuse people about what the current-year community was. I appreciated it. +------- + Also in early July 2019, Jessica published ["The AI Timelines Scam"](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), arguing that the recent popularity of "short" (_e.g._, 2030) AI timelines was better explained by political factors, rather than any technical arguments: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own project. -(Remember, this was 2019. 
After seeing what GPT-3/PaLM/DALL-E/_&c._ could do during the ["long May 2020"](https://twitter.com/MichaelTrazzi/status/1635871679133130752), it's now looking to me like the short-timelines people had better intuitions than Jessica gave them credit for. I won't say, "How could I have known?", but at the time, I didn't, actually, know.) +(Remember, this was 2019. After seeing what GPT-3/PaLM/DALL-E/_&c._ could do during the ["long May 2020"](https://twitter.com/MichaelTrazzi/status/1635871679133130752), it's now looking to me like the short-timelines people had better intuitions than Jessica gave them credit for.) I still sympathized with the "mainstream" pushback against using "scam"/"fraud"/"lie"/_&c._ language to include motivated [elephant-in-the-brain](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain)-like distortions. I conceded that this was a _boring_ semantic argument, but I feared that until we invented better linguistic technology, the _boring_ semantic argument was going to _continue_ sucking up discussion bandwidth with others when it didn't need to. @@ -233,9 +235,31 @@ I replied that the concept of [_mens rea_](https://www.law.cornell.edu/wex/mens_ Ben replied that what mattered in the determination of manslaughter _vs._ murder was whether there was long-horizon optimization power towards the outcome of someone's death; it wasn't about what sentiments the killer rehearsed in their working memory. -On a phone call, Michael made an analogy between EA and Catholicism. The Pope was fraudulent, because the legitimacy of the Pope's position (and his claims to power and resources) rested on the pretense that he had a direct relationship with God, which wasn't true, and the Pope _knew_ (he _had_ to know, at some level) that it asn't true. (I agreed that this usage of _fraud_ made sense to me.) 
In Michael's view, Ben's charges against GiveWell were similar: GiveWell's legitimacy rested on the pretense that they were making decisions based on numbers, and they [had to know at some level](https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/) that they weren't doing that.
+On a phone call, Michael made an analogy between EA and Catholicism. The Pope was fraudulent, because the legitimacy of the Pope's position (and his claims to power and resources) rested on the pretense that he had a direct relationship with God, which wasn't true, and the Pope knew (he _had_ to know, at some level) that it wasn't true. (I agreed that this usage of _fraud_ made sense to me.) In Michael's view, Ben's charges against GiveWell were similar: GiveWell's legitimacy rested on the pretense that they were making decisions based on numbers, and they [had to know at some level](https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/) that they weren't doing that.
 
-[TODO: secret thread with Ruby; "uh, guys??" to Steven and Anna; people say "Yes, of course criticism and truthseeking is important; I just think that tact is important, too," only to go on and dismiss any _particular_ criticism as insufficiently tactful.]
+------
+
+Ruby wrote a document about ways in which one's speech could harm people, which was discussed in the comments of a draft _Less Wrong_ post by some of our posse members and some of the _Less Wrong_ mods. (The draft was hidden, but the API apparently didn't filter out comments on hidden posts, and the thread was visible on the third-party [GreaterWrong](https://www.greaterwrong.com/) site; I [filed a bug](https://github.com/LessWrong2/Lesswrong2/issues/2161).)
+ +Ben wrote: + +> What I see as under threat is the ability to say in a way that's actually heard, not only that opinion X is false, but that the process generating opinion X is untrustworthy, and perhaps actively optimizing in an objectionable direction. Frequently, attempts to say this are construed _primarily_ as moves to attack some person or institution, pushing them into the outgroup. Frequently, people suggest to me an "equivalent" wording with a softer tone, which in fact omits important substantive criticisms I mean to make, while claiming to understand what's at issue. + +Ray Arnold (another _Less Wrong_ mod) replied: + +> My core claim is: "right now, this isn't possible, without a) it being heard by many people as an attack, b) without people having to worry that other people will see it as an attack, even if they don't." +> +> It seems like you see this something as _"there's a precious thing that might be destroyed"_ and I see it as _"a precious thing does not exist and must be created, and the circumstances in which it can exist are fragile."_ It might have existed in the very early days of LessWrong. But the landscape now is very different than it was then. With billions of dollars available and at stake, what worked then can't be the same thing as what works now. + +(!!)[^what-works-now] + +[^what-works-now]: Ray qualifies this in the next paragraph: + + > [in public. In private things are much easier. It's _also_ the case that private channels enable collusion—that was an update [I]'ve made over the course of the conversation. ] + + Even with the qualifier, I still think this deserves a "(!!)". + +Jessica pointed this out as a step towards discussing the real problem (Subject: "progress towards discussing the real thing??"). She elaborated in the secret thread. 
Now that the "EA" scene was adjacent to real-world money and power, people were incentivized to protect their reputations (and beliefs related to their reputations) in anti-epistemic ways, in a way that they wouldn't if the scene was still just a philosophy club, catalyzing a shift of norms from "that which can be destroyed by the truth, should be" towards protecting feelings—where "protecting feelings" was actually about protecting power. The fact that the scene was allocating billions of dollars made it strictly _more_ important for public discussions to reach the truth, compared to philosophy club—but it also increased the likelihood of obfuscatory action, which philosophy-club norms (like "assume good faith") didn't account for. We might need to extend philosophy-club norms to take into account the possibility of adversarial action: there's a reason that courts of law don't assume good faith. [TODO: "progress towards discussing the real thing" * Jessica acks Ray's point of "why are you using court language if you don't intend to blame/punish" @@ -245,6 +269,8 @@ On a phone call, Michael made an analogy between EA and Catholicism. The Pope wa * Ben thinks SJW blame is obviously good ] +[TODO: secret thread with Ruby; "uh, guys??" to Steven and Anna; people say "Yes, of course criticism and truthseeking is important; I just think that tact is important, too," only to go on and dismiss any _particular_ criticism as insufficiently tactful.] + [TODO: epistemic defense meeting; * I ended up crying at one point and left the room for while * Jessica's summary: "Zack was a helpful emotionally expressive and articulate victim. It seemed like there was consensus that "yeah, it would be better if people like Zack could be warned somehow that LW isn't doing the general sanity-maximization thing anymore"."