From 1559ce0ac95418e1dc1ddc0734f44cc7e8a61109 Mon Sep 17 00:00:00 2001
From: "M. Taylor Saotome-Westlake"
Date: Sat, 25 Mar 2023 23:17:41 -0700
Subject: [PATCH] memoir: "Causal vs. Social Reality" scuffle

---
 .../if-clarity-seems-like-death-to-them.md | 45 ++++++++++++++-----
 1 file changed, 33 insertions(+), 12 deletions(-)

diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md
index c4a05db..74e04bb 100644
--- a/content/drafts/if-clarity-seems-like-death-to-them.md
+++ b/content/drafts/if-clarity-seems-like-death-to-them.md
@@ -165,33 +165,54 @@ In an email (Subject: "LessWrong.com is dead to me"), Jessica identified the thr
 Posting on _Less Wrong_ made sense as harm-reduction, but the only way to get people to stick up for truth would be to convert them to _a whole new worldview_, which would require a lot of in-person discussions. She brought up the idea of starting a new forum to replace _Less Wrong_.

-Ben said that trying to discuss with the _Less Wrong_ mod team would be a good intermediate step, after we clarified to ourselves what was going on; it might be "good practice in the same way that the Eliezer initiative was good practice." He was less optimistic about harm-reduction; participating on the site was implicitly endorsing it by submitting the rule of the karma and curation systems.
+Ben said that trying to discuss with the _Less Wrong_ mod team would be a good intermediate step, after we clarified to ourselves what was going on; it might be "good practice in the same way that the Eliezer initiative was good practice." The premise should be, "If this is within the Overton window for _Less Wrong_ moderators, there's a serious confusion on the conditions required for discourse", rather than scapegoating individuals. He was less optimistic about harm-reduction; participating on the site was implicitly endorsing it by submitting to the rule of the karma and curation systems.

 Secret posse member expressed sadness about how the discussion on "The Incentives" demonstrated that the community he loved—including dear friends—was in a very bad way. Michael (in a separate private discussion) had said he was glad to hear about the belief-update. Secret posse member said that Michael saying that also made them sad, because it seemed discordant to be happy about sad news. Michael wrote (in the thread):

 > I['m] sorry it made you sad. From my perspective, the question is no[t] "can we still be friends with such people", but "how can we still be friends with such people" and I am pretty certain that understanding their perspective i[s] an important part of the answer. If clarity seems like death to them and like life to us, and we don't know this, IMHO that's an unpromising basis for friendship.

-[TODO—
- * Jessica: scortched-earth campaign should mostly be in meatspace social reality
- * my comment on emotive conjugation (https://www.lesswrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare#GaoyhEbzPJvv6sfZX)
+------
+
+I got into a scuffle with Ruby (someone who had newly joined the _Less Wrong_ mod team) over his post ["Causal Reality _vs._ Social Reality"](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality). One section of the post asks why people aren't clamoring in the streets for the end of sickness and death, and gives the answer that it's because no one else is; people live in a social reality that accepts death as part of the natural order, even though life extension seems like it should be physically possible in causal reality.
+
+I didn't think this was a good example. "Clamoring in the streets" (even if you interpreted it as a metonym for other forms of mass political action) seemed like the kind of thing that would be recommended by social-reality thinking, rather than causal-reality thinking. How, causally, would the action of clamoring in the streets lead to the outcome of the end of sickness and death? I would expect means–end reasoning about causal reality to instead recommend things like "working on or funding biomedical research".
+
+Ruby [complained that](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=7b2pWiCL33cqhTabg) my tone was too combative, and asked for more charity and collaborative truth-seeking[^collaborative-truth-seeking] in any future comments.
+
+[^collaborative-truth-seeking]: [No one ever seems to be able to explain to me what this phrase means.](https://www.lesswrong.com/posts/uvqd3YiBcrPxXzxQM/what-does-the-word-collaborative-mean-in-the-phrase)
+
+(My previous interaction with Ruby had been my challenge to "... Not Man for the Categories" appearing on the _Less Wrong_ FAQ. Maybe he couldn't let me "win" again so quickly?)
+
+I emailed the coordination group about it, insofar as gauging the psychology of the mod team was relevant to upcoming [Voice _vs._ Exit](https://en.wikipedia.org/wiki/Exit,_Voice,_and_Loyalty) choices:
+
+> he seems to be conflating transhumanist optimism with "causal reality", and then tone-policing me when I try to model good behavior of what means-end reasoning about causal reality actually looks like. This ... seems pretty cultish to me?? Like, it's fine and expected for this grade of confusion to be on the website, but it's more worrisome when it's coming from the mod team.[^rot-13]
+
+[^rot-13]: This part of the email was actually [rot-13'd](https://rot13.com) to let people write up their independent component without being contaminated by me; I reproduce the plaintext here.
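+
+    (Rot-13 rotates each letter thirteen places along the alphabet, which makes it its own inverse: the same operation scrambles and unscrambles. A minimal sketch of the transform, using Python's standard `codecs` module; the snippet is purely illustrative, not something anyone in the thread actually ran:)
+
+    ```python
+    import codecs
+
+    def rot13(text: str) -> str:
+        """Rotate each ASCII letter 13 places; non-letters pass through unchanged."""
+        return codecs.encode(text, "rot13")
+
+    # Applying the transform twice recovers the plaintext.
+    message = "it's fine and expected for this grade of confusion to be on the website"
+    scrambled = rot13(message)  # "vg'f svar naq rkcrpgrq sbe guvf tenqr bs pbashfvba gb or ba gur jrofvgr"
+    assert rot13(scrambled) == message
+    ```
+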
-> I'm also not sure if I'm sufficiently clued in to what Ben and Jessica are modeling as Blight, a coherent problem, as opposed to two or six individual incidents that seem really egregious in a vaguely similar way that seems like it would have been less likely in 2009??
+The meta-discussion on _Less Wrong_ started to get heated. Ruby claimed:
- * Vassar: "Literally nothing Ben is doing is as aggressive as the basic 101 pitch for EA."
- * Ben: we should be creating clarity about "position X is not a strawman within the group", rather than trying to scapegoat individuals
- * my scuffle with Ruby on "Causal vs. Social Reality" (my previous interaction with Ruby had been on the LW FAQ; maybe he couldn't let me "win" again so quickly?)
- * it gets worse: https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality#NbrPdyBFPi4hj5zQW
- * Ben's comment: "Wow, he's really overtly arguing that people should lie to him to protect his feelings."
+
+> Even if you wish to express that someone is wrong, I think this is done more effectively if one simultaneously continues to implicitly express "I think there is still some prior that you are correct and I [am] curious to hear your thoughts", or failing that "You are very clearly wrong here yet I still respect you as a thinker who is worth my time to discourse with." [...] There's an icky thing here[:] I feel like for there to be productive and healthy discussion you have to act as though at least one of the above statements is true, even if it isn't.
+
+"Wow, he's really overtly arguing that people should lie to him to protect his feelings," Ben commented via email.
+
 [TODO—
  * Jessica: "tone arguments are always about privileged people protecting their feelings, and are thus in bad faith. Therefore, engaging with a tone argument as if it's in good faith is a fool's game, like playing chess with a pigeon. Either don't engage, or seek to embarrass them intentionally."
  * there's no point in being mad at MOPs
  * me (1 Jul): I'm a _little bit_ mad, because I specialize in cognitive and discourse strategies that are _extremely susceptible_ to being trolled like this
- * me to "Wilhelm" 1 Jul: "I'd rather not get into fights on LW, but at least I'm 2-0-1"
- * "collaborative truth seeking" but (as Michael pointed out) politeness looks nothing like Aumann agreement
+ * I wouldn't be such a hardass if not for the prompt
+
+(I remarked to "Wilhelm": I'd rather not get into fights on _Less Wrong_, but at least I'm 2–0–1.)
+
  * 2 Jul: Jessica is surprised by how well "Self-consciousness wants to make everything about itself" worked; theory about people not wanting to be held to standards that others aren't being held to
  * Michael: Jessica's example made it clear she was on the side of social justice
  * secret posse member: level of social-justice talk makes me not want to interact with this post in any way
 ]
+
+------
+
 On 4 July 2019, Scott Alexander published ["Some Clarifications on Rationalist Blogging"](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/), disclaiming any authority as a "rationalist" leader. ("I don't want to claim this blog is doing any kind of special 'rationality' work beyond showing people interesting problems [...] Insofar as [_Slate Star Codex_] makes any pretensions to being 'rationalist', it's a rationalist picnic and not a rationalist monastery.") I assumed this was inspired by Ben's request back in March that Scott "alter the beacon" so as to not confuse people about what the current-year community was. I appreciated it.

 Also in early July 2019, Jessica published ["The AI Timelines Scam"](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), arguing that the recent popularity of "short" (_e.g._, 2030) AI timelines was better explained by political factors, rather than any technical arguments: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own project.
--
2.17.1