+------
+
+I got into a scuffle with Ruby (someone who had newly joined the _Less Wrong_ mod team) on his post on ["Causal Reality _vs_. Social Reality"](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality). One section of the post asks why people aren't "clamoring in the streets" for the end of sickness and death, and gives the answer that it's because no one else is; people live in a social reality that accepts death as part of the natural order, even though life extension seems like it should be physically possible in causal reality.
+
+I didn't think this was a good example. "Clamoring in the streets" (even if you interpreted it as a metonym for other forms of mass political action) seemed like the kind of thing that would be recommended by social-reality thinking, rather than causal-reality thinking. How, causally, would the action of clamoring in the streets lead to the outcome of the end of sickness and death? I would expect means–end reasoning about causal reality to instead recommend things like working on or funding biomedical research.
+
+Ruby [complained that](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=7b2pWiCL33cqhTabg) my tone was too combative, and asked for more charity and collaborative truth-seeking[^collaborative-truth-seeking] in any future comments.
+
+[^collaborative-truth-seeking]: [No one ever seems to be able to explain to me what this phrase means.](https://www.lesswrong.com/posts/uvqd3YiBcrPxXzxQM/what-does-the-word-collaborative-mean-in-the-phrase)
+
+(My previous interaction with Ruby had been my challenge to "... Not Man for the Categories" appearing on the _Less Wrong_ FAQ. Maybe he couldn't let me "win" again so quickly?)
+
+I emailed the coordination group about it, on the grounds that gauging the psychology of the mod team was relevant to upcoming [Voice _vs._ Exit](https://en.wikipedia.org/wiki/Exit,_Voice,_and_Loyalty) choices:
+
+> he seems to be conflating transhumanist optimism with "causal reality", and then tone-policing me when I try to model good behavior of what means-end reasoning about causal reality actually looks like. This ... seems pretty cultish to me?? Like, it's fine and expected for this grade of confusion to be on the website, but it's more worrisome when it's coming from the mod team.[^rot-13]
+
+[^rot-13]: This part of the email was actually [rot-13'd](https://rot13.com) to let people write up their independent component without being contaminated by me; I reproduce the plaintext here.
+
+The meta-discussion on _Less Wrong_ started to get heated. Ruby claimed:
+
+> [I]f the goal is everyone being less wrong, I think some means of communicating are going to be more effective than others. I, at least, am a social monkey. If I am bluntly told I am wrong (even if I agree, even in private—but especially in public), I will feel attacked (if only at the S1 level), threatened (socially), and become defensive. It makes it hard to update and it makes it easy to dislike the one who called me out. [...]
+>
+> [...]
+>
+> Even if you wish to express that someone is wrong, I think this is done more effectively if one simultaneously continues to implicitly express "I think there is still some prior that you are correct and I curious to hear your thoughts", or failing that "You are very clearly wrong here yet I still respect you as a thinker who is worth my time to discourse with." [...] There's an icky thing here I feel like for there to be productive and healthy discussion you have to act as though at least one of the above statements is true, even if it isn't.
+
+"Wow, he's really overtly arguing that people should lie to him to protect his feelings," Ben commented via email.
+
+(I would later complain to Anna (Subject: "uh, guys???", 20 July 2019) that Ruby's profile said he was one of two people to have volunteered for CfAR on three continents. If this was the level of performance we could expect from _veteran_ CfAR participants, what was CfAR _for_?)
+
+[I replied to Ruby that](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=v3zh3KhKNTdMXWkJH) you could just directly respond to your interlocutor's arguments. Whether or not you respect them as a thinker is _off-topic_. "You said X, but this is wrong because of Y" isn't a personal attack!
+
+Jessica said that there's no point in getting mad at [MOPs](http://benjaminrosshoffman.com/construction-beacons/). I said I was a _little_ bit mad, because I specialized in discourse strategies that were susceptible to getting trolled like this. I thought it was ironic that this happened on a post that was _explicitly_ about causal _vs._ social reality; it's possible that I wouldn't be inclined to be such a hardass about "whether or not I respect you is off-topic" if it weren't for that prompt.
+
+Jessica ended up writing a post, ["Self-Consciousness Wants Everything to Be About Itself"](https://www.lesswrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself), arguing that tone arguments are mainly about people silencing discussion of actual problems in order to protect their feelings, using as a central example a case study of a college official crying and saying that she "felt attacked" in response to complaints about her office being insufficiently supportive of a racial community.
+
+Jessica was surprised by how well it worked, judging by [Ruby mentioning silencing in an apology to me](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=wfzxj4GGRtZGMG9ni) (plausibly influenced by Jessica's post), and [an exchange between Raemon (also a mod) and Ruby that she thought was "surprisingly okay"](https://www.lesswrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself?commentId=EW3Mom9qfoggfBicf).
+
+From this, Jessica derived the moral that when people are doing something that seems obviously terrible and in bad faith, it could help to publicly explain why the abstract thing is bad, without accusing anyone. This made sense because people didn't want to be held to standards that other people weren't being held to: a call-out directed at oneself personally could be selective enforcement, but a call-out of the abstract pattern invited changing one's behavior if the new equilibrium looked better.
+
+Michael said that part of the reason this worked was because it represented a clear threat to scapegoat, while also _not_ scapegoating, and not surrendering the option to do so later; it was significant that Jessica's choice of example positioned her on the side of the powerful social-justice coalition.
+
+------
+
+On 4 July 2019, Scott Alexander published ["Some Clarifications on Rationalist Blogging"](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/), disclaiming any authority as a "rationalist" leader. ("I don't want to claim this blog is doing any kind of special 'rationality' work beyond showing people interesting problems [...] Insofar as [_Slate Star Codex_] makes any pretensions to being 'rationalist', it's a rationalist picnic and not a rationalist monastery.") I assumed this was inspired by [Ben's request back in March](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#alter-the-beacon) that Scott "alter the beacon" so as to not confuse people about what the current-year community was. I appreciated it.
+
+-------
+
+Also in early July 2019, Jessica published ["The AI Timelines Scam"](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), arguing that the recent popularity of "short" (_e.g._, 2030) AI timelines was better explained by political factors, rather than any technical arguments: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own project.
+
+(Remember, this was 2019. After seeing what GPT-3/PaLM/DALL-E/_&c._ could do during the ["long May 2020"](https://twitter.com/MichaelTrazzi/status/1635871679133130752), it's now looking to me like the short-timelines people had better intuitions than Jessica gave them credit for.)
+
+I still sympathized with the "mainstream" pushback against using "scam"/"fraud"/"lie"/_&c._ language to include motivated [elephant-in-the-brain](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain)-like distortions. I conceded that this was a _boring_ semantic argument, but I feared that until we invented better linguistic technology, the _boring_ semantic argument was going to _continue_ sucking up discussion bandwidth with others when it didn't need to.
+
+"Am I being too tone-policey here?" I asked the coordination group. "Is it better if I explicitly disclaim, 'This is marketing advice; I'm not claiming to be making a substantive argument'?" (Subject: "Re: reception of 'The AI Timelines Scam' is better than expected!")
+
+Ben replied, "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you? Investigations of financial fraud don't inquire as to the conscious motives of the perp."
+
+I replied that the concept of [_mens rea_](https://www.law.cornell.edu/wex/mens_rea) did seem necessary for maintaining good incentives, at least in some contexts. The law needs to distinguish accidentally hitting a pedestrian with one's car ("manslaughter") from premeditated killing ("first-degree murder"), because traffic accidents are significantly less disincentivizable than offing one's enemies. (Anyone who drives at all is probably going to have unlucky analogues in nearby possible worlds who are guilty of vehicular manslaughter.) The manslaughter case was simpler than misinformation-that-moves-resources,[^manslaughter-disanalogy] and it might not be _easy_ for the court to determine "intent", but I didn't see what would reverse the extremely weak principle that intent matters, sometimes.
+
+[^manslaughter-disanalogy]: For one extremely important disanalogy, perps don't _gain_ from committing manslaughter.
+
+Ben replied that what mattered in the determination of manslaughter _vs._ murder was whether there was long-horizon optimization power towards the outcome of someone's death; it wasn't about what sentiments the killer rehearsed in their working memory.
+
+On a phone call, Michael made an analogy between EA and Catholicism. The Pope was fraudulent, because the legitimacy of the Pope's position (and his claims to power and resources) rested on the pretense that he had a direct relationship with God, which wasn't true, and the Pope knew (he _had_ to know, at some level) that it wasn't true. (I agreed that this usage of _fraud_ made sense to me.) In Michael's view, Ben's charges against GiveWell were similar: GiveWell's legitimacy rested on the pretense that they were making decisions based on numbers, and they [had to know at some level](https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/) that they weren't doing that.
+
+------
+
+Ruby wrote a document about ways in which one's speech could harm people, which was discussed in the comments of a draft _Less Wrong_ post by some of our posse members and some of the _Less Wrong_ mods. (The draft was hidden, but the API apparently didn't filter out comments on hidden posts, and the thread was visible on the third-party [GreaterWrong](https://www.greaterwrong.com/) site; I [filed a bug](https://github.com/LessWrong2/Lesswrong2/issues/2161).)
+
+Ben wrote:
+
+> What I see as under threat is the ability to say in a way that's actually heard, not only that opinion X is false, but that the process generating opinion X is untrustworthy, and perhaps actively optimizing in an objectionable direction. Frequently, attempts to say this are construed _primarily_ as moves to attack some person or institution, pushing them into the outgroup. Frequently, people suggest to me an "equivalent" wording with a softer tone, which in fact omits important substantive criticisms I mean to make, while claiming to understand what's at issue.
+
+Ray Arnold (another _Less Wrong_ mod) replied:
+
+> My core claim is: "right now, this isn't possible, without a) it being heard by many people as an attack, b) without people having to worry that other people will see it as an attack, even if they don't."
+>
+> It seems like you see this something as _"there's a precious thing that might be destroyed"_ and I see it as _"a precious thing does not exist and must be created, and the circumstances in which it can exist are fragile."_ It might have existed in the very early days of LessWrong. But the landscape now is very different than it was then. With billions of dollars available and at stake, what worked then can't be the same thing as what works now.
+
+(!!)[^what-works-now]
+
+[^what-works-now]: Arnold qualifies this in the next paragraph:
+
+ > [in public. In private things are much easier. It's _also_ the case that private channels enable collusion—that was an update [I]'ve made over the course of the conversation. ]
+
+ Even with the qualifier, I still think this deserves a "(!!)".
+
+Jessica pointed this out as a step towards discussing the real problem (Subject: "progress towards discussing the real thing??"). She elaborated in the secret thread: now that the "EA" scene was adjacent to real-world money and power, people were incentivized to protect their reputations (and beliefs related to their reputations) in anti-epistemic ways, in a way that they wouldn't if the scene was still just a philosophy club, catalyzing a shift of norms from "that which can be destroyed by the truth, should be" towards protecting feelings—where "protecting feelings" was actually about protecting power. The fact that the scene was allocating billions of dollars made it strictly _more_ important for public discussions to reach the truth, compared to philosophy club—but it also increased the likelihood of obfuscatory action, which philosophy-club norms (like "assume good faith") didn't account for. We might need to extend philosophy-club norms to take into account the possibility of adversarial action: there's a reason that courts of law don't assume good faith. We didn't want to disproportionately _punish_ people for getting caught up in obfuscatory patterns—that would just increase the incentive to obfuscate—but we did need some way to reveal what was going on.
+
+In email, Jessica acknowledged that Ray had a point that it was confusing to use court-inspired language if we didn't intend to blame and punish people. Michael said that court language was our way to communicate, "You don't have the option of non-engagement with the complaints that are being made." (Courts can _summon_ people; you can't ignore a court summons the way you can ignore ordinary critics.)
+
+Michael said that we should also develop skill in using social-justicey blame language, as was used against us, harder, while we were still acting under mistake-theoretic assumptions. "Riley" said that this was a terrifying you-have-become-the-abyss suggestion; Ben thought it was obviously a good idea.
+
+I was pretty horrified by the extent to which _Less Wrong_ moderators (!!) seemed to be explicitly defending "protect feelings" norms. Previously, I had mostly been seeing the present struggle through the lens of my idiosyncratic Something to Protect, as a simple matter of Bay Area political correctness; I was happy to have Michael/Ben/Jessica as allies, but I wasn't _seeing_ the Blight as a unified problem. Now ... I was seeing _something_.
+
+An in-person meeting was arranged on 23 July 2019 at the _Less Wrong_ office, with Ben, Jessica, me, and most of the _Less Wrong_ team (Ray, Ruby, Oliver Habryka, Vaniver, Jim Babcock). I don't have notes and don't really remember what was discussed in enough detail to recount it.[^memory] I ended up crying at one point and left the room for a while.
+
+[^memory]: An advantage of the important parts of my life taking place on the internet is that I have _logs_ of the important things; I'm only able to tell this Whole Dumb Story with as much fidelity as I am, because for most of it, I can go back and read the emails and chatlogs from the time. Now that [audio transcription has fallen to AI](https://openai.com/blog/whisper/), should I be recording more real-life conversations?? In the case of this meeting, supposedly one of the _Less Wrong_ guys was recording, but no one had it when I asked in October 2022.
+
+The next day, I asked Ben and Jessica for their takeaways via email (Subject: "peace talks outcome?"). Jessica said that I was a "helpful emotionally expressive and articulate victim", and that there seemed to be a consensus that it would be better if people like me could be warned somehow that _Less Wrong_ wasn't doing the general sanity-maximization thing anymore. (Because community leaders were willing to sacrifice, for example, ability to discuss non-AI heresies, in order to focus on sanity about AI in particular while maintaining enough mainstream acceptability and power.)
+
+I said that, from my own selfish perspective, the main outcome was finally shattering my "rationalist" social identity. I needed to exhaust all possible avenues of appeal before it became real to me. The morning after was the first for which "rationalists ... them" felt more natural than "rationalists ... us".
+
+-------
+
+Michael's reputation in "the community", already not what it once was, continued to be debased even further.
+
+The local community center, the Berkeley REACH,[^reach-acronym-expansion] was conducting an investigation as to whether to exclude Michael (which was mostly moot, as he didn't live in the Bay Area anyway). When I heard that the subcommittee conducting the investigation was "very close to releasing a statement", I wrote to them:
+
+[^reach-acronym-expansion]: Rationality and Effective Altruism Community Hub
+
+> I've been collaborating with Michael a lot recently, and I'm happy to contribute whatever information I can to make the report more accurate. What are the charges?
+
+They replied:
+
+> To be clear, we are not a court of law addressing specific "charges." We're a subcommittee of the Berkeley REACH Panel tasked with making decisions that help keep the space and the community safe.
+
+I replied:
+
+> Allow me to rephrase my question about charges. What are the reasons that the safety of the space and the community require you to write a report about Michael? To be clear, a community that excludes Michael on inadequate evidence is one where _I_ feel unsafe.
+
+We arranged a call, during which I angrily testified that Michael was no threat to the safety of the space and the community—which would have been a bad idea if it were the cops, but in this context, I figured my political advocacy couldn't hurt.
+
+Concurrently, I got into an argument with Kelsey Piper about Michael, after she had written on Discord that her "impression of _Vassar_'s threatening schism is that it's fundamentally about Vassar threatening to stir shit up until people stop socially excluding him for his bad behavior". I didn't think that was what the schism was about (Subject: "Michael Vassar and the theory of optimal gossip").
+
+In the course of litigating Michael's motivations (the details of which are not interesting enough to summarize), Kelsey mentioned that she thought Michael had done immense harm to me: that my models of the world and ability to reason were worse than they were a year ago. I thanked her for the concern, and asked if she could be more specific.
+
+She said she was referring to my ability to predict consensus and what other people believe. I expected arguments to be convincing to other people that those people found not just unconvincing, but so obviously unconvincing that it was confusing that I'd bothered raising them. I believed things to be in obvious violation of widespread agreement when everyone else thought they weren't. My shocked indignation at other people's behavior indicated a poor model of social reality.
+
+I considered this an insightful observation about a way in which I'm socially retarded.
+
+I had had [similar](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/) [problems](http://zackmdavis.net/blog/2012/07/trying-to-buy-a-lamp/) [with](http://zackmdavis.net/blog/2012/12/draft-of-a-letter-to-a-former-teacher-which-i-did-not-send-because-doing-so-would-be-a-bad-idea/) [school](http://zackmdavis.net/blog/2013/03/strategy-overhaul/). We're told that the purpose of school is education (to the extent that most people think of _school_ and _education_ as synonyms), but the consensus behavior is "sit in lectures and trade assignments for grades." Faced with what I saw as a contradiction between the consensus narrative and the consensus behavior, I would assume that the narrative was the "correct" version, and so I spent a lot of time trying to start conversations about math with everyone and then getting outraged and indignant when they'd say, "What class is this for?" Math isn't for classes; it's the other way around, right?
+
+Empirically, not right! But I had to resolve the contradiction between narrative and reality somehow, and if my choices were "People are [mistakenly](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) failing to live up to the narrative" and "[Everybody knows](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) the narrative is a lie; it would be crazy to expect people to live up to it", the former had been more appealing.