Anyway, I wasn't the only one whose life was being disrupted by political drama in early 2019. On 22 February, Scott Alexander [posted that the /r/slatestarcodex Culture War Thread was being moved](https://slatestarcodex.com/2019/02/22/rip-culture-war-thread/) to a new non–_Slate Star Codex_–branded subreddit in the hopes that this would help curb some of the harassment he had been receiving. According to poll data and Alexander's personal impressions, the Culture War Thread featured a variety of ideologically diverse voices, but had nevertheless acquired a reputation as a hive of right-wing scum and villainy.

[Yudkowsky Tweeted](https://twitter.com/ESYudkowsky/status/1099134795131478017):

> Your annual reminder that Slate Star Codex is not and never was alt-right, every real stat shows as much, and the primary promoters of this lie are sociopaths who get off on torturing incredibly nice targets like Scott A.

I found Yudkowsky's use of the word "lie" here interesting, given his earlier eagerness to police the use of the word "lie" by gender-identity skeptics. With the support of my posse, I wrote to him again, a third time (Subject: "on defending against 'alt-right' categorization").

Imagine if someone were to reply: "Using language in a way _you_ dislike, openly and explicitly and with public focus on the language and its meaning, is not lying. The proposition you claim false (explicit advocacy of a white ethnostate?) is not what the speech is meant to convey—and this is known to everyone involved, it is not a secret. You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. Now, maybe as a matter of policy, you want to make a case for language like 'alt-right' being used a certain way. Well, that's a separate debate then. But you're not making a stand for Truth in doing so, and your opponents aren't tricking anyone or trying to."

How would Yudkowsky react, if someone said that? _My model_ of Sequences-era 2009!Yudkowsky would say, "This is an incredibly intellectually dishonest attempt to [sneak in connotations](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations) by performing a categorization and trying to avoid the burden of having to justify it with an [appeal-to-arbitrariness conversation-halter](https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters); go read ['A Human's Guide to Words.'](https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb)"

But I had no idea what 2019!Yudkowsky would say. If the moral of the "hill of meaning in defense of validity" thread had been that the word "lie" was reserved for _per se_ direct falsehoods, well, what direct falsehood was being asserted by Scott's detractors? I didn't think anyone was claiming that, say, Scott _identifies_ as alt-right (not even privately), any more than anyone was claiming that trans women have two X chromosomes.

Commenters on /r/SneerClub had been pretty explicit in [their](https://old.reddit.com/r/SneerClub/comments/atgejh/rssc_holds_a_funeral_for_the_defunct_culture_war/eh0xlgx/) [criticism](https://old.reddit.com/r/SneerClub/comments/atgejh/rssc_holds_a_funeral_for_the_defunct_culture_war/eh3jrth/) that the Culture War thread harbored racists (_&c._) and possibly that Scott himself was a secret racist, _with respect to_ a definition of racism that included the belief that there are genetically-mediated population differences in the distribution of socially-relevant traits and that this probably has decision-relevant consequences that should be discussable somewhere.

And this was just _correct_. For example, Alexander's ["The Atomic Bomb Considered As Hungarian High School Science Fair Project"](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/) favorably cites Cochran _et al._'s genetic theory of Ashkenazi achievement as "really compelling." Scott was almost certainly "guilty" of the category-membership that the speech against him was meant to convey—it's just that Sneer Club got to choose the category. The correct response to the existence of a machine-learning classifier that returns positive on both Scott Alexander and Richard Spencer is not that the classifier is "lying" (what would that even mean?), but that the classifier is not very useful for understanding Scott Alexander's effects on the world.

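To make the classifier metaphor concrete, here's a minimal sketch (the binary features are entirely invented for illustration; nothing here is real data or a real model): a coarse decision rule can return positive on two people whom a rule conditioned on the decision-relevant feature distinguishes, and neither rule is "lying"—they just carve up the space at different resolutions.

```python
# Hypothetical toy feature vectors, invented for illustration:
# (discusses_group_differences_research, advocates_ethnostate)
FEATURES = {
    "Scott Alexander": (1, 0),
    "Richard Spencer": (1, 1),
}

def coarse_classifier(person):
    # Fires on the first feature alone.
    return FEATURES[person][0] == 1

def fine_classifier(person):
    # Conditions on the second, decision-relevant feature.
    return FEATURES[person][1] == 1

# The coarse classifier returns positive on both people;
# the fine classifier separates them.
```

The question to ask of such a classifier isn't whether it's "true" but whether the distinctions it draws are the ones your decisions depend on.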
Of course, Scott was great, and we should defend him from the bastards trying to ruin his reputation, and it's plausible that the most politically convenient way to do that was to pound the table and call them lying sociopaths rather than engaging with the substance of their claims, much as how someone being tried under an unjust law might dishonestly plead "Not guilty" to save their own skin rather than tell the whole truth and hope for jury nullification.

But, I argued, political convenience came at a dire cost to [our common interest](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes). There was a proverb Yudkowsky [had once failed to Google](https://www.lesswrong.com/posts/K2c3dkKErsqFd28Dh/prices-or-bindings), which ran something like, "Once someone is known to be a liar, you might as well listen to the whistling of the wind."

Similarly, once someone is known to [vary](https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) the epistemic standards of their public statements for political convenience—if they say categorizations can be lies when that happens to help their friends, but seemingly deny the possibility of categorizations being lies when that happens to make them look good politically ...

Well, you're still _somewhat_ better off listening to them than the whistling of the wind, because the wind in various possible worlds is presumably uncorrelated with most of the things you want to know about, whereas [clever arguers](https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence) who [don't tell explicit lies](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/) are constrained in how much they can mislead you. But it seems plausible that you might as well listen to any other arbitrary smart person with a bluecheck and 20K followers. I know you're very busy; I know your work's important—but it might be a useful exercise for Yudkowsky to think of what he would _actually say_ if someone with social power _actually did this to him_ when he was trying to use language to reason about Something he had to Protect.

(Note, my claim here is _not_ that "Pronouns aren't lies" and "Scott Alexander is not a racist" are similarly misinformative. Rather, I'm saying that, as a matter of [local validity](https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization), whether "You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning" makes sense _as a response to_ "X isn't a Y" shouldn't depend on the specific values of X and Y. Yudkowsky's behavior the other month made it look like he thought that "You're not standing in defense of truth if ..." _was_ a valid response when, say, X = "Caitlyn Jenner" and Y = "woman." I was saying that, whether or not it's a valid response, we should, as a matter of local validity, apply the _same_ standard when X = "Scott Alexander" and Y = "racist.")

Anyway, without disclosing any _specific content_ from private conversations with Yudkowsky that may or may not have happened, I think I _am_ allowed to say that our posse did not get the kind of engagement from Yudkowsky that we were hoping for. (That is, I'm Glomarizing over whether Yudkowsky just didn't reply, or whether he did reply and our posse was not satisfied with the response.)

Michael said that it seemed important that, if we thought Yudkowsky wasn't interested, we should have common knowledge among ourselves that we consider him to be choosing to be a cult leader.

Meanwhile, my email thread with Scott got started back up again, although I wasn't expecting anything to come out of it. I expressed some regret that all the times I had emailed him over the past couple years had been when I was upset about something (like psych hospitals, or—something else) and wanted something from him, which was bad, because it was treating him as a means rather than an end—and then, despite that regret, continued prosecuting the argument.

One of Alexander's [most popular _Less Wrong_ posts ever had been about the noncentral fallacy, which Alexander called "the worst argument in the world"](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world): for example, those who crow that abortion is _murder_ (because murder is the killing of a human being), or that Martin Luther King, Jr. was a _criminal_ (because he defied the segregation laws of the South), are engaging in a dishonest rhetorical maneuver in which they're trying to trick their audience into attributing attributes of the typical "murder" or "criminal" onto what are very noncentral members of those categories.

_Even if_ you're opposed to abortion, or have negative views about the historical legacy of Dr. King, this isn't the right way to argue. If you call Janie a _murderer_, that causes me to form a whole bunch of implicit probabilistic expectations—about Janie's moral character, about the suffering of the victim whose hopes and dreams were cut short, about Janie's relationship with the law, _&c._—most of which get violated when you subsequently reveal that the murder victim was a four-week-old fetus.

Thus, we see that Alexander's own "The Worst Argument in the World" is really complaining about the _same_ category-gerrymandering move that his "... Not Man for the Categories" comes out in favor of. We would not let someone get away with declaring, "I ought to accept an unexpected abortion or two deep inside the conceptual boundaries of what would normally not be considered murder if it'll save someone's life." Maybe abortion _is_ wrong and relevantly similar to the central sense of "murder", but you need to make that case _on the merits_, not by linguistic fiat.

... Scott still didn't get it. He said that he didn't see why he shouldn't accept one unit of categorizational awkwardness in exchange for sufficiently large utilitarian benefits. I started drafting a long reply—but then I remembered that in recent discussion with my posse about what we might have done wrong in our attempted outreach to Yudkowsky, the idea had come up that in-person meetings are better for updateful disagreement-resolution. Would Scott be up for meeting in person some weekend? Non-urgent. Ben would be willing to moderate, unless Scott wanted to suggest someone else, or no moderator.

... Scott didn't want to meet. At this point, I considered resorting to the tool of cheerful prices again, which I hadn't yet used against Scott—to say, "That's totally understandable! Would a financial incentive change your decision? For a two-hour meeting, I'd be happy to pay up to $4000 to you or your preferred charity. If you don't want the money, then sure, yes, let's table this. I hope you're having a good day." But that seemed sufficiently psychologically coercive and socially weird that I wasn't sure I wanted to go there. I emailed my posse asking what they thought—and then added that maybe they shouldn't reply until Friday, because it was Monday, and I really needed to focus on my dayjob that week.

This is the part where I began to ... overheat. I tried ("tried") to focus on my dayjob, but I was just _so angry_. Did Scott _really_ not understand the rationality-relevant distinction between "value-dependent categories as a result of only running your clustering algorithm on the subspace of the configuration space spanned by the variables that are relevant to your decisions" (as explained by the _dagim_/water-dwellers _vs._ fish example) and "value-dependent categories _in order to not make my friends sad_"? I thought I was pretty explicit about this? Was Scott _really_ that dumb?? Or was it that he was only verbal-smart, and this was the sort of thing that only makes sense if you've ever been good at linear algebra?? Did I need to write a post explaining just that one point in mathematical detail? (With executable code and a worked example with entropy calculations.)

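The one point really does admit a short executable sketch with an entropy calculation (toy data invented for illustration): "cluster" only on the decision-relevant variable, then measure how much the resulting category reduces your uncertainty about a correlated variable.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of an empirical distribution."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def conditional_entropy(ys, xs):
    """H(Y | X) for paired empirical samples."""
    n = len(xs)
    total = 0.0
    for x in set(xs):
        ys_given_x = [y for y, xi in zip(ys, xs) if xi == x]
        total += (len(ys_given_x) / n) * entropy(ys_given_x)
    return total

# Made-up creatures: (name, lives_in_water, has_gills).
# The dagim/"water-dweller" category clusters only on lives_in_water;
# the modern "fish" category also cares about has_gills.
creatures = [
    ("salmon", 1, 1), ("trout", 1, 1),
    ("whale", 1, 0), ("dolphin", 1, 0),
    ("cow", 0, 0), ("sparrow", 0, 0),
]
water = [w for _, w, _ in creatures]
gills = [g for _, _, g in creatures]

# Information gain: how much does knowing dagim-membership reduce
# our uncertainty about gills?  I(gills; water) = H(gills) - H(gills | water).
info_gain = entropy(gills) - conditional_entropy(gills, water)
```

Here the _dagim_ category (built on `lives_in_water` alone) still carries positive information about `has_gills`, just less than a category that also conditions on gills would: the value-dependence enters through _which subspace you cluster on_, not through wishing the resulting clusters were different.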
My dayjob boss made it clear that he was expecting me to have code for my current Jira tickets by noon the next day, so I resigned myself to stay at the office late to finish that.

But I was just in so much (psychological) pain. Or at least—as I noted in one of a series of emails to my posse that night—I felt motivated to type the sentence, "I'm in so much (psychological) pain." I'm never sure how to interpret my own self-reports, because even when I'm really emotionally trashed (crying, shaking, randomly yelling, _&c_.), I think I'm still noticeably _incentivizable_: if someone were to present a credible threat (like slapping me and telling me to snap out of it), then I would be able to calm down; there's some sort of game-theory algorithm in the brain that subjectively feels genuine distress (like crying or sending people too many hysterical emails) but only when it can predict that it will be either rewarded with sympathy or at least tolerated. (Kevin Simler: [tears are a discount on friendship](https://meltingasphalt.com/tears/).)

I [tweeted a Sequences quote](https://twitter.com/zackmdavis/status/1107874587822297089) to summarize how I felt (the mention of @ESYudkowsky being to attribute credit, I told myself; I figured Yudkowsky had enough followers that he probably wouldn't see a notification):

> "—and if you still have something to protect, so that you MUST keep going, and CANNOT resign and wisely acknowledge the limitations of rationality— [1/3]
>
> "—then you will be ready to start your journey[.] To take sole responsibility, to live without any trustworthy defenses, and to forge a higher Art than the one you were once taught. [2/3]
>
> "No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand." —@ESYudkowsky (https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science) [end/3]

Only it wasn't quite appropriate. The quote is about failure resulting in the need to invent new methods of rationality, better than the ones you were taught. But ... the methods I had been taught were great! I don't have a pressing need to improve on them! I just couldn't cope with everyone else having _forgotten!_

I did, eventually, get some dayjob work done that night, but I didn't finish the whole thing my manager wanted done by the next day, and at 4 _a.m._, I concluded that I needed sleep, the lack of which had historically been very dangerous for me (being the trigger for my [2013](http://zackmdavis.net/blog/2013/04/prodrome/) and [2017](/2017/Mar/fresh-princess/) psychotic breaks and subsequent psych imprisonments). We didn't want another bad outcome like that; we really didn't. There was a couch in the office, and probably another four hours until my coworkers started to arrive. The thing I needed to do was just lie down on the couch in the dark and have faith that sleep would come. Meeting my manager's deadline wasn't _that_ important. When people came in to the office, I could ask for help getting an Uber home, or help buying melatonin. The important thing was to be calm.

I sent an email explaining this to Scott and my posse and two other friends (Subject: "predictably bad ideas").

Lying down didn't work. So at 5:26 _a.m._, I sent an email to Scott cc my posse plus Anna about why I was so mad (both senses). I had a better draft sitting on my desktop at home, but since I was here and couldn't sleep, I might as well type this version (Subject: "five impulsive points, hastily written because I just can't even (was: Re: predictably bad ideas)"). Scott had been continuing to insist that it's OK to gerrymander category boundaries for trans people's mental health, but there were a few things I didn't understand. If creatively reinterpreting the meanings of words because the natural interpretation would make people sad is OK ... why doesn't that just generalize to an argument in favor of _outright lying_ when the truth would make people sad? The mind games seemed much crueler to me than a simple lie. Also, if "mental health benefits for trans people" matter so much, then why didn't _my_ mental health matter? Wasn't I trans, sort of? Getting shut down by appeal-to-utilitarianism (!?!?) when I was trying to use reason to make sense of the world was observably really bad for my sanity! Did that matter at all? Also, Scott had asked me if it wouldn't be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and I had rejected it because of gender stuff? But the _original reason_ it had ever seemed _remotely_ plausible that we would create Utopia forever wasn't "because we're us, the self-designated world-saving good guys", but because we were going to perfect an art of _systematically correct reasoning_. If we're not going to do systematically correct reasoning because that would make people sad, then that undermines the _reason_ that it was plausible that we would create Utopia forever; you can't just forfeit the mandate of Heaven like that and still expect to rule China. Also, Scott had proposed a super-Outside View of the culture war as an evolutionary process that produces memes optimized to trigger PTSD syndromes in people, and suggested that I think of _that_ as what was happening to me. But, depending on how much credence Scott put in social proof, mightn't the fact that I managed to round up this whole posse to help me repeatedly argue with (or harass) Yudkowsky shift his estimate of whether my concerns had some objective merit that other people could see, too? It could simultaneously be the case that I had the culture-war PTSD that he proposed, _and_ that my concerns had merit.

[TODO: Michael jumps in to help, I rebuff him, Michael says WTF and calls me, I take a train home, Alicorn visits

One of the other friends I had cc'd on some of the emails came to visit me with her young son—I mean, her son at the time.
]

(Incidentally, the code that I wrote intermittently between 11 _p.m._ and 4 _a.m._ was a horrible bug-prone mess, and the company has been paying for it ever since, every time someone needs to modify that function and finds it harder to make sense of than it would be if I had been less emotionally overwhelmed in March 2019 and written something sane instead.)

I think at some level, I wanted Scott to know how frustrated I was about his use of "mental health for trans people" as an Absolute Denial Macro. But then when Michael started advocating on my behalf, I started to minimize my claims because I had a generalized attitude of not wanting to sell myself as a victim. (Michael seemed to have a theory that people will only change their bad behavior when they see a victim who is being harmed.)

I supposed that, in Michael's worldview, aggression is more honest than passive-aggression. That seemed obviously true, but I was psychologically limited in how much aggression I was willing to deploy against my friends. (And particularly Yudkowsky, who I still hero-worshipped.) But clearly, the tension between "I don't want to do too much social aggression" and "losing the Category War within the rationalist community is _absolutely unacceptable_" was causing me to make wildly inconsistent decisions. (Emailing Scott at 4 _a.m._, and then calling Michael "aggressive" when he came to defend me was just crazy.)

Ben pointed out that [making oneself mentally ill in order to extract political concessions](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) only works if you have a lot of people doing it in a visibly coordinated way. And even if it did work, getting into a dysphoria contest with trans people didn't seem like it led anywhere good.

Was the answer just that I needed to accept that there wasn't such a thing in the world as a "rationalist community"? (Sarah had told me as much two years ago, at BABSCon, and I just hadn't made the corresponding mental adjustments.)

On the other hand, a possible reason to be attached to the "rationalist" brand name and social identity that wasn't just me being stupid was that _the way I talk_ had been trained really hard on this subculture for _ten years_. Most of my emails during this whole campaign had contained multiple Sequences or _Slate Star Codex_ links that I could just expect people to have read. I could spontaneously use [the phrase "Absolute Denial Macro"](https://www.lesswrong.com/posts/t2NN6JwMFaqANuLqH/the-strangest-thing-an-ai-could-tell-you) in conversation and expect to be understood. That's a massive "home field advantage." If I just gave up on the "rationalists" being a thing, and went out into the world to make friends with _Quillette_ readers or arbitrary University of Chicago graduates, then I would lose all that accumulated capital.