+_Even if_ you're opposed to abortion, or have negative views about the historical legacy of Dr. King, this isn't the right way to argue. If you call Janie a _murderer_, that causes me to form a whole bunch of implicit probabilistic expectations—about Janie's moral character, about the suffering of the victim whose hopes and dreams were cut short, about Janie's relationship with the law, _&c._—most of which get violated when you subsequently reveal that the murder victim was a four-week-old fetus.
+
+Thus, we see that Alexander's own "The Worst Argument in the World" is really complaining about the _same_ category-gerrymandering move that his "... Not Man for the Categories" comes out in favor of. We would not let someone get away with declaring, "I ought to accept an unexpected abortion or two deep inside the conceptual boundaries of what would normally not be considered murder if it'll save someone's life." Maybe abortion _is_ wrong, but you need to make that case _on the merits_, not by linguistic fiat.
+
+... Scott still didn't get it. He said that he didn't see why he shouldn't accept one unit of categorizational awkwardness in exchange for sufficiently large utilitarian benefits. I started drafting a long reply—but then I remembered that in recent discussion with my posse about what we might have done wrong in our attempted outreach to Yudkowsky, the idea had come up that in-person meetings are better for updateful disagreement-resolution. Would Scott be up for meeting in person some weekend? Non-urgent. Ben would be willing to moderate, unless Scott wanted to suggest someone else, or no moderator.
+
+... Scott didn't want to meet. At this point, I considered resorting to the tool of cheerful prices again, which I hadn't yet used against Scott—to say, "That's totally understandable! Would a financial incentive change your decision? For a two-hour meeting, I'd be happy to pay up to $4000 to you or your preferred charity. If you don't want the money, then sure, yes, let's table this. I hope you're having a good day." But that seemed sufficiently psychologically coercive and socially weird that I wasn't sure I wanted to go there. I emailed my posse asking what they thought—and then added that maybe they shouldn't reply until Friday, because it was Monday, and I really needed to focus on my dayjob that week.
+
+This is the part where I began to ... overheat. I tried ("tried") to focus on my dayjob, but I was just _so angry_. Did Scott _really_ not understand the rationality-relevant distinction between "value-dependent categories as a result of only running your clustering algorithm on the subspace of the configuration space spanned by the variables that are relevant to your decisions" (as explained by the _dagim_/water-dwellers _vs._ fish example) and "value-dependent categories _in order to not make my friends sad_"? I thought I was pretty explicit about this? Was Scott _really_ that dumb?? Or was it that he was only verbal-smart, and this was the sort of thing that only makes sense if you've ever been good at linear algebra?? Did I need to write a post explaining just that one point in mathematical detail? (With executable code and a worked example with entropy calculations.)
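+
+(Something like the following sketch, maybe. The animals and binary features here are made up for illustration, and this is my minimal reconstruction rather than anything from the actual discussion, but it shows the distinction in executable form: the same clustering algorithm draws different category boundaries depending on which subspace of the configuration space you run it on.)
+
+```python
+# Minimal sketch (hypothetical data): category boundaries depend on
+# which subspace of the configuration space you cluster on.
+import numpy as np
+from sklearn.cluster import KMeans
+
+# Made-up feature vectors: [lives_in_water, has_gills, is_cold_blooded]
+animals = {
+    "salmon":  [1, 1, 1],
+    "trout":   [1, 1, 1],
+    "whale":   [1, 0, 0],
+    "dolphin": [1, 0, 0],
+    "cow":     [0, 0, 0],
+    "goat":    [0, 0, 0],
+}
+names = list(animals)
+X = np.array([animals[n] for n in names], dtype=float)
+
+# If your decisions only depend on habitat (say, where to hunt), run the
+# clustering on that one dimension: whales end up in the same category
+# as salmon (the dagim/water-dweller boundary).
+habitat = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[:, :1])
+
+# If your decisions depend on the physiological variables too, cluster
+# on the full space: whales separate from salmon (the English "fish"
+# boundary, plus a marine-mammal and a land-mammal cluster).
+full = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
+
+for name, h, f in zip(names, habitat, full):
+    print(f"{name:8}  habitat-only: {h}   full-space: {f}")
+```
+
+Same data, same algorithm; the only thing that changes is which variables are deemed decision-relevant. That's the sense in which categories can be "value-dependent" without being arbitrary.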
+
+My dayjob boss made it clear that he was expecting me to have code for my current Jira tickets by noon the next day, so I resigned myself to stay at the office late to finish that.
+
+But I was just in so much (psychological) pain. Or at least—as I noted in one of a series of emails to my posse that night—I felt motivated to type the sentence, "I'm in so much (psychological) pain." I'm never sure how to interpret my own self-reports, because even when I'm really emotionally trashed (crying, shaking, randomly yelling, _&c_.), I think I'm still noticeably _incentivizable_: if someone were to present a credible threat (like slapping me and telling me to snap out of it), then I would be able to calm down: there's some sort of game-theory algorithm in the brain that subjectively feels genuine distress (like crying or sending people too many hysterical emails) but only when it can predict that it will be either rewarded with sympathy or at least tolerated. (Kevin Simler: [tears are a discount on friendship](https://meltingasphalt.com/tears/).)
+
+I [tweeted a Sequences quote](https://twitter.com/zackmdavis/status/1107874587822297089) to summarize how I felt (the mention of @ESYudkowsky being to attribute credit; I figured Yudkowsky had enough followers that he probably wouldn't see a notification):
+
+> "—and if you still have something to protect, so that you MUST keep going, and CANNOT resign and wisely acknowledge the limitations of rationality— [1/3]
+>
+> "—then you will be ready to start your journey[.] To take sole responsibility, to live without any trustworthy defenses, and to forge a higher Art than the one you were once taught. [2/3]
+>
+> "No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand." —@ESYudkowsky (https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science) [end/3]
+
+Only it wasn't quite appropriate. The quote is about failure resulting in the need to invent new methods of rationality, better than the ones you were taught. But ... the methods I had been taught were great! I don't have a pressing need to improve on them! I just couldn't cope with everyone else having _forgotten!_
+
+I did, eventually, get some dayjob work done that night, but I didn't finish the whole thing my manager wanted done by the next day, and at 4 _a.m._, I concluded that I needed sleep, the lack of which had historically been very dangerous for me (being the trigger for my [2013](http://zackmdavis.net/blog/2013/04/prodrome/) and [2017](/2017/Mar/fresh-princess/) psychotic breaks and subsequent psych imprisonments). We didn't want another bad outcome like that; we really didn't. There was a couch in the office, and probably another four hours until my coworkers started to arrive. The thing I needed to do was just lie down on the couch in the dark and have faith that sleep would come. Meeting my manager's deadline wasn't _that_ important. When people came in to the office, I might ask for help getting an Uber home? Or help buying melatonin? The important thing was to be calm.
+
+I sent an email explaining this to Scott and my posse and two other friends (Subject: "predictably bad ideas").
+
+Lying down didn't work. So at 5:26 _a.m._, I sent an email to Scott, cc'ing my posse plus Anna, about why I was so mad (in both senses). I had a better draft sitting on my desktop at home, but since I was here and couldn't sleep, I might as well type this version (Subject: "five impulsive points, hastily written because I just can't even (was: Re: predictably bad ideas)"). Scott had been continuing to insist that it's OK to gerrymander category boundaries for trans people's mental health, but there were a few things I didn't understand.
+
+If creatively reinterpreting the meanings of words because the natural interpretation would make people sad is OK ... why doesn't that just generalize to an argument in favor of _outright lying_ when the truth would make people sad? The mind games seemed much crueler to me than a simple lie.
+
+Also, if "mental health benefits for trans people" mattered so much, then why didn't _my_ mental health matter? Wasn't I trans, sort of? Getting shut down by appeal-to-utilitarianism (!?!?) when I was trying to use reason to make sense of the world was observably really bad for my sanity! Did that matter at all?
+
+Also, Scott had asked me if it wouldn't be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and I had rejected it because of gender stuff? But the _original reason_ it had ever seemed _remotely_ plausible that we would create Utopia forever wasn't "because we're us, the self-designated world-saving good guys", but because we were going to perfect an art of _systematically correct reasoning_. If we're not going to do systematically correct reasoning because that would make people sad, then that undermines the _reason_ it was plausible that we would create Utopia forever; you can't just forfeit the mandate of Heaven like that and still expect to rule China.
+
+Also, Scott had proposed a super-Outside View of the culture war as an evolutionary process that produces memes optimized to trigger PTSD syndromes in people, and suggested that I think of _that_ as what was happening to me. But, depending on how much credence Scott put in social proof, mightn't the fact that I had managed to round up this whole posse to help me repeatedly argue with (or harass) Yudkowsky shift his estimate of whether my concerns had some objective merit that other people could see, too? It could simultaneously be the case that I had the culture-war PTSD he proposed, _and_ that my concerns had merit.
+
+[TODO: Michael jumps in to help, I rebuff him, Michael says WTF and calls me, I take a train home, Alicorn visits with her son—I mean, her son at the time]
+
+(Incidentally, the code that I wrote intermittently between 11 _p.m._ and 4 _a.m._ was a horrible bug-prone mess, and the company has been paying for it ever since, every time someone needs to modify that function and finds it harder to make sense of than it would have been if I had been less emotionally overwhelmed in March 2019 and had written something sane instead.)
+
+I think at some level, I wanted Scott to know how frustrated I was about his use of "mental health for trans people" as an Absolute Denial Macro. But then when Michael started advocating on my behalf, I started to minimize my claims because I had a generalized attitude of not wanting to sell myself as a victim. (Michael seemed to have a theory that people will only change their bad behavior when they see a victim who is being harmed.)
+
+I supposed that, in Michael's worldview, aggression is more honest than passive-aggression. That seemed obviously true, but I was psychologically limited in how much aggression I was willing to deploy against my friends. (And particularly Yudkowsky, whom I still hero-worshipped.) But clearly, the tension between "I don't want to do too much social aggression" and "losing the Category War within the rationalist community is _absolutely unacceptable_" was causing me to make wildly inconsistent decisions. (Emailing Scott at 4 _a.m._, and then calling Michael "aggressive" when he came to defend me, was just crazy.)
+
+Was the answer just that I needed to accept that there wasn't such a thing in the world as a "rationalist community"? (Sarah had told me as much two years ago, at BABSCon, and I just hadn't made the corresponding mental adjustments.)
+
+On the other hand, a possible reason to be attached to the "rationalist" brand name and social identity that wasn't just me being stupid was that _the way I talk_ had been trained really hard on this subculture for _ten years_. Most of my emails during this whole campaign had contained multiple Sequences or _Slate Star Codex_ links that I could just expect people to have read. I could spontaneously use the phrase "Absolute Denial Macro" in conversation and expect to be understood. That's a massive "home field advantage." If I just gave up on "rationalists" being a thing, and went out into the world to make intellectual friends elsewhere (by making friends with _Quillette_ readers or arbitrary University of Chicago graduates), then I would lose all that accumulated capital.
+
+The language I spoke was _mostly_ educated American English, but I relied on subculture dialect for a lot. My sister has a chemistry doctorate from MIT (and so speaks the language of STEM intellectuals generally), and when I showed her ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), she reported finding it somewhat hard to read, likely because I casually use phrases like "thus, an excellent [motte](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/)", and expect to be understood without the reader taking 10 minutes to read the link. That essay, which was me writing from the heart in the words that came most naturally to me, could not be published in _Quillette_. The links and phraseology were just too context-bound.
+
+Maybe that's why I felt like I had to stand my ground and fight a culture war to preserve the world I was made in, even though the contradiction between the war effort and my general submissiveness was driving me to make crazy decisions.
+
+[TODO SECTION: proton concession
+ * as it happened, the next day, Wednesday, we got this: https://twitter.com/ESYudkowsky/status/1108277090577600512 (Why now? maybe he saw the "tools have shattered in their hand"; maybe the Quillette article just happened to be timely)
+ * A concession! In the war frame, you'd think this would make me happy
+ * "I did you a favor by Tweeting something obliquely favorable to your object-level crusade, and you repay me by criticizing me? How dare you?!" My model of Sequences-era Eliezer-2009 would never do that, because the species-typical arguments-as-social-exchange
+ * do you think Eliezer is thinking, "Fine, if I tweet something obliquely favorable towards Zack's object-level agenda, maybe Michael's gang will leave me alone now"
+ * If there's some other reason you suspect there might be multiple species of dysphoria, but you tell people your suspicion is because dysphoria has more than one proton, then you're still kind of misinforming them for political reasons, which is the generalized problem that we're worried about?
+ * Michael's take: not worth the digression; we need to confront the actual crisis
+ * We need to figure out how to win against bad faith arguments
+]
+
+[TODO: Jessica joins the coalition; she tells me about her time at MIRI (link to Zoe-piggyback and Occupational Infohazards); Michael said that me and Jess together have more moral authority]
+
+[TODO: wrapping up with Scott; Kelsey; high and low Church https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/]
+
+[TODO: Ben reiterated that the most important thing was explaining why I've written them off; self-promotion imposes a cost on others; Jessica on creating clarity; Michael on less precise is more violent]
+
+[TODO: after some bouncing off the posse, what was originally an email draft became a public _Less Wrong_ post, "Where to Draw the Boundaries?" (note, plural)
+ * Wasn't the math overkill?
+ * math is important for appeal to principle—and as intimidation https://slatestarcodex.com/2014/08/10/getting-eulered/
+ * four simulacra levels got kicked off here
+ * no politics! just philosophy!
+ * Ben on Michael on whether we are doing politics; "friendship, supplication, and economics"
+ * I could see that I'm including subtext and expecting people to only engage with the text, but if we're not going to get into full-on gender-politics on Less Wrong, but gender politics is motivating an epistemology error, I'm not sure what else I'm supposed to do! I'm pretty constrained here!
+ * I had already poisoned the well with "Blegg Mode" the other month, bad decision
+ * We lost?! How could we lose??!!?!?
+]
+
+------
+
+
+[TODO: I was floored; math and wellness month
+ Anna doesn't want money from me
+ scuffle on "Yes Requires the Possibility of No"
+ LessWrong FAQ https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk ]