Lying down didn't work. So at 5:26 _a.m._, I sent an email to Scott, cc'ing my posse plus Anna, about why I was so mad (both senses). I had a better draft sitting on my desktop at home, but since I was here and couldn't sleep, I might as well type this version (Subject: "five impulsive points, hastily written because I just can't even (was: Re: predictably bad ideas)"). Scott had been continuing to insist that it's OK to gerrymander category boundaries for trans people's mental health, but there were a few things I didn't understand. If creatively reinterpreting the meanings of words because the natural interpretation would make people sad is OK ... why doesn't that just generalize to an argument in favor of _outright lying_ when the truth would make people sad? The mind games seemed much crueler to me than a simple lie. Also, if "mental health benefits for trans people" matter so much, then, why didn't _my_ mental health matter? Wasn't I trans, sort of? Getting shut down by appeal-to-utilitarianism (!?!?) when I was trying to use reason to make sense of the world was observably really bad for my sanity! Did that matter at all? Also, Scott had asked me if it wouldn't be embarrassing, if the community solved Friendly AI and went down in history as the people who created Utopia forever, and I had rejected it because of gender stuff? But the _original reason_ it had ever seemed _remotely_ plausible that we would create Utopia forever wasn't "because we're us, the self-designated world-saving good guys", but because we were going to perfect an art of _systematically correct reasoning_. If we're not going to do systematically correct reasoning because that would make people sad, then that undermines the _reason_ that it was plausible that we would create Utopia forever; you can't just forfeit the mandate of Heaven like that and still expect to rule China.
Also, Scott had proposed a super-Outside View of the culture war as an evolutionary process that produces memes optimized to trigger PTSD syndromes in people, and suggested that I think of _that_ as what was happening to me. But, depending on how much credence Scott put in social proof, mightn't the fact that I managed to round up this whole posse to help me repeatedly argue with (or harass) Yudkowsky shift his estimate of whether my concerns had some objective merit that other people could see, too? It could simultaneously be the case that I had the culture-war PTSD that he proposed, _and_ that my concerns had merit.
[TODO: Michael jumps in to help, I rebuff him, Michael says WTF and calls me, I take a train home, Alicorn visits

One of the other friends I had cc'd on some of the emails came to visit me with her young son—I mean, her son at the time.

]
(Incidentally, the code that I wrote intermittently between 11 _p.m._ and 4 _a.m._ was a horrible bug-prone mess, and the company has been paying for it ever since, every time someone needs to modify that function and finds it harder to make sense of than it would be if I had been less emotionally overwhelmed in March 2019 and written something sane instead.)
I think at some level, I wanted Scott to know how frustrated I was about his use of "mental health for trans people" as an Absolute Denial Macro. But then when Michael started advocating on my behalf, I started to minimize my claims because I had a generalized attitude of not wanting to sell myself as a victim. (Michael seemed to have a theory that people will only change their bad behavior when they see a victim who is being harmed.)
I supposed that, in Michael's worldview, aggression is more honest than passive-aggression. That seemed obviously true, but I was psychologically limited in how much aggression I was willing to deploy against my friends. (And particularly Yudkowsky, who I still hero-worshipped.) But clearly, the tension between "I don't want to do too much social aggression" and "losing the Category War within the rationalist community is _absolutely unacceptable_" was causing me to make wildly inconsistent decisions. (Emailing Scott at 4 a.m., and then calling Michael "aggressive" when he came to defend me was just crazy.)
Ben pointed out that [making oneself mentally ill in order to extract political concessions](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) only works if you have a lot of people doing it in a visibly coordinated way. And even if it did work, getting into a dysphoria contest with trans people didn't seem like it led anywhere good.

Was the answer just that I needed to accept that there wasn't such a thing in the world as a "rationalist community"? (Sarah had told me as much two years ago, at BABSCon, and I just hadn't made the corresponding mental adjustments.)
On the other hand, a possible reason to be attached to the "rationalist" brand name and social identity that wasn't just me being stupid was that _the way I talk_ had been trained really hard on this subculture for _ten years_. Most of my emails during this whole campaign had contained multiple Sequences or _Slate Star Codex_ links that I could just expect people to have read. I could spontaneously use [the phrase "Absolute Denial Macro"](https://www.lesswrong.com/posts/t2NN6JwMFaqANuLqH/the-strangest-thing-an-ai-could-tell-you) in conversation and expect to be understood. That's a massive "home field advantage." If I just gave up on the "rationalists" being a thing, and went out into the world to make friends with _Quillette_ readers or arbitrary University of Chicago graduates, then I would lose all that accumulated capital.
The language I spoke was _mostly_ educated American English, but I relied on subculture dialect for a lot. My sister has a chemistry doctorate from MIT (and so speaks the language of STEM intellectuals generally), and when I showed her ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), she reported finding it somewhat hard to read, likely because I casually use phrases like "thus, an excellent [motte](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/)", and expect to be understood without the reader taking 10 minutes to read the link. That essay, which was me writing from the heart in the words that came most naturally to me, could not be published in _Quillette_. The links and phraseology were just too context-bound.
Maybe that's why I felt like I had to stand my ground and fight for the world I was made in, even though the contradiction between the war effort and my general submissiveness was causing me to make crazy decisions.
[TODO SECTION: proton concession
* as it happened, the next day, Wednesday, we got this: https://twitter.com/ESYudkowsky/status/1108277090577600512 (Why now? maybe he saw the "tools have shattered in their hand"; maybe the Quillette article just happened to be timely)
]
[TODO section: wrapping up with Scott; Kelsey; high and low Church https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/]
[SECTION: treachery and faith

I concluded that further email prosecution was not useful at this time. My revised Category War to-do list was:

 * Send a _brief_ wrapping-up/end-of-conversation email to Scott (with the anecdote from Discord and commentary on his orc story).
 * Mentally write off Scott, Eliezer, and the so-called "rationalist" community as a loss so that I wouldn't be in horrible emotional pain from cognitive dissonance all the time.
 * Write up the long, engaging, depoliticized mathy version of the categories argument for _Less Wrong_ (which I thought might take a few months—I had a dayjob, and wrote slowly, and might need to learn some new math, which I was also slow at).
 * _Then_ email the link to Scott and Eliezer asking for a signal-boost and/or court ruling.

Ben didn't think the categories argument was the most important thing for

(Subject: "treachery, faith, and the great river (was: Re: DRAFTS: 'wrapping up; or, Orc-ham's razor' and 'on the power and efficacy of categories')")

]

[SECTION: about monasteries—

"Getting the right answer in public on topic _X_ would be too expensive, so we won't do it" is _less damaging_ when the set of such <em>X</em>es is _small_. It looked to me like we had added a new forbidden topic in the last ten years, without rolling back any of the old ones.

"Reasoning in public is too expensive; reasoning in private is good enough" is _less damaging_ when there's some sort of _recruiting pipeline_ from the public into the monasteries: lure young smart people in with entertaining writing and shiny math, _then_ gradually undo their brainwashing once they've already joined your cult. (It had [worked on me](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/)!)

I would be sympathetic to "rationalist" leaders like Anna or Yudkowsky playing that strategy if there were some sort of indication that they had _thought_, at all, about the pipeline problem—or even an indication that there _was_ an intact monastery somewhere.

]

[TODO small section: concern about bad faith nitpicking—

One reason someone might be reluctant to correct mistakes when pointed out is the fear that such a policy could be abused by motivated nitpickers. It would be pretty annoying to be obligated to churn out an endless stream of trivial corrections by someone motivated to comb through your entire portfolio and point out every little thing you did imperfectly, ever.

I wondered if maybe, in Scott or Eliezer's mental universe, I was a blameworthy (or pitiably mentally ill) nitpicker for flipping out over a blog post from 2014 (!) and some Tweets (!!) from November. Like, really? I, too, had probably said things that were wrong _five years ago_.

But, well, I thought I had made a pretty convincing case that a lot of people were making a correctable and important rationality mistake, such that the cost of a correction (about the philosophy of language specifically, not any possible implications for gender politics) would actually be justified here. If someone had put _this much_ effort into pointing out an error I had made four months or five years ago and making careful arguments for why it was important to get the right answer, I think I _would_ put some serious thought into it rather than brushing them off.

]

[TODO: Jessica on corruption—

> I am reminded of someone who I talked with about Zack writing to you and Scott to request that you clarify the category boundary thing. This person had an emotional reaction described as a sense that "Zack should have known that wouldn't work" (because of the politics involved, not because Zack wasn't right). Those who are savvy in high-corruption equilibria maintain the delusion that high corruption is common knowledge, to justify expropriating those who naively don't play along, by narratizing them as already knowing and therefore intentionally attacking people, rather than being lied to and confused.

]
[TODO: after some bouncing off the posse, what was originally an email draft became a public _Less Wrong_ post, "Where to Draw the Boundaries?" (note, plural)
* Wasn't the math overkill?
* math is important for appeal to principle—and as intimidation https://slatestarcodex.com/2014/08/10/getting-eulered/
* four simulacra levels got kicked off here
* no politics! just philosophy!
* I could see that I'm including subtext and expecting people to only engage with the text, but if we're not going to get into full-on gender-politics on Less Wrong, but gender politics is motivating an epistemology error, I'm not sure what else I'm supposed to do! I'm pretty constrained here!
* I had already poisoned the well with "Blegg Mode" the other month, bad decision
* We lost?! How could we lose??!!?!?
]