Lying down didn't work. So at 5:26 _a.m._, I sent an email to Scott, cc'ing my posse plus Anna, about why I was so mad (in both senses). I had a better draft sitting on my desktop at home, but since I was here and couldn't sleep, I might as well type this version (Subject: "five impulsive points, hastily written because I just can't even (was: Re: predictably bad ideas)").

Scott had been continuing to insist that it's OK to gerrymander category boundaries for trans people's mental health, but there were a few things I didn't understand. If creatively reinterpreting the meanings of words because the natural interpretation would make people sad is OK ... why doesn't that just generalize to an argument in favor of _outright lying_ when the truth would make people sad? The mind games seemed much crueler to me than a simple lie.

Also, if "mental health benefits for trans people" mattered so much, then why didn't _my_ mental health matter? Wasn't I trans, sort of? Getting shut down by appeal-to-utilitarianism (!?!?) when I was trying to use reason to make sense of the world was observably really bad for my sanity! Did that matter at all?

Also, Scott had asked me whether it wouldn't be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and I had rejected it because of gender stuff. But the _original reason_ it had ever seemed _remotely_ plausible that we would create Utopia forever wasn't "because we're us, the self-designated world-saving good guys", but because we were going to perfect an art of _systematically correct reasoning_. If we're not going to do systematically correct reasoning because that would make people sad, then that undermines the _reason_ it was plausible that we would create Utopia forever; you can't forfeit the mandate of Heaven like that and still expect to rule China.
Also, Scott had proposed a super-Outside View of the culture war as an evolutionary process that produces memes optimized to trigger PTSD syndromes in people, and suggested that I think of _that_ as what was happening to me. But, depending on how much credence Scott put in social proof, mightn't the fact that I managed to round up this whole posse to help me repeatedly argue with (or harass) Yudkowsky shift his estimate of whether my concerns had some objective merit that other people could see, too? It could simultaneously be the case that I had the culture-war PTSD that he proposed, _and_ that my concerns had merit.
-[TODO: Michael jumps in to help, I rebuff him, Michael says WTF and calls me, I take a train home, Alicorn visits with her son—I mean, her son at the time]
+[TODO: Michael jumps in to help, I rebuff him, Michael says WTF and calls me, I take a train home, Alicorn visits with her son—I mean, her son at the time
+
+One of the other friends I had cc'd on some of the emails came to visit me with her young son—I mean, her son at the time.
+
+]
(Incidentally, the code that I wrote intermittently between 11 _p.m._ and 4 _a.m._ was a horrible bug-prone mess, and the company has been paying for it ever since, every time someone needs to modify that function and finds it harder to make sense of than it would be if I had been less emotionally overwhelmed in March 2019 and written something sane instead.)
Was the answer just that I needed to accept that there wasn't such a thing in the world as a "rationalist community"? (Sarah had told me as much two years ago, at BABSCon, and I just hadn't made the corresponding mental adjustments.)
-On the other hand, a possible reason to be attached to the "rationalist" brand name and social identity that wasn't just me being stupid was that _the way I talk_ had been trained really hard on this subculture for _ten years_. Most of my emails during this whole campaign had contained multiple Sequences or _Slate Star Codex_ links that I could just expect people to have read. I could spontaneously use the phrase "Absolute Denial Macro" in conversation and expect to be understood. That's a massive "home field advantage." If I just gave up on "rationalists" being a thing, and go out in the world to make intellectual friends elsewhere (by making friends with _Quillette_ readers or arbitrary University of Chicago graduates), then I would lose all that accumulated capital.
+On the other hand, a possible reason to be attached to the "rationalist" brand name and social identity that wasn't just me being stupid was that _the way I talk_ had been trained really hard on this subculture for _ten years_. Most of my emails during this whole campaign had contained multiple Sequences or _Slate Star Codex_ links that I could just expect people to have read. I could spontaneously use [the phrase "Absolute Denial Macro"](https://www.lesswrong.com/posts/t2NN6JwMFaqANuLqH/the-strangest-thing-an-ai-could-tell-you) in conversation and expect to be understood. That's a massive "home field advantage." If I just gave up on "rationalists" being a thing, and went out into the world to make intellectual friends elsewhere (by making friends with _Quillette_ readers or arbitrary University of Chicago graduates), then I would lose all that accumulated capital.
The language I spoke was _mostly_ educated American English, but I relied on subculture dialect for a lot. My sister has a chemistry doctorate from MIT (and so speaks the language of STEM intellectuals generally), and when I showed her ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), she reported finding it somewhat hard to read, likely because I casually use phrases like "thus, an excellent [motte](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/)", and expect to be understood without the reader taking 10 minutes to read the link. That essay, which was me writing from the heart in the words that came most naturally to me, could not be published in _Quillette_. The links and phraseology were just too context-bound.
24 May: Anna seems to be regaining power of speech (Facebook post on U.S. decline)
2 Jun: I send an email to Cade Metz, who DMed me on Twitter
25 Jul: rubber-duck philosophy for "Unnatural Categories"!!
-[bookmark]
4 Sep: misguided by the hideousness of our weapons?! or, theory of universal algorithmic bad faith
+
> When I look at the world, it looks like [Scott](http://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) and [Eliezer](https://twitter.com/ESYudkowsky/status/1067183500216811521) and [Kelsey](https://theunitofcaring.tumblr.com/post/171986501376/your-post-on-definition-of-gender-and-woman-and) and [Robby Bensinger](https://www.facebook.com/robbensinger/posts/10158073223040447?comment_id=10158073685825447&reply_comment_id=10158074093570447&comment_tracking=%7B%22tn%22%3A%22R2%22%7D) seem to think that some variation on ["I can define a word any way I want"]() is sufficient to end debates on transgender identity.
> And ... I want to be nice to my trans friends, too, but that can't possibly be the definitive end-of-conversation correct argument. Not _really_. Not if you're being serious.
+editing tier—
+_ better explanation of posse formation
+_ Emperor Norton ordered Hayes executed
+_ address the "maybe it's good to be called names" point from "Hill" thread
+_ maybe quote Michael's Nov 2018 texts?
+_ the right way to explain how I'm respecting Yudkowsky's privacy
+_ clarify sequence of outreach attempts
+_ clarify existence of a shadow posse member
+_ mention Nov. 2018 conversation with Ian somehow
+_ Said on Yudkowsky's retreat to Facebook being bad for him
+_ Discord discourse with Alicorner
+_ screenshot Rob's Facebook comment which I link
+_ explain first use of Center for Applied Rationality
+_ erasing agency of Michael's friends, construed as a pawn
+_ Anna thought badmouthing Michael was OK by Michael's standards
+
+people to consult before publishing, for feedback or right of objection—
+_ Iceman
+_ Ben/Jessica
+_ Scott
+_ Anna
+_ secret posse member
+_ someone from Alicorner #drama as a hostile prereader
+(probably don't bother with Michael?)
+
+
+-------
+
The thing about our crowd is that we have a lamentably low proportion of women (13.3% cis women in the last community survey) and—I don't know when this happened; it certainly didn't feel like this back in 'aught-nine—an enormous number of trans women relative to population base rates (2.7%, for a cis-to-trans ratio of 4.9!!), the vast majority of whom I expect to be AGP.
https://slatestarscratchpad.tumblr.com/post/142995164286/i-was-at-a-slate-star-codex-meetup. "We are solving the gender ratio issue one transition at a time"
Really, self-respecting trans people who care about logical consistency should abhor Scott and Eliezer's opinions—they should want people to use the right pronouns _because_ of their gender soul or _because_ their transition actually worked, not because categories are flexible and pronouns shouldn't imply gender.
+https://twitter.com/ESYudkowsky/status/1435605868758765568
+> Because it was you, I tried to read this when it came out. But you do not know how to come to a point, because you are too horrified by the thought that a reader might disagree with you if you don't write even more first; so then I started skimming, and then I gave up.
+
> If you think you can win a battle about 2 + 3 = 5, then it can feel like victory or self-justification to write a huge long article hammering on that; but it doesn't feel as good to engage with how the Other does not think they are arguing 2 + 3 = 6, they're talking about 2 * 3.
https://twitter.com/ESYudkowsky/status/1435618825198731270
The McGonagall-turning-into-a-cat parody may actually be worth fitting in—McGonagall turning into a cat broke Harry's entire worldview. Similarly, the "pretend to turn into a cat, and everyone just buys it" maneuver broke my religion
* https://everythingtosaveit.how/case-study-cfar/#attempting-to-erase-the-agency-of-everyone-who-agrees-with-our-position
+
+Michael on EA suppressing credible criticism https://twitter.com/HiFromMichaelV/status/1559534045914177538
\ No newline at end of file
-editing tier—
-_ better explanation of posse formation
-_ Emperor Norton ordered Hayes executed
-_ address the "maybe it's good to be called names" point from "Hill" thread
-_ maybe quote Michael's Nov 2018 texts?
-_ the right way to explain how I'm respecting Yudkowsky's privacy
-_ clarify sequence of outreach attempts
-_ clarify existence of a shadow posse member
-_ mention Nov. 2018 conversation with Ian somehow
-_ Said on Yudkowsky's retreat to Facebook being bad for him
-_ Discord discourse with Alicorner
-_ screenshot Rob's Facebook comment which I link
-_ explain first use of Center for Applied Rationality
-_ erasing agency of Michael's friends, construed as a pawn
-_ Anna thought badmouthing Michael was OK by Michael's standards
-
-
Urgent/needed for healing—
_ I Am Not Great With Secrets (aAL)
_ Reply to Scott Alexander on Autogenderphilia