+I wasn't, in fact, satisfied. This little "not ontologically confused" clarification buried in the replies was _much less visible_ than the bombastic, arrogant top-level pronouncement insinuating that resistance to gender-identity claims _was_ confused. (1 Like on this reply, _vs._ 140 Likes/21 Retweets on the start of the thread.) I expected that the typical reader who had gotten the impression from the initial thread that the great Eliezer Yudkowsky thought that gender-identity skeptics didn't have a leg to stand on would not, actually, be disabused of this impression by the existence of this little follow-up. Was it greedy of me to want something _louder_?
+
+Greedy or not, I wasn't done flipping out. On 1 December, I wrote to Scott Alexander (cc'ing a few other people), asking if there was any chance of an _explicit_ and _loud_ clarification or partial-retraction of ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (Subject: "super-presumptuous mail about categorization and the influence graph"). _Forget_ my boring whining about the autogynephilia/two-types thing, I said—that's a complicated empirical claim, and _not_ the key issue.
+
+The _issue_ is that category boundaries are not arbitrary (if you care about intelligence being useful): you want to [draw your category boundaries such that](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) things in the same category are similar in the respects that you care about predicting/controlling, and you want to spend your [information-theoretically limited budget](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes) of short words on the simplest and most wide-rangingly useful categories.
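+
+(To make that second point concrete: here's a toy calculation—with word frequencies I made up purely for illustration, nothing rigorous—showing that an information-theoretically efficient code spends about −log₂(probability) bits per mention, which is why the categories you invoke most often should get the shortest words.)
+
+```python
+import math
+from collections import Counter
+
+# Hypothetical usage counts (made up for illustration): how often you
+# need to refer to each category in everyday speech.
+usage = Counter({"dog": 512, "poodle": 32, "standard poodle": 4})
+total = sum(usage.values())
+for phrase, count in usage.most_common():
+    p = count / total
+    # An efficient code assigns each category a word costing about
+    # -log2(p) bits, so frequent categories get the short words.
+    print(f"{phrase!r}: p = {p:.3f}, ideal code length ≈ {-math.log2(p):.2f} bits")
+```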
+
+It's true that [the reason _I_ was continuing to freak out about this](/2019/Jul/the-source-of-our-power/) to the extent of sending him this obnoxious email telling him what to write (seriously, who does that?!) had to do with transgender stuff, but that wasn't the reason _Scott_ should care.
+
+The other year, Alexander had written a post, ["Kolmogorov Complicity and the Parable of Lightning"](http://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/), explaining the consequences of political censorship by means of an allegory about a Society with the dogma that thunder occurs before lightning. The problem isn't so much the sacred dogma itself (it's not often that you need to _directly_ make use of the fact that thunder comes first), but that the need to _defend_ the sacred dogma _destroys everyone's ability to think_.
+
+It was the same thing here. It wasn't that I had any direct practical need to misgender anyone in particular. It still wasn't okay that trying to talk about the reality of biological sex to so-called "rationalists" got you an endless deluge of—polite! charitable! non-ostracism-threatening!—_bullshit nitpicking_. (What about [complete androgen insensitivity syndrome](https://en.wikipedia.org/wiki/Complete_androgen_insensitivity_syndrome)? Why doesn't this ludicrous misinterpretation of what you said [imply that lesbians aren't women](https://thingofthings.wordpress.com/2018/06/18/man-should-allocate-some-more-categories/)? _&c. ad infinitum_.) With enough time, I thought the nitpicks could and should be satisfactorily answered. (Any that couldn't would presumably be fatal criticisms rather than bullshit nitpicks.) But while I was in the process of continuing to write all that up, I hoped Alexander could see why I felt somewhat gaslighted.
+
+(I had been told by others that I wasn't using the word "gaslighting" correctly. _Somehow_ no one seemed to think I had the right to define _that_ category boundary for my convenience.)
+
+If our vaunted rationality techniques result in me having to spend dozens of hours patiently explaining why I don't think that I'm a woman and that [the person in this photograph](https://daniellemuscato.startlogic.com/uploads/3/4/9/3/34938114/2249042_orig.jpg) isn't a woman, either (where "isn't a woman" is a _convenient rhetorical shorthand_ for a much longer statement about [naïve Bayes models](https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes) and [high-dimensional configuration spaces](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace) and [defensible Schelling points for social norms](https://www.lesswrong.com/posts/Kbm6QnJv9dgWsPHQP/schelling-fences-on-slippery-slopes)), then our techniques are _worse than useless_.
+
+If Galileo ever muttered "And yet it moves", there's a long and nuanced conversation you could have about the consequences of using the word "moves" in Galileo's preferred sense or some other sense that happens to result in the theory needing more epicycles. It may not have been obvious in November 2014, but in retrospect, _maybe_ it was a _bad_ idea to build a [memetic superweapon](https://archive.is/VEeqX) that says that the number of epicycles _doesn't matter_.
+
+And the reason to write this as a desperate email plea to Scott Alexander, when I could have been working on my own blog, was that I was afraid that marketing is a more powerful force than argument. Rather than good arguments propagating through the population of so-called "rationalists" no matter where they arise, what actually happens is that people like Alexander and Yudkowsky rise to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else sort-of absorbs most of their worldview (plus noise and [conformity with the local environment](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/)). So for people who didn't [win the talent lottery](http://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) but think they see a flaw in the _Zeitgeist_, the winning move is "persuade Scott Alexander."
+
+Back in 2010, the rationalist community had a shared understanding that the function of language is to describe reality. Now, we didn't. If Scott didn't want to cite my creepy blog about my creepy fetish, that was _totally fine_; I _liked_ getting credit, but the important thing is that this "No, the Emperor isn't naked—oh, well, we're not claiming that he's wearing any garments—it would be pretty weird if we were claiming _that!_—it's just that utilitarianism implies that the _social_ property of clothedness should be defined this way because to do otherwise would be really mean to people who don't have anything to wear" gaslighting maneuver needed to _die_, and he alone could kill it.
+
+... Scott didn't get it.
+
+But I _did_ end up in more conversation with Michael Vassar, Ben Hoffman, and Sarah Constantin, who were game to help me with reaching out to Yudkowsky again to explain the problem in more detail. Now that we had this entire posse, I felt bad and guilty and ashamed about focusing too much on my special interest, except insofar as it was genuinely a proxy for "Have Eliezer and/or everyone else lost the plot, and if so, how do we get it back?" But the group seemed to agree that my philosophy-of-language grievance was a useful test case for prosecuting deeper maladies affecting our subculture.
+
+There were times during these weeks when it felt like my mind shut down with the only thought, "What am I _doing_? This is _absurd_. Why am I running around picking fights about the philosophy of language—and worse, with me arguing for the _Bad_ Guys' position? Maybe I'm wrong and should stop making a fool out of myself. After all, using Aumann-like reasoning, in a dispute of 'me and Michael Vassar vs. _everyone fucking else_', wouldn't I want to bet on 'everyone else'? Obviously."
+
+Except ... I had been raised back in the 'aughts to believe that you're supposed to concede arguments on the basis of encountering a superior counterargument that makes you change your mind, and I couldn't actually point to one. "Maybe I'm making a fool out of myself by picking fights with all these high-status people" is _not a counterargument_.
+
+Meanwhile, Anna continued to be disinclined to take a side in the brewing Category War, and it was beginning to put a strain on our friendship, to the extent that I kept ending up crying at some point during our occasional meetings. She told me that my "You have to pass my philosophy-of-language litmus test or I lose all respect for you as a rationalist" attitude was psychologically coercive. I agreed—I was even willing to go up to "violent"—in the sense that I'd cop to [trying to apply social incentives towards an outcome rather than merely exchanging information](http://zackmdavis.net/blog/2017/03/an-intuition-on-the-bayes-structural-justification-for-free-speech-norms/). But sometimes you need to use violence in defense of self or property, even if violence is generally bad. If we think of the "rationalist" label as intellectual property, maybe it's property worth defending, and if so, then "I can define a word any way I want" isn't obviously a terrible time to start shooting at the bandits?
+
+My _hope_ was that it was possible to apply just enough "What kind of rationalist are _you_?!" social pressure to cancel out the "You don't want to be a Bad (Red) person, do you??" social pressure and thereby let people look at the arguments—though I wasn't sure if that actually works, and I was growing exhausted from all the social aggression I was doing about it. (If someone tries to take your property and you shoot at them, you could be said to be the "aggressor" in the sense that you fired the first shot, even if you hope that the courts will uphold your property claim later.)
+
+There's a view that assumes that as long as everyone is being cordial, our truthseeking public discussion must be basically on-track: if no one overtly gets huffily offended and calls to burn the heretic, then the discussion isn't being warped by the fear of heresy.
+
+I do not hold this view. I think there's a _subtler_ failure mode where people know what the politically-favored bottom line is, and collude to ignore, nitpick, or just be targetedly _uninterested_ in any fact or line of argument that doesn't fit the party line. I want to distinguish between direct ideological conformity enforcement attempts, and "people not living up to their usual epistemic standards in response to ideological conformity enforcement in the general culture they're embedded in."
+
+Especially compared to normal Berkeley, I had to give the Berkeley "rationalists" credit for being _very good_ at free speech norms. (I'm not sure I would be saying this in the world where Scott Alexander didn't have a traumatizing experience with social justice in college, causing him to dump a ton of anti-social-justice, pro-argumentative-charity antibodies in the "rationalist" collective "water supply" after he became our subculture's premier writer. But it was true in _our_ world.)
+
+I didn't want to fall into the [bravery-debate](http://slatestarcodex.com/2013/05/18/against-bravery-debates/) trap of, "Look at me, I'm so heroically persecuted, therefore I'm right (therefore you should have sex with me)". I wasn't angry at the "rationalists" for being silenced or shouted down (which I wasn't); I was angry at them for _making bad arguments_ and systematically refusing to engage with the obvious counterarguments when they were made.
+
+Ben thought I was wrong to think of this as non-ostracizing. The deluge of motivated nitpicking _is_ an implied marginalization threat, he explained: the game people are playing when they do that is to force me to choose between doing arbitrarily large amounts of interpretive labor, or being cast as never having answered these construed-as-reasonable objections, and therefore over time losing standing to make the claim, being thought of as unreasonable, not getting invited to events, _&c._
+
+I saw the dynamic he was pointing at, but as a matter of personality, I was more inclined to respond, "Welp, I guess I need to write faster and more clearly", rather than to say "You're dishonestly demanding arbitrarily large amounts of interpretive labor from me." I thought Ben was far too quick to give up on people who he modeled as trying not to understand, whereas I continued to have faith in the possibility of _making_ them understand if I just never gave up. Even _if_ the other person was being motivatedly dense, giving up wouldn't make me a stronger writer.
+
+(Picture me playing Hermione Granger in a post-Singularity [holonovel](https://memory-alpha.fandom.com/wiki/Holo-novel_program) adaptation of _Harry Potter and the Methods of Rationality_ (Emma Watson having charged me [the standard licensing fee](/2019/Dec/comp/) to use a copy of her body for the occasion): "[We can do anything if we](https://www.hpmor.com/chapter/30) exert arbitrarily large amounts of interpretive labor!")
+
+Ben thought that making them understand was hopeless and that becoming a stronger writer was a boring goal; it would be a better use of my talents to jump up an additional meta level and explain _how_ people were failing to engage. That is, I had a model of "the rationalists" that kept making bad predictions. What's going on there? Something interesting might happen if I try to explain _that_.
+
+(I guess I'm only now, after spending an additional three years exhausting every possible line of argument, taking Ben's advice on this by writing this memoir. Sorry, Ben—and thanks.)
+
+One thing I regret about my behavior during this period was the extent to which I was emotionally dependent on my posse, and in some ways particularly Michael, for validation. I remembered Michael as a high-status community elder back in the _Overcoming Bias_ era (to the extent that there was a "community" in those early days). I had been somewhat skeptical of him, then: the guy makes a lot of stridently "out there" assertions by the standards of ordinary social reality, in a way that makes you assume he must be speaking metaphorically. (He always insists that he's being completely literal.) But he had social proof as the President of the Singularity Institute—the "people person" of our world-saving effort, to complement Yudkowsky's anti-social mad scientist personality—so I took his "crazy"-sounding assertions more seriously, more charitably than I would have in the absence of that social proof.
+
+Now, the memory of that social proof was a lifeline. Dear reader, if you've never been in the position of disagreeing with the entire weight of Society's educated opinion, _including_ your idiosyncratic subculture that tells itself a story about being smarter than the surrounding Society—let me tell you, it's _stressful_. [There was a comment on /r/slatestarcodex around this time](https://old.reddit.com/r/slatestarcodex/comments/anvwr8/experts_in_any_given_field_how_would_you_say_the/eg1ga9a/) that cited Yudkowsky, Alexander, Ozy, _The Unit of Caring_, and Rob Bensinger as leaders of the "rationalist" community—just an arbitrary Reddit comment of no significance whatsoever—but it was a salient indicator of the _Zeitgeist_ to me, because _[every](https://twitter.com/ESYudkowsky/status/1067183500216811521) [single](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) [one](https://thingofthings.wordpress.com/2018/06/18/man-should-allocate-some-more-categories/) of [those](https://theunitofcaring.tumblr.com/post/171986501376/your-post-on-definition-of-gender-and-woman-and) [people](https://www.facebook.com/robbensinger/posts/10158073223040447?comment_id=10158073685825447&reply_comment_id=10158074093570447)_ had tried to get away with some variant on the "categories are subjective, therefore you have no grounds to object to the claim that trans women are women" mind game.
+
+In the face of that juggernaut of received opinion, I was already feeling pretty gaslighted. ("We ... we had a whole Sequence about this. Didn't we? And, and ... [_you_ were there](https://tvtropes.org/pmwiki/pmwiki.php/Main/AndYouWereThere), and _you_ were there ... It—really happened, right? I didn't just imagine it? The [hyperlinks](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) [still](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) [work](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace) ...")
+
+I don't know how my mind would have held up intact if I were just facing it alone; it's hard to imagine what I would have done in that case. I definitely wouldn't have had the impudence to pester Scott and Yudkowsky the way I did—_especially_ Yudkowsky—if it was just me against everyone else.
+
+But _Michael thought I was in the right_—not just intellectually on the philosophy issue, but morally in the right to be _prosecuting_ the philosophy issue, and not accepting stonewalling as an answer. That meant a lot to me.
+
+
+
+
+[TODO SECTION: Anna Michael feud
+ * This may have been less effective than it was in my head; _I remembered_ Michael as being high-status
+ * Anna's 2 Mar comment badmouthing Michael
+ * my immediate response: I strongly agree with your point about "ridicule of obviously-fallacious reasoning plays an important role in discerning which thinkers can (or can't) help fill these functions"! That's why I'm so heartbroken about the "categories are arbitrary, therefore trans women are women" thing, which deserves to be laughed out of the room.
+ * "sacrificed all hope of success in favor of maintaining his own sanity by CC'ing you guys"
+ * Anna's case against Michael: he was talking to Devi even when Devi needed a break, and he wanted to destroy EA
+ * I remember at a party in 2015ish, asking Michael what else I should invest my money in, if not New Harvest/GiveWell, and his response was, "You"
+ * backstory of anti-EA sentiment: Ben's critiques, Sarah's "EA Has a Lying Problem"—Michael had been in the background
+ * If Anna had any actual dirt on him, you'd expect her to use it while trashing him in public, but her only example basically amounts to "he gave people career advice I disagree with"
+ * "I should have noticed earlier that my emotional dependence on "Michael says X" validation is self-undermining, because Michael says that the thing that makes me valuable is my ability to think independently."
+ * fairly destructive move
+ * https://everythingtosaveit.how/case-study-cfar/#attempting-to-erase-the-agency-of-everyone-who-agrees-with-our-position
+ http://benjaminrosshoffman.com/why-i-am-no-longer-supporting-reach/
+ He ... flatters people? He ... _didn't_ tell people to abandon their careers? What?!
+]
+
+
+[TODO SECTION: RIP Culture War thread, and defense against alt-right categorization
+
+I wasn't the only one whose life was being disrupted by political drama in early 2019. On 22 February, Scott Alexander [posted that the /r/slatestarcodex Culture War Thread was being moved](https://slatestarcodex.com/2019/02/22/rip-culture-war-thread/) to a new non–_Slate Star Codex_–branded subreddit in the hopes that it would help curb some of the harassment he had been receiving. The problem with hosting an open discussion, Alexander explained, wasn't the difficulty of moderating obvious spam or advocacy of violence.
+
+> Your annual reminder that Slate Star Codex is not and never was alt-right, every real stat shows as much, and the primary promoters of this lie are sociopaths who get off on torturing incredibly nice targets like Scott A.
+
+ * Suppose the one were to reply: "Using language in a way you dislike, openly and explicitly and with public focus on the language and its meaning, is not lying. The proposition you claim false (Scott Alexander's explicit advocacy of a white ethnostate?) is not what the speech is meant to convey—and this is known to everyone involved, it is not a secret. You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. Now, maybe as a matter of policy, you want to make a case for language like 'alt-right' being used a certain way. Well, that's a separate debate then. But you're not making a stand for Truth in doing so, and your opponents aren't tricking anyone or trying to."
+ * What direct falsehood is being asserted by Scott's detractors? I don't think anyone is claiming that, say, Scott identifies as alt-right (not even privately), any more than anyone is claiming that trans women have two X chromosomes. Sneer Club has been pretty explicit in their criticism
+ * examples:
+ * https://old.reddit.com/r/SneerClub/comments/atgejh/rssc_holds_a_funeral_for_the_defunct_culture_war/eh0xlgx/
+ * https://old.reddit.com/r/SneerClub/comments/atgejh/rssc_holds_a_funeral_for_the_defunct_culture_war/eh3jrth/
+ * that the Culture War thread harbors racists (&c.) and possibly that Scott himself is a secret racist, with respect to a definition of racism that includes the belief that there are genetically-mediated population differences in the distribution of socially-relevant traits and that this probably has decision-relevant consequences should be discussable somewhere.
+
+And this is just correct: e.g., "The Atomic Bomb Considered As Hungarian High School Science Fair Project" favorably cites Cochran et al.'s genetic theory of Ashkenazi achievement as "really compelling." Scott is almost certainly "guilty" of the category-membership that the speech against him is meant to convey—it's just that Sneer Club got to choose the category. The correct response to the existence of a machine-learning classifier that returns positive on both Scott Alexander and Richard Spencer is not that the classifier is "lying" (what would that even mean?), but that the classifier is not very useful for understanding Scott Alexander's effects on the world.
+
+Of course, Scott is great and we should defend him from the bastards trying to ruin his reputation, and it's plausible that the most politically convenient way to do that is to pound the table and call them lying sociopaths rather than engaging with the substance of their claims, much as how someone being tried under an unjust law might dishonestly plead "Not guilty" to save their own skin rather than tell the whole truth and hope for jury nullification.
+
+But political convenience comes at a dire cost to our common interest! There's a proverb you once failed to Google, which runs something like, "Once someone is known to be a liar, you might as well listen to the whistling of the wind."
+
+Similarly, once someone is known to vary the epistemic standards of their public statements for political convenience (even if their private, unshared thoughts continue to be consistently wise)—if they say categorizations can be lies when that happens to help their friends, but seemingly deny the possibility of categorizations being lies when that happens to make them look good ...
+
+Well, you're still somewhat better off listening to them than the whistling of the wind, because the wind in various possible worlds is presumably uncorrelated with most of the things you want to know about, whereas clever arguers who don't tell explicit lies are very constrained in how much they can mislead you. But it seems plausible that you might as well listen to any other arbitrary smart person with a blue check and 20K followers. I remain,
+ * (The claim is not that "Pronouns aren't lies" and "Scott Alexander is not a racist" are similarly misinformative; it's about the _response_)
+ * "the degree to which category boundaries are being made a conscious and deliberate focus of discussion": it's a problem when category boundaries are being made a conscious and deliberate focus of discussion as an isolated-demand-for-rigor because people can't get the conclusion they want on the merits; I only started focusing on the hidden-Bayesian-structure-of-cognition part after the autogynephilia discussions kept getting derailed
+ * I know you're very busy; I know your work's important—but it might be a useful exercise? Just for a minute, to think of what you would actually say if someone with social power _actually did this to you_ when you were trying to use language to reason about Something you had to Protect?
+]
+
+
+
+
+
+
+Without disclosing any _specific content_ from private conversations with Yudkowsky that may or may not have happened, I think I am allowed to say that our posse did not get the kind of engagement from Yudkowsky that we were hoping for. (That is, I'm Glomarizing over whether Yudkowsky just didn't reply, or whether he did reply and our posse was not satisfied with the response.)
+
+Michael said that it seemed important that, if we thought Yudkowsky wasn't interested, we should have common knowledge among ourselves that we consider him to be choosing to be a cult leader.
+
+Meanwhile, my email thread with Scott got started back up again, although I wasn't expecting anything to come out of it. I expressed some regret that all the times I had emailed him over the past couple years had been when I was upset about something (like psych hospitals, or—something else) and wanted something from him, which was bad, because it was treating him as a means rather than an end—and then, despite that regret, continued prosecuting the argument.
+
+One of Alexander's [most popular _Less Wrong_ posts ever had been about the noncentral fallacy, which Alexander called "the worst argument in the world"](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world): for example, those who crow that abortion is _murder_ (because murder is the killing of a human being), or that Martin Luther King, Jr. was a _criminal_ (because he defied the segregation laws of the South), are engaging in a dishonest rhetorical maneuver in which they're trying to trick their audience into attributing attributes of the typical "murder" or "criminal" onto what are very noncentral members of those categories.
+
+_Even if_ you're opposed to abortion, or have negative views about the historical legacy of Dr. King, this isn't the right way to argue. If you call Janie a _murderer_, that causes me to form a whole bunch of implicit probabilistic expectations—about Janie's moral character, about the suffering of the victim whose hopes and dreams were cut short, about Janie's relationship with the law, _&c._—most of which get violated when you subsequently reveal that the murder victim was a fetus.
+
+Thus, we see that Alexander's own "The Worst Argument in the World" is really complaining about the _same_ category-gerrymandering move that his "... Not Man for the Categories" comes out in favor of. We would not let someone get away with declaring, "I ought to accept an unexpected abortion or two deep inside the conceptual boundaries of what would normally not be considered murder if it'll save someone's life."
+
+... Scott still didn't get it. He said that he didn't see why he shouldn't accept one unit of categorizational awkwardness in exchange for sufficiently large utilitarian benefits. I started drafting a long reply—but then I remembered that in recent discussion with my posse about what we might have done wrong in our attempted outreach to Yudkowsky, the idea had come up that in-person meetings are better for updateful disagreement-resolution. Would Scott be up for meeting in person some weekend? Non-urgent. Ben would be willing to moderate, unless Scott wanted to suggest someone else, or no moderator.
+
+... Scott didn't want to meet. At this point, I considered resorting to the tool of cheerful prices again, which I hadn't yet used against Scott—to say, "That's totally understandable! Would a financial incentive change your decision? For a two-hour meeting, I'd be happy to pay up to $4000 to you or your preferred charity. If you don't want the money, then sure, yes, let's table this. I hope you're having a good day." But that seemed sufficiently psychologically coercive and socially weird that I wasn't sure I wanted to go there. I emailed my posse asking what they thought—and then added that maybe they shouldn't reply until Friday, because it was Monday, and I really needed to focus on my dayjob that week.
+
+This is the part where I began to ... overheat. I tried ("tried") to focus on my dayjob, but I was just _so angry_. Did Scott _really_ not understand the rationality-relevant distinction between "value-dependent categories as a result of only running your clustering algorithm on the subspace of the configuration space spanned by the variables that are relevant to your decisions" (as explained by the _dagim_/water-dwellers _vs._ fish example) and "value-dependent categories _in order to not make my friends sad_"? I thought I was pretty explicit about this? Was Scott _really_ that dumb?? Or was it that he was only verbal-smart, and this was the sort of thing that only makes sense if you've ever been good at linear algebra?? Did I need to write a post explaining just that one point in mathematical detail? (With executable code and a worked example with entropy calculations.)
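+
+(Something like the following toy sketch—with features I invented for illustration, not a rigorous demonstration—is the distinction in question: the same clustering algorithm, run on the full configuration space _vs._ only the decision-relevant subspace, yields different category systems, and neither run is "lying.")
+
+```python
+import numpy as np
+
+# Toy "configuration space" (features made up for illustration).
+# Each creature is a point: [lives_in_water, has_gills, lays_eggs].
+creatures = {
+    "salmon": np.array([1.0, 1.0, 1.0]),
+    "trout":  np.array([1.0, 1.0, 1.0]),
+    "whale":  np.array([1.0, 0.0, 0.0]),
+    "cow":    np.array([0.0, 0.0, 0.0]),
+}
+
+def two_means(points, dims, iters=10):
+    """k=2 means clustering, restricted to the feature subspace `dims`."""
+    X = np.array([p[dims] for p in points])
+    centroids = X[[0, -1]].copy()  # initialize from two distinct points
+    for _ in range(iters):
+        labels = np.array(
+            [np.argmin([np.linalg.norm(x - c) for c in centroids]) for x in X]
+        )
+        for k in range(2):
+            if (labels == k).any():
+                centroids[k] = X[labels == k].mean(axis=0)
+    return labels
+
+names, points = list(creatures), list(creatures.values())
+
+# On the full space, the whale clusters with the cow (mammals) ...
+print(dict(zip(names, two_means(points, dims=[0, 1, 2]))))
+# ... but if your decisions only depend on habitat (can you catch it
+# with a net?), clustering on that subspace groups the whale with the
+# fish—the dagim/water-dwellers category. Value-dependent, not arbitrary.
+print(dict(zip(names, two_means(points, dims=[0]))))
+```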
+
+My dayjob boss made it clear that he was expecting me to have code for my current Jira tickets by noon the next day, so I resigned myself to stay at the office late to finish that.
+
+But I was just in so much (psychological) pain. Or at least—as I noted in one of a series of emails to my posse that night—I felt motivated to type the sentence, "I'm in so much (psychological) pain." I'm never sure how to interpret my own self-reports, because even when I'm really emotionally trashed (crying, shaking, randomly yelling, _&c_.), I think I'm still noticeably _incentivizable_: if someone were to present a credible threat (like slapping me and telling me to snap out of it), then I would be able to calm down: there's some sort of game-theory algorithm in the brain that subjectively feels genuine distress (like crying or sending people too many hysterical emails) but only when it can predict that it will be either rewarded with sympathy or at least tolerated. (Kevin Simler: [tears are a discount on friendship](https://meltingasphalt.com/tears/).)
+
+I [tweeted a Sequences quote](https://twitter.com/zackmdavis/status/1107874587822297089) to summarize how I felt (the mention of @ESYudkowsky being to attribute credit; I figured Yudkowsky had enough followers that he probably wouldn't see a notification):
+
+> "—and if you still have something to protect, so that you MUST keep going, and CANNOT resign and wisely acknowledge the limitations of rationality— [1/3]
+>
+> "—then you will be ready to start your journey[.] To take sole responsibility, to live without any trustworthy defenses, and to forge a higher Art than the one you were once taught. [2/3]
+>
+> "No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand." —@ESYudkowsky (https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science) [end/3]
+
+Only it wasn't quite appropriate. The quote is about failure resulting in the need to invent new methods of rationality, better than the ones you were taught. But ... the methods I had been taught were great! I don't have a pressing need to improve on them! I just couldn't cope with everyone else having _forgotten!_
+
+I did, eventually, get some dayjob work done that night, but I didn't finish the whole thing my manager wanted done by the next day, and at 4 _a.m._, I concluded that I needed sleep, the lack of which had historically been very dangerous for me (being the trigger for my [2013](http://zackmdavis.net/blog/2013/04/prodrome/) and [2017](/2017/Mar/fresh-princess/) psychotic breaks and subsequent psych imprisonments). We didn't want another bad outcome like that; we really didn't. There was a couch in the office, and probably another four hours until my coworkers started to arrive. The thing I needed to do was just lie down on the couch in the dark and have faith that sleep would come. Meeting my manager's deadline wasn't _that_ important. When people came in to the office, I might ask for help getting an Uber home? Or help buying melatonin? The important thing was to be calm.
+
+I sent an email explaining this to Scott and my posse and two other friends (Subject: "predictably bad ideas").
+
+Lying down didn't work. So at 5:26 _a.m._, I sent an email to Scott, cc'ing my posse plus Anna, about why I was so mad (both senses). I had a better draft sitting on my desktop at home, but since I was here and couldn't sleep, I might as well type this version (Subject: "five impulsive points, hastily written because I just can't even (was: Re: predictably bad ideas)"). Scott had been continuing to insist that it was OK to gerrymander category boundaries for trans people's mental health, but there were a few things I didn't understand.
+
+If creatively reinterpreting the meanings of words because the natural interpretation would make people sad was OK ... why didn't that just generalize to an argument in favor of _outright lying_ when the truth would make people sad? The mind games seemed much crueler to me than a simple lie.
+
+Also, if "mental health benefits for trans people" mattered so much, then why didn't _my_ mental health matter? Wasn't I trans, sort of? Getting shut down by appeal-to-utilitarianism (!?!?) when I was trying to use reason to make sense of the world was observably really bad for my sanity! Did that matter at all?
+
+Also, Scott had asked me if it wouldn't be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and I had rejected it because of gender stuff? But the _original reason_ it had ever seemed _remotely_ plausible that we would create Utopia forever wasn't "because we're us, the self-designated world-saving good guys", but because we were going to perfect an art of _systematically correct reasoning_. If we weren't going to do systematically correct reasoning because that would make people sad, then that undermined the _reason_ it was plausible that we would create Utopia forever; you can't just forfeit the mandate of Heaven like that and still expect to rule China.
+
+Also, Scott had proposed a super-Outside View of the culture war as an evolutionary process that produces memes optimized to trigger PTSD syndromes in people, and suggested that I think of _that_ as what was happening to me. But, depending on how much credence Scott put in social proof, mightn't the fact that I had managed to round up this whole posse to help me repeatedly argue with (or harass) Yudkowsky shift his estimate over whether my concerns had some objective merit that other people could see, too? It could simultaneously be the case that I had the culture-war PTSD that he proposed, _and_ that my concerns had merit.
+
+[TODO: Michael jumps in to help, I rebuff him, Michael says WTF and calls me, I take a train home, Alicorn visits with her son—I mean, her son at the time]
+
+(Incidentally, the code that I wrote intermittently between 11 _p.m._ and 4 _a.m._ was a horrible bug-prone mess, and the company has been paying for it ever since, every time someone needs to modify that function and finds it harder to make sense of than it would be if I had been less emotionally overwhelmed in March 2019 and written something sane instead.)
+
+I think at some level, I wanted Scott to know how frustrated I was about his use of "mental health for trans people" as an Absolute Denial Macro. But then when Michael started advocating on my behalf, I started to minimize my claims because I had a generalized attitude of not wanting to sell myself as a victim. (Michael seemed to have a theory that people will only change their bad behavior when they see a victim who is being harmed.)
+
+I supposed that, in Michael's worldview, aggression is more honest than passive-aggression. That seemed obviously true, but I was psychologically limited in how much aggression I was willing to deploy against my friends. (And particularly Yudkowsky, who I still hero-worshipped.) But clearly, the tension between "I don't want to do too much social aggression" and "losing the Category War within the rationalist community is _absolutely unacceptable_" was causing me to make wildly inconsistent decisions. (Emailing Scott at 4 a.m., and then calling Michael "aggressive" when he came to defend me was just crazy.)
+
+Was the answer just that I needed to accept that there wasn't such a thing in the world as a "rationalist community"? (Sarah had told me as much two years ago, at BABSCon, and I just hadn't made the corresponding mental adjustments.)
+
+On the other hand, a possible reason to be attached to the "rationalist" brand name and social identity that wasn't just me being stupid was that _the way I talk_ had been trained really hard on this subculture for _ten years_. Most of my emails during this whole campaign had contained multiple Sequences or _Slate Star Codex_ links that I could just expect people to have read. I could spontaneously use the phrase "Absolute Denial Macro" in conversation and expect to be understood. That's a massive "home field advantage." If I just gave up on "rationalists" being a thing, and went out into the world to make intellectual friends elsewhere (by making friends with _Quillette_ readers or arbitrary University of Chicago graduates), then I would lose all that accumulated capital.
+
+The language I spoke was _mostly_ educated American English, but I relied on subculture dialect for a lot. My sister has a chemistry doctorate from MIT (and so speaks the language of STEM intellectuals generally), and when I showed her ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), she reported finding it somewhat hard to read, likely because I casually use phrases like "thus, an excellent [motte](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/)", and expect to be understood without the reader taking 10 minutes to read the link. That essay, which was me writing from the heart in the words that came most naturally to me, could not be published in _Quillette_. The links and phraseology were just too context-bound.
+
+Maybe that's why I felt like I had to stand my ground and fight a culture war to preserve the world I was made in, even though the contradiction between the war effort and my general submissiveness had me making crazy decisions.
+
+[TODO SECTION: proton concession
+ * as it happened, the next day, Wednesday, we got this: https://twitter.com/ESYudkowsky/status/1108277090577600512 (Why now? maybe he saw the "tools have shattered in their hand"; maybe the Quillette article just happened to be timely)
+ * A concession! In the war frame, you'd think this would make me happy
+ * "I did you a favor by Tweeting something obliquely favorable to your object-level crusade, and you repay me by criticizing me? How dare you?!" My model of Sequences-era Eliezer-2009 would never do that, because the species-typical arguments-as-social-exchange
+ * do you think Eliezer is thinking, "Fine, if I tweet something obliquely favorable towards Zack's object-level agenda, maybe Michael's gang will leave me alone now"
+ * If there's some other reason you suspect there might be multiple species of dysphoria, but you tell people your suspicion is because dysphoria has more than one proton, then you're still kind of misinforming them for political reasons, which is the generalized problem that we're worried about?
+ * Michael's take: not worth the digression; we need to confront the actual crisis
+ * We need to figure out how to win against bad faith arguments
+]
+
+[TODO: Jessica joins the coalition; she tell me about her time at MIRI (link to Zoe-piggyback and Occupational Infohazards); Michael said that me and Jess together have more moral authority]
+
+[TODO: wrapping up with Scott; Kelsey; high and low Church https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/]
+
+[TODO: Ben reiterated that the most important thing was explaining why I've written them off; self-promotion imposes a cost on others; Jessica on creating clarity; Michael on less precise is more violent]
+
+[TODO: after some bouncing off the posse, what was originally an email draft became a public _Less Wrong_ post, "Where to Draw the Boundaries?" (note, plural)
+ * Wasn't the math overkill?
+ * math is important for appeal to principle—and as intimidation https://slatestarcodex.com/2014/08/10/getting-eulered/
+ * four simulacra levels got kicked off here
+ * no politics! just philosophy!
+ * Ben on Michael on whether we are doing politics; "friendship, supplication, and economics"
+ * I could see that I'm including subtext and expecting people to only engage with the text, but if we're not going to get into full-on gender-politics on Less Wrong, but gender politics is motivating an epistemology error, I'm not sure what else I'm supposed to do! I'm pretty constrained here!
+ * I had already poisoned the well with "Blegg Mode" the other month, bad decision
+ * We lost?! How could we lose??!!?!?
+]
+
+------
+
+
+[TODO: I was floored; math and wellness month
+ Anna doesn't want money from me
+ scuffle on "Yes Requires the Possibility of No"
+ LessWrong FAQ https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk ]