+As such, we _shouldn't_ think that there are probably multiple kinds of gender dysphoria _because things are made of protons_ (?!?). If anything, _a priori_ reasoning about the cognitive function of categorization should actually cut in the other direction, (mildly) _against_ rather than in favor of multi-type theories: you only want to add more categories to your theory [if they can pay for their additional complexity with better predictions](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length). If you believe in Blanchard–Bailey–Lawrence's two-type taxonomy of MtF, or Littman's proposed rapid-onset type, it should be on the _empirical_ merits, not because multi-type theories are _a priori_ more likely to be true.
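+
+(To make the "pay for their additional complexity" criterion concrete: here's a minimal sketch in Python, using made-up unimodal toy data, a crude hard split at the median, and BIC as a stand-in for message length, not anything from the actual taxonomy literature. A two-category model gets extra parameters to fit with, but a description-length-style score only prefers it when the improved fit covers the price of those parameters.)
+
+```python
+import numpy as np
+from scipy import stats
+
+rng = np.random.default_rng(0)
+
+# Toy data drawn from a single Gaussian (one "true" category).
+data = rng.normal(loc=0.0, scale=1.0, size=200)
+
+def bic(log_likelihood, n_params, n_obs):
+    # Bayesian information criterion: lower is better. The
+    # n_params * log(n_obs) term is the complexity "price".
+    return n_params * np.log(n_obs) - 2.0 * log_likelihood
+
+# One-category model: a single fitted Gaussian (2 parameters).
+ll_one = stats.norm.logpdf(data, data.mean(), data.std()).sum()
+
+# Two-category model: hard-split at the median, fit a Gaussian to
+# each half (4 parameters), and charge one bit per point for its
+# category label.
+lo, hi = data[data < np.median(data)], data[data >= np.median(data)]
+ll_two = (stats.norm.logpdf(lo, lo.mean(), lo.std()).sum()
+          + stats.norm.logpdf(hi, hi.mean(), hi.std()).sum()
+          + data.size * np.log(0.5))
+
+print("one category:  ", bic(ll_one, 2, data.size))  # wins here
+print("two categories:", bic(ll_two, 4, data.size))
+# On genuinely bimodal data, the two-category model's better fit
+# would cover its complexity cost, and the comparison would flip.
+```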
+
+Had Yudkowsky been thinking that maybe if he Tweeted something favorable to my agenda, then the rest of Michael's gang and I would be satisfied and leave him alone?
+
+But ... if there's some _other_ reason you suspect there might be multiple species of dysphoria, but you _tell_ people your suspicion is because dysphoria has more than one proton, you're still misinforming people for political reasons, which was the _general_ problem we were trying to alert Yudkowsky to. (Someone who trusted you as a source of wisdom about rationality might try to apply your _fake_ "everything more complicated than protons tends to come in varieties" rationality lesson in some other context, and get the wrong answer.) Inventing fake rationality lessons in response to political pressure is _not okay_, and the fact that in this case the political pressure happened to be coming from _me_, didn't make it okay.
+
+I asked the posse if this analysis was worth sending to Yudkowsky. Michael said it wasn't worth the digression. He asked if I was comfortable generalizing from Scott's behavior, and what others had said about fear of speaking openly, to the assumption that something similar was going on with Eliezer. If so, then now that we had common knowledge, we needed to confront the actual crisis, which was that dread was tearing apart old friendships and causing fanatics to betray everything that they ever stood for while its existence was still being denied.
+
+Another thing that happened that week was that former MIRI researcher Jessica Taylor joined our posse (being at an in-person meeting with Ben and Sarah and another friend on the seventeenth, and getting tagged in subsequent emails). Significantly for political purposes, Jessica is trans. We didn't have to agree up front on all gender issues for her to see the epistemology problem with "... Not Man for the Categories", and to say that maintaining a narcissistic fantasy by controlling category boundaries wasn't what _she_ wanted, as a trans person. (On the seventeenth, when I lamented the state of a world that incentivized us to be political enemies, her response was, "Well, we could talk about it first.") Michael said that Jessica and I together had more moral authority than either of us alone.
+
+As it happened, I ran into Scott on the train that Friday, the twenty-second. He said that he wasn't sure why the oft-repeated moral of "A Human's Guide to Words" had been "You can't define a word any way you want" rather than "You _can_ define a word any way you want, but then you have to deal with the consequences."
+
+Ultimately, I think this was a pedagogy decision that Yudkowsky had gotten right back in 'aught-eight. If you write your summary slogan in relativist language, people predictably take that as license to believe whatever they want without having to defend it. Whereas if you write your summary slogan in objectivist language—so that people know they don't have social permission to say that "it's subjective so I can't be wrong"—then you have some hope of sparking useful thought about the _exact, precise_ ways that _specific, definite_ things are _in fact_ relative to other specific, definite things.
+
+I told Scott I would send him one more email with a piece of evidence about how other "rationalists" were thinking about the categories issue, and give my commentary on the parable about orcs, and then the present thread would probably drop there.
+
+On Discord in January, Kelsey Piper had told me that everyone else experienced their disagreement with me as being about where the joints are and which joints are important, where usability for humans was a legitimate criterion for importance, and that it was annoying that I thought they didn't believe in carving reality at the joints at all and thought that categories should be whatever makes people happy.
+
+I [didn't want to bring it up at the time because](https://twitter.com/zackmdavis/status/1088459797962215429) I was so overjoyed that the discussion was actually making progress on the core philosophy-of-language issue, but ... Scott _did_ seem to be pretty explicit that his position was about happiness rather than usability? If Kelsey _thought_ she agreed with Scott, but actually didn't, that was kind of bad for our collective sanity, wasn't it?
+
+As for the parable about orcs, I thought it was significant that Scott chose to tell the story from the standpoint of non-orcs deciding what [verbal behaviors](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) to perform while orcs are around, rather than the standpoint of the _orcs themselves_. For one thing, how do you _know_ that serving evil-Melkor is a life of constant torture? Is it at all possible, in the bowels of Christ, that someone has given you _misleading information_ about that? Moreover, you _can't_ just give an orc a clever misinterpretation of an oath and have them believe it. First you have to [cripple their _general_ ability](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) to correctly interpret oaths, for the same reason that you can't get someone to believe that 2+2=5 without crippling their _general_ ability to do arithmetic. We weren't talking about a little "white lie" that the listener will never get to see falsified (like telling someone their dead dog is in heaven); the orcs _already know_ the text of the oath, and you have to break their ability to _understand_ it. Are you willing to permanently damage an orc's ability to reason, in order to save them pain? For some sufficiently large amount of pain, surely. But this isn't a choice to make lightly—and the choices people make to satisfy their own consciences, don't always line up with the volition of their alleged beneficiaries. We think we can lie to save others from pain, without ourselves _wanting to be lied to_. But behind the veil of ignorance, it's the same choice!
+
+I _also_ had more to say about philosophy of categories: I thought I could be more rigorous about the difference between "caring about predicting different variables" and "caring about consequences", in a way that Eliezer would _have_ to understand even if Scott didn't. (Scott had claimed that he could use gerrymandered categories and still be just as good at making predictions—but that's just not true if we're talking about the _internal_ use of categories as a [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms), rather than mere verbal behavior: it's always easy to _say_ "_X_ is a _Y_" for arbitrary _X_ and _Y_ if the stakes demand it, but if you're _actually_ using that concept of _Y_ internally, that does have effects on your world-model.)
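+
+(A minimal sketch of the "internal use" point in Python, with made-up toy data rather than anything from the actual dispute: suppose an agent predicts an entity's unobserved features by looking up the average among that entity's category-mates. Categories drawn at the joints of the statistical structure make those predictions sharp; categories gerrymandered for reasons unrelated to prediction make them lossy.)
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+# Toy world: two latent clusters, each shaping two correlated features.
+n = 2000
+cluster = rng.integers(0, 2, size=n)                  # hidden structure
+x = rng.normal(loc=4.0 * cluster, scale=1.0, size=n)  # observed feature
+y = rng.normal(loc=4.0 * cluster, scale=1.0, size=n)  # feature to predict
+
+def prediction_error(category):
+    # "Internal" use of a category: predict each entity's y as the mean
+    # y of its category-mates, then score the mean squared error.
+    predictions = np.empty(n)
+    for c in np.unique(category):
+        predictions[category == c] = y[category == c].mean()
+    return np.mean((y - predictions) ** 2)
+
+# Joint-carving categories: drawn at the natural gap in x.
+carved = (x > 2.0).astype(int)
+
+# Gerrymandered categories: drawn for reasons unrelated to prediction
+# (here, an arbitrary coin flip per entity).
+gerrymandered = rng.integers(0, 2, size=n)
+
+print("carved MSE:       ", prediction_error(carved))        # ≈ 1
+print("gerrymandered MSE:", prediction_error(gerrymandered)) # ≈ 5
+```
+
+The gerrymandered categorizer can still _say_ "_X_ is a _Y_" whenever the stakes demand it; what it can't do is use its _Y_s to anticipate experiences as well as the carved categories do.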
+
+But after consultation with the posse, I concluded that further email prosecution was not useful at this time; the philosophy argument would work better as a public _Less Wrong_ post. So my revised Category War to-do list was:
+
+ * Send the brief wrapping-up/end-of-conversation email to Scott (with the Discord anecdote with Kelsey and commentary on the orc story).