+<p class="flower-break">⁕ ⁕ ⁕</p>
+
+If you were Alice, and a _solid supermajority_ of your incredibly smart, incredibly philosophically sophisticated friend group _including Eliezer Yudkowsky_ (!!!) seemed to behave like Bob (and reaped microhedonic social rewards for it in the form of, _e.g._, hundreds of Twitter likes), that would be a _pretty worrying_ sign about your friends' ability to accomplish intellectually hard things like AI alignment, right? Even if there isn't any pressing practical need to discriminate between dogs and cats, the _problem_ is that Bob is [_selectively_](http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) using his sophisticated philosophy-of-language knowledge to try to _undermine Alice's ability to use language to make sense of the world_, even though Bob _obviously knows goddamned well what Alice was trying to say_; it's incredibly obfuscatory in a way that people—the _same_ people—would not tolerate in almost _any_ other context.
+
+Imagine an Islamic theocracy in which one Meghan Murphee had recently gotten kicked off the dominant microblogging platform for speaking disrespectfully about the prophet Muhammad. Suppose that [Yudkowsky's analogue in that world](/2020/Aug/yarvin-on-less-wrong/) then posted that Murphee's supporters were ontologically confused to object on free inquiry grounds: [saying "peace be upon him" after the name of the prophet Muhammad](https://en.wikipedia.org/wiki/Islamic_honorifics#Applied_to_Muhammad_and_his_family) is a _speech act_, not a statement of fact. In banning Murphee for repeatedly speaking about the prophet Muhammad (peace be upon him) as if he were just some guy, the platform was merely ["enforcing a courtesy standard"](https://twitter.com/ESYudkowsky/status/1067302082481274880) (in the words of our world's Yudkowsky); Murphee wasn't being forced to _lie_.
+
+I think the atheists of our world, including Yudkowsky, would not have any trouble seeing the problem with this scenario, nor hesitate to agree that it _is_ a problem for that Society's rationality. It is, of course, true as an isolated linguistics fact that saying "peace be upon him" is a speech act rather than a statement of fact, but it would be _bizarre_ to condescendingly point this out _as if it were the crux of debates about religious speech codes_. The _function_ of the speech act is to signal the speaker's affirmation of Muhammad's divinity. That's _why_ the Islamic theocrats want to mandate that everyone says it: it's a lot harder for atheism to get any traction if no one is allowed to _talk_ like an atheist.
+
+And that's exactly why trans advocates want to mandate against misgendering people on social media: it's harder for trans-exclusionary ideologies to get any traction if no one is allowed to _talk_ like someone who believes that sex (sometimes) matters and gender identity does not.
+
+Of course, such speech restrictions aren't necessarily "irrational", depending on your goals! If you just don't think "free speech" should go that far—if you _want_ to suppress atheism or gender-critical feminism with an iron fist—speech codes are a perfectly fine way to do it! And _to their credit_, I think most theocrats and trans advocates are intellectually honest about the fact that this is what they're doing: atheists or transphobes are _bad people_ (the argument goes) and we want to make it harder for them to spread their lies or their hate.
+
+In contrast, by claiming to be "not taking a stand for or against any Twitter policies" while accusing people who oppose the policy of being ontologically confused, Yudkowsky was being less honest than the theocrat or the activist: of _course_ the point of speech codes is to suppress ideas! Given that the distinction between facts and policies is so obviously _not anyone's crux_—the smarter people in the "anti-trans" faction already know that, and the dumber people in the faction wouldn't change their alignment if they were taught—it's hard to see what the _point_ of harping on the fact/policy distinction would be, _except_ to be seen as implicitly taking a stand for the "pro-trans" faction, while [putting on a show of being politically "neutral."](https://www.lesswrong.com/posts/jeyvzALDbjdjjv5RW/pretending-to-be-wise)
+
+It makes sense that Yudkowsky might perceive political constraints on what he might want to say in public—especially when you look at [what happened to the _other_ Harry Potter author](https://en.wikipedia.org/wiki/Political_views_of_J._K._Rowling#Transgender_rights). (Despite my misgivings, this blog _was_ still published under a pseudonym at the time; it would have been hypocritical of me to accuse someone of cowardice about what they're willing to attach their real name to.)
+
+But if Yudkowsky didn't want to get into a distracting fight about a politically-charged topic, then maybe the responsible thing to do would have been to just not say anything about the topic, rather than engaging with the _stupid_ version of the opposition and [stonewalling](https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters) with "That's a policy question" when people tried to point out the problem?!
+
+------
+
+... I didn't have all of that criticism collected and carefully written up on 28 November 2018. But that, basically, is why I _flipped out_ when I saw that Twitter thread. If the "rationalists" didn't [click](https://www.lesswrong.com/posts/R3ATEWWmBhMhbY2AL/that-magical-click) on the autogynephilia thing, that was disappointing, but forgivable. If the "rationalists", on Scott Alexander's authority, were furthermore going to get our own philosophy of language wrong over this, that was—I don't want to say _forgivable_ exactly, but it was—tolerable. I had learned from my misadventures the previous year that I had been wrong to trust "the community" as a reified collective and put it on a pedestal—that had never been a reasonable mental stance in the first place.
+
+But trusting Eliezer Yudkowsky—whose writings, more than any other single influence, had made me who I am—_did_ seem reasonable. If I put him on a pedestal, it was because he had earned the pedestal, for supplying me with my criteria for how to think—including, as a trivial special case, [how to think about what things to put on pedestals](https://www.lesswrong.com/posts/YC3ArwKM8xhNjYqQK/on-things-that-are-awesome).
+
+So if the rationalists were going to get our own philosophy of language wrong over this _and Eliezer Yudkowsky was in on it_ (!!!), that was intolerable, inexplicable, incomprehensible—like there _wasn't a real world anymore_.
+
+At the dayjob retreat, I remember going downstairs to impulsively confide in a senior engineer, an older bald guy who exuded masculinity, who you could tell by his entire manner and being was not infected by the Berkeley mind-virus, no matter how loyally he voted Democrat. I briefly explained the situation to him—not just about the immediate impetus of this Twitter thread, but this whole _thing_ of the past couple years where my entire social circle just suddenly decided that guys like me could be women by means of saying so. He was noncommittally sympathetic; he told me an anecdote about him accepting a trans person's correction of his pronoun usage, with the thought that different people have their own beliefs, and that's OK.
+
+If Yudkowsky was _already_ stonewalling his Twitter followers, entering the thread myself didn't seem likely to help. (Also, I hadn't intended to talk about gender on that account yet, although that seemed relatively unimportant in light of the present cause for flipping out.)
+
+It seemed better to try to clear this up in private. I still had Yudkowsky's email address, last used when [I had offered to pay to talk about his theory of MtF two years before](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price). I felt bad bidding for his attention over my gender thing _again_—but I had to do _something_. Hands trembling, I sent him an email asking him to read my ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), suggesting that it may qualify as an answer to [his question about "a page [he] could read to find a non-confused exclamation of how there's scientific truth at stake"](https://twitter.com/ESYudkowsky/status/1067482047126495232)—and that, because I cared very much about correcting what I claimed were confusions in my rationalist subculture, I would be happy to pay up to $1000 for his time—and that, if he liked the post, he might consider Tweeting a link—and that I was cc'ing my friends Anna Salamon and Michael Vassar as a character reference (Subject: "another offer, $1000 to read a ~6500 word blog post about (was: Re: Happy Price offer for a 2 hour conversation)"). Then I texted Anna and Michael begging them to chime in and vouch for my credibility.
+
+The monetary offer, admittedly, was awkward: I included another paragraph clarifying that any payment was only to get his attention, and not _quid pro quo_ advertising, and that if he [didn't trust his brain circuitry](https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans) not to be corrupted by money, then he might want to reject the offer on those grounds and only read the post if he expected it to be genuinely interesting.
+
+Again, I realize this must seem weird and cultish to any normal people reading this. (Paying some blogger you follow one grand just to _read_ one of your posts? What? Why? Who _does_ that?) To this, I again refer to [the reasons justifying my 2016 cheerful price offer](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price-reasons)—and add that, along with tagging in Anna and Michael, whom I thought Yudkowsky respected, it was a way to signal that I _really really really didn't want to be ignored_, which I assumed was the default outcome. An ordinary programmer such as me was a mere _worm_ in the presence of the great Eliezer Yudkowsky. I wouldn't have had the audacity to contact him at _all_, about _anything_, if I didn't have [Something to Protect](https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect).
+
+Anna didn't reply, but I apparently did interest Michael, who chimed in on the email thread to Yudkowsky. We had a long phone conversation the next day lamenting how the "rationalists" were dead as an intellectual community.
+
+As for the attempt to intervene on Yudkowsky—here I need to make a digression about the constraints I'm facing in telling this Whole Dumb Story. _I_ would prefer to just tell this Whole Dumb Story as I would to my long-neglected Diary—trying my best at the difficult task of explaining _what actually happened_ during a very important part of my life, without thought of concealing anything.
+
+(If you are silent about your pain, _they'll kill you and say you enjoyed it_.)
+
+Unfortunately, a lot of _other people_ seem to have strong intuitions about "privacy", which bizarrely impose constraints on what _I'm_ allowed to say about my own life: in particular, it's considered unacceptable to publicly quote or summarize someone's emails from a conversation that they had reason to expect to be private. I feel obligated to comply with these widely-held privacy norms, even if _I_ think they're paranoid and [anti-social](http://benjaminrosshoffman.com/blackmailers-are-privateers-in-the-war-on-hypocrisy/). (This secrecy-hating trait probably correlates with the autogynephilia blogging; someone otherwise like me who believed in privacy wouldn't be telling you this Whole Dumb Story.)
+
+So I would _think_ that the commonsense privacy-norm-compliance rule I should hold myself to while telling this Whole Dumb Story is that I obviously have an inalienable right to blog about _my own_ actions, but that I'm not allowed to directly refer to private conversations with named individuals in cases where I don't think I'd be able to get the consent of the other party. (I don't think I'm required to go through the ritual of asking for consent in cases where the revealed information couldn't reasonably be considered "sensitive", or if I know the person doesn't have hangups about this weird "privacy" thing.) In this case, I'm allowed to talk about _me_ emailing Yudkowsky (because that was _my_ action), but I'm not allowed to talk about anything he might have said in reply, or whether he replied.
+
+Unfortunately, there's a potentially serious loophole in the commonsense rule: what if some of my actions (which I would have _hoped_ to have an inalienable right to blog about) _depend on_ content from private conversations? You can't, in general, only reveal one side of a conversation.
+
+Suppose Alice messages Bob at 5 _p.m._, "Can you come to the party?", and also, separately, that Alice messages Bob at 6 _p.m._, "Gout isn't contagious." Should Alice be allowed to blog about the messages she sent at 5 _p.m._ and 6 _p.m._, because she's only describing her own messages, and not confirming or denying whether Bob replied at all, let alone quoting him?
+
+I think commonsense privacy-norm-adherence intuitions actually say _No_ here: the text of Alice's messages makes it too easy to guess that sometime between 5 and 6, Bob probably said that he couldn't come to the party because he has gout. It would seem that Alice's right to talk about her own actions in her own life _does_ need to take into account some commonsense judgement of whether that leaks "sensitive" information about Bob.
+
+In the substory (of my Whole Dumb Story) that follows, I'm going to describe several times when I and others emailed Yudkowsky to try to argue with what he said in public, without saying anything about whether Yudkowsky replied, or what he might have said if he did reply. I maintain that I'm within my rights here, because I think commonsense judgement will agree that me talking about the arguments _I_ made, does not in this case leak any sensitive information about the other side of a conversation that may or may not have happened: I think the story comes off relevantly the same whether Yudkowsky didn't reply at all (_e.g._, because he was too busy with more existentially important things to check his email), or whether he replied in a way that I found sufficiently unsatisfying as to occasion the further emails with followup arguments that I describe; I don't think I'm leaking any sensitive bits that aren't already easy to infer from what's been said (and not said) in public. (Talking about later emails _does_ rule out the possible world where Yudkowsky had said, "Please stop emailing me," because I would have respected that, but the fact that he didn't say that isn't "sensitive".)
+
+It seems particularly important to lay out these judgements about privacy norms in connection to my attempts to contact Yudkowsky, because part of what I'm trying to accomplish in telling this Whole Dumb Story is to deal reputational damage to Yudkowsky, which I claim is deserved. (We want reputations to track reality. If you see Carol exhibiting a pattern of intellectual dishonesty, and she keeps doing it even after you try talking to her about it privately, you might want to write a blog post describing the pattern in detail—not to _hurt_ Carol, particularly, but so that everyone _else_ can make higher-quality decisions about whether they should believe the things that Carol says.) Given that motivation of mine, it seems important that I only try to hang Yudkowsky with the rope of what he said in public, where you can click the links and read the context for yourself. In the substory that follows, I _also_ describe some of my correspondence with Scott Alexander, but that doesn't seem sensitive in the same way, because I'm not particularly trying to deal reputational damage to Alexander in the same way. (Not because Scott performed well, but because one wouldn't really have _expected_ Scott to perform well in this situation; Alexander's reputation isn't so direly in need of correction.)
+
+In accordance with the privacy-norm-adherence policy just described, I don't think I should say whether Yudkowsky replied to Michael's and my emails, nor (again) whether he accepted the cheerful price money, because any conversation that may or may not have occurred would have been private. But what I _can_ say, because it was public, is that we saw [this addition to the Twitter thread](https://twitter.com/ESYudkowsky/status/1068071036732694529):
+
+> I was sent this (by a third party) as a possible example of the sort of argument I was looking to read: [http://unremediatedgender.space/2018/Feb/the-categories-were-made-for-man-to-make-predictions/](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/). Without yet judging its empirical content, I agree that it is not ontologically confused. It's not going "But this is a MAN so using 'she' is LYING."
+
+Look at that! The great Eliezer Yudkowsky said that my position is "not ontologically confused." That's _probably_ high praise coming from him!
+
+You might think that that should have been the end of the story. Yudkowsky denounced a particular philosophical confusion, I already had a related objection written up, and he publicly acknowledged my objection as not being the confusion he was trying to police. I _should_ be satisfied, right?
+
+I wasn't, in fact, satisfied. This little "not ontologically confused" clarification buried deep in the replies was _much less visible_ than the bombastic, arrogant top level pronouncement insinuating that resistance to gender-identity claims _was_ confused. (1 Like on this reply, _vs._ 140 Likes/21 Retweets on start of thread.) I expected that the typical reader who had gotten the impression from the initial thread that Yudkowsky thought that gender-identity skeptics didn't have a leg to stand on, would not, actually, be disabused of this impression by the existence of this little follow-up. Was it greedy of me to want something _louder_?
+
+Greedy or not, I wasn't done flipping out. On 1 December 2018, I wrote to Scott Alexander (cc'ing a few other people), asking if there was any chance of an _explicit_ and _loud_ clarification or partial-retraction of ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (Subject: "super-presumptuous mail about categorization and the influence graph"). _Forget_ my boring whining about the autogynephilia/two-types thing, I said—that's a complicated empirical claim, and _not_ the key issue.
+
+The _issue_ was that category boundaries are not arbitrary (if you care about intelligence being useful): you want to [draw your category boundaries such that](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) things in the same category are similar in the respects that you care about predicting/controlling, and you want to spend your [information-theoretically limited budget](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes) of short words on the simplest and most wide-rangingly useful categories.
+
+It was true that [the reason _I_ was continuing to freak out about this](/2019/Jul/the-source-of-our-power/) to the extent of sending him this obnoxious email telling him what to write (seriously, who does that?!) had to do with transgender stuff, but that wasn't the reason _Scott_ should care.
+
+The other year, Alexander had written a post, ["Kolmogorov Complicity and the Parable of Lightning"](http://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/), explaining the consequences of political censorship by means of an allegory about a Society with the dogma that thunder occurs before lightning.[^kolmogorov-pun] Alexander had explained that the problem with complying with the dictates of a false orthodoxy wasn't so much the sacred dogma itself (it's not often that you need to _directly_ make use of the fact that lightning comes first), but that [the need to _defend_ the sacred dogma](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) [_destroys everyone's ability to think_](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology).
+
+[^kolmogorov-pun]: The title was a [pun](https://en.wikipedia.org/wiki/Kolmogorov_complexity) referencing computer scientist Scott Aaronson's post advocating ["The Kolmogorov Option"](https://www.scottaaronson.com/blog/?p=3376), serving the cause of Truth by cultivating a bubble that focuses on specific truths that won't get you in trouble with the local political authorities. The option was named after the Soviet mathematician Andrey Kolmogorov, who _knew better than to pick fights he couldn't win_.
+
+It was the same thing here. It wasn't that I had any direct practical need to misgender anyone in particular. It still wasn't okay that trying to talk about the reality of biological sex to so-called "rationalists" got you an endless deluge of—polite! charitable! non-ostracism-threatening!—_bullshit nitpicking_. (What about [complete androgen insensitivity syndrome](https://en.wikipedia.org/wiki/Complete_androgen_insensitivity_syndrome)? Why doesn't this ludicrous misinterpretation of what you said [imply that lesbians aren't women](https://thingofthings.wordpress.com/2018/06/18/man-should-allocate-some-more-categories/)? _&c. ad infinitum_.) With enough time, I thought the nitpicks could and should be satisfactorily answered. (Any ones that couldn't would presumably be fatal criticisms rather than bullshit nitpicks.) But while I was in the process of continuing to write all that up, I hoped Alexander could see why I felt somewhat gaslighted.
+
+(I had been told by others that I wasn't using the word "gaslighting" correctly. _Somehow_ no one seemed to think I had the right to define _that_ category boundary for my convenience.)
+
+If our vaunted rationality techniques resulted in me having to spend dozens of hours patiently explaining why I didn't think that I was a woman and that [the person in this photograph](https://daniellemuscato.startlogic.com/uploads/3/4/9/3/34938114/2249042_orig.jpg) wasn't a woman, either (where "isn't a woman" is a _convenient rhetorical shorthand_ for a much longer statement about [naïve Bayes models](https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes) and [high-dimensional configuration spaces](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace) and [defensible Schelling points for social norms](https://www.lesswrong.com/posts/Kbm6QnJv9dgWsPHQP/schelling-fences-on-slippery-slopes)), then our techniques were _worse than useless_.
+
+[If Galileo ever muttered "And yet it moves"](https://en.wikipedia.org/wiki/And_yet_it_moves), there's a long and nuanced conversation you could have about the consequences of using the word "moves" in Galileo's preferred sense, or some other sense that happens to result in the theory needing more epicycles. It may not have been obvious in November 2014, but in retrospect, _maybe_ it was a _bad_ idea to build a [memetic superweapon](https://archive.is/VEeqX) that says that the number of epicycles _doesn't matter_.
+
+And the reason to write this as a desperate email plea to Scott Alexander, when I could have been working on my own blog, was that I was afraid that marketing is a more powerful force than argument. Rather than good arguments propagating through the population of so-called "rationalists" no matter where they arose, what actually happened was that people like Alexander and Yudkowsky rose to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else sort-of absorbed some of their worldview (plus noise and [conformity with the local environment](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/)). So for people who didn't [win the talent lottery](http://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) but thought they saw a flaw in the _Zeitgeist_, the winning move was "persuade Scott Alexander."
+
+Back in 2010, the rationalist community had a shared understanding that the function of language is to describe reality. Now, we didn't. If Scott didn't want to cite my creepy blog about my creepy fetish, that was _totally fine_; I liked getting credit, but the important thing was that this "No, the Emperor isn't naked—oh, well, we're not claiming that he's wearing any garments—it would be pretty weird if we were claiming _that!_—it's just that utilitarianism implies that the _social_ property of clothedness should be defined this way because to do otherwise would be really mean to people who don't have anything to wear" gaslighting maneuver needed to _die_, and he alone could kill it.
+
+... Scott didn't get it. We agreed that self-identity-, natal-sex-, and passing-based gender categories each had their own pros and cons, and that it's uninteresting to focus on whether something "really" belongs to a category, rather than on communicating what you mean. Scott took this to mean that what convention to use is a pragmatic choice that we can make on utilitarian grounds, and that being nice to trans people was worth a little bit of clunkiness: the mental health benefits to trans people were obviously enough to tip the first-order utilitarian calculus.
+
+I didn't think _anything_ about "mental health benefits to trans people" was obvious, but more importantly, I considered myself to be prosecuting _not_ the object-level question of which gender categories to use, but the meta-level question of what normative principles govern the use of categories, for which (I claimed) "whatever, it's a pragmatic choice, just be nice" wasn't an answer, because (I claimed) the normative principles exclude "just be nice" from being a relevant consideration.
+
+["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) had concluded with a section on [Emperor Norton](https://en.wikipedia.org/wiki/Emperor_Norton), a 19th century San Francisco resident who declared himself Emperor of the United States. Certainly, it's not difficult or costly for the citizens of San Francisco to _address_ Norton as "Your Majesty" as a courtesy or a nickname. But there's more to being Emperor of the United States than people calling you "Your Majesty." Unless we abolish Congress and have the military enforce Norton's decrees, he's not _actually_ functioning in the role of emperor—at least not according to the currently generally-understood meaning of the word "emperor."
+
+What are you going to do if Norton takes you literally? Suppose he says, "I ordered the Imperial Army to invade Canada last week; where are the troop reports? And why do the newspapers keep talking about this so-called 'President' Rutherford B. Hayes? Have this pretender Hayes executed at once and bring his head to me!"
+
+You're not really going to bring him Rutherford B. Hayes's head. So what are you going to tell him? "Oh, well, you're not a _cis_ emperor who can command executions. But don't worry! Trans emperors are emperors"?
+
+To be sure, words can be used in many ways depending on context, but insofar as Norton _is_ interpreting "emperor" in the traditional sense, and you keep calling him your emperor without caveats or disclaimers, _you are lying to him_.
+
+... Scott still didn't get it. But I _did_ soon end up in more conversation with Michael Vassar, Ben Hoffman, and Sarah Constantin, who were game to help me with reaching out to Yudkowsky again to explain the problem in more detail—and to appeal to the conscience of someone who built their career on [higher standards](https://www.lesswrong.com/posts/DoLQN5ryZ9XkZjq5h/tsuyoku-naritai-i-want-to-become-stronger).
+
+Yudkowsky probably didn't think much of _Atlas Shrugged_ (judging by [an offhand remark by our protagonist in _Harry Potter and the Methods_](http://www.hpmor.com/chapter/20)), but I kept thinking of the scene[^atlas-shrugged-ref] where our heroine Dagny Taggart entreats the great Dr. Robert Stadler to denounce [an egregiously deceptive but technically-not-lying statement](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly) by the State Science Institute, whose legitimacy derives from its association with his name. Stadler has become cynical in his old age and demurs, disclaiming all responsibility: "I can't help what people think—if they think at all!" ... "How can one deal in truth when one deals with the public?"
+
+[^atlas-shrugged-ref]: In Part One, Chapter VII, "The Exploiters and the Exploited".
+
+At this point, I still trusted Yudkowsky to do better than an Ayn Rand villain; I had faith that [_Eliezer Yudkowsky_](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) could deal in truth when he deals with the public.
+
+(I was wrong.)
+
+Now that we had this entire posse, I felt bad and guilty and ashamed about focusing too much on my special interest, except insofar as it was genuinely a proxy for "Has Eliezer and/or everyone else [lost the plot](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/), and if so, how do we get it back?" But the group seemed to agree that my philosophy-of-language grievance was a useful test case for prosecuting deeper maladies affecting our subculture.
+
+There were times during these weeks when it felt like my mind shut down with the only thought, "What am I _doing_? This is _absurd_. Why am I running around picking fights about the philosophy of language—and worse, with me arguing for the _Bad_ Guys' position? Maybe I'm wrong and should stop making a fool out of myself. After all, using [Aumann-like](https://www.lesswrong.com/tag/aumann-s-agreement-theorem) reasoning, in a dispute of 'me and Michael Vassar vs. _everyone else_', wouldn't I want to bet on 'everyone else'? Obviously."
+
+Except ... I had been raised back in the 'aughts to believe that you're supposed to concede arguments on the basis of encountering a superior counterargument that makes you change your mind, and I couldn't actually point to one. "Maybe I'm making a fool out of myself by picking fights with all these high-status people" is _not a counterargument_.
+
+Anna continued to be disinclined to take a side in the brewing Category War, and it was beginning to put a strain on our friendship, to the extent that I kept ending up crying during our occasional meetings. She said that my "You have to pass my philosophy-of-language litmus test or I lose all respect for you as a rationalist" attitude was psychologically coercive. I agreed—I was even willing to go up to "violent"—in the sense that I'd cop to [trying to apply social incentives towards an outcome rather than merely exchanging information](http://zackmdavis.net/blog/2017/03/an-intuition-on-the-bayes-structural-justification-for-free-speech-norms/). But sometimes you need to use violence in defense of self or property, even if violence is generally bad. If we thought of the "rationalist" brand name as intellectual property, maybe it was property worth defending, and if so, then the moment someone claims "I can define a word any way I want" wasn't an obviously terrible time to start shooting at the bandits?
+
+My _hope_ was that it was possible to apply just enough "What kind of rationalist are _you_?!" social pressure to cancel out the "You don't want to be a Bad ([Red](https://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/)) person, do you??" social pressure and thereby let people look at the arguments—though I wasn't sure if that actually works, and I was growing exhausted from all the social aggression I was doing about it. (If someone tries to take your property and you shoot at them, you could be said to be the "aggressor" in the sense that you fired the first shot, even if you hope that the courts will uphold your property claim later.)
+
+After some more discussion within the me/Michael/Ben/Sarah posse, on 4 January 2019, I wrote to Yudkowsky a second time, to explain the specific problems with his "hill of meaning in defense of validity" Twitter performance, since that apparently hadn't been obvious from the earlier link to ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/); I cc'ed the posse, who chimed in afterwards.
+
+Ben explained what kind of actions we were hoping for from Yudkowsky: that he would (1) notice that he'd accidentally been participating in an epistemic war, (2) generalize the insight (if he hadn't noticed, what were the odds that MIRI had adequate defenses?), and (3) join the conversation about how to _actually_ have a rationality community, while noticing this particular way in which the problem seemed harder than it used to. For my case in particular, something that would help would be _either_ (A) a clear _ex cathedra_ statement that gender categories are not an exception to the general rule that categories are nonarbitrary, _or_ (B) a clear _ex cathedra_ statement that he's been silenced on this matter. If even (B) was too politically expensive, that seemed like important evidence about (1).
+
+Without revealing the other side of any private conversation that may or may not have occurred, I can say that we did not get either of those _ex cathedra_ statements from Yudkowsky at this time.
+
+It was also around this time that our posse picked up a new member, whom I'll call "Riley".
+
+-----
+
+On 5 January 2019, I met with Michael and his associate Aurora Quinn-Elmore in San Francisco to attempt mediated discourse with [Ziz](https://sinceriously.fyi/) and [Gwen](https://everythingtosaveit.how/), who were considering suing the [Center for Applied Rationality](https://rationality.org/) (CfAR)[^what-is-cfar] for discriminating against trans women. Michael hoped to dissuade them from a lawsuit—not because Michael approved of CfAR's behavior, but because lawyers make everything worse.
+
+[^what-is-cfar]: CfAR had been spun off from MIRI in 2012 as a dedicated organization for teaching rationality.
+
+Ziz recounted [her](/2019/Oct/self-identity-is-a-schelling-point/) story of how Anna Salamon (in her capacity as President of CfAR and community leader) allegedly engaged in [conceptual warfare](https://sinceriously.fyi/intersex-brains-and-conceptual-warfare/) to falsely portray Ziz as a predatory male. I was unimpressed: in my worldview, I didn't think Ziz had the right to say "I'm not a man," and expect people to just believe that. ([I remember that](https://twitter.com/zackmdavis/status/1081952880649596928) at one point, Ziz answered a question with, "Because I don't run off masochistic self-doubt like you." I replied, "That's fair.") But I did respect that Ziz actually believed in an intersex brain theory: in Ziz and Gwen's worldview, people's genders were a _fact_ of the matter, not just a manipulation of consensus categories to make people happy.
+
+Probably the most ultimately consequential part of this meeting on future events was Michael verbally confirming to Ziz that MIRI had settled with a disgruntled former employee, Louie Helm, who had put up [a website slandering them](https://archive.ph/Kvfus). (I don't actually know the details of the alleged settlement. I'm working off of [Ziz's notes](https://sinceriously.fyi/intersex-brains-and-conceptual-warfare/) rather than particularly remembering that part of the conversation clearly myself; I don't know what Michael knew.) What was significant was that if MIRI _had_ paid Helm as part of an agreement to get the slanderous website taken down, then, whatever the nonprofit best-practice books might have said about whether this was a wise thing to do when facing a dispute from a former employee, that would decision-theoretically amount to a blackmail payout, which seemed to contradict MIRI's advocacy of timeless decision theories (according to which you [shouldn't be the kind of agent that yields to extortion](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/)).
+
+----
+
+Something else Ben had said while chiming in on the second attempt to reach out to Yudkowsky hadn't sat quite right with me. He had written:
+
+> I am pretty worried that if I actually point out the ***physical injuries*** sustained by some of the smartest, clearest-thinking, and kindest people I know in the Rationalist community as a result of this sort of thing, I'll be dismissed as a mean person who wants to make other people feel bad.
+
+I didn't know what he was talking about. My trans widow friend "Rebecca"'s 2015 psychiatric imprisonment ("hospitalization") had probably been partially related to her husband's transition and had involved rough handling by the cops. I had been through some Bad Stuff, but none of it was "physical injuries." What were the other cases, if he could share without telling me Very Secret Secrets With Names?
+
+Ben said that, probabilistically, he expected that some fraction of the trans women he knew who had "voluntarily" had bottom surgery, had done so in response to social pressure, even if some of them might very well have sought it out in a less weaponized culture.
+
+I said that saying "I am worried that if I actually point out the physical injuries ..." when the actual example turned out to be sex reassignment surgery seemed pretty dishonest to me: I had thought he might have more examples of situations like mine or "Rebecca"'s, where gaslighting escalated into more tangible harm in a way that people wouldn't know about by default. In contrast, people _already know_ that bottom surgery is a thing; Ben just had reasons to think it's Actually Bad—reasons that his friends couldn't engage with if _we didn't know what he was talking about_. It was already bad enough that Yudkowsky was being so cagey; if _everyone_ did it, then we were really doomed.
+
+Ben said that he was more worried that saying politically-loaded things in the wrong order would reduce our chances of getting engagement from Yudkowsky, than he was about someone sharing his words out of context in a way that caused him distinct harm—and maybe more than both of those, that saying the wrong keywords would cause his correspondents to talk about _him_ using the wrong keywords, in ways that caused illegible, hard-to-trace damage.
+
+------
+
+There's a view that assumes that as long as everyone is being cordial, our truthseeking public discussion must be basically on-track: if no one overtly gets huffily offended and calls to burn the heretic, then the discussion isn't being warped by the fear of heresy.
+
+I do not hold this view. I think there's a _subtler_ failure mode where people know what the politically-favored [bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line) is, and collude to ignore, nitpick, or just be targetedly _uninterested_ in any fact or line of argument that doesn't fit the party line. I want to distinguish between direct ideological conformity enforcement attempts, and people not living up to their usual epistemic standards in response to ideological conformity enforcement in the general culture they're embedded in.
+
+Especially compared to normal Berkeley, I had to give the Berkeley "rationalists" credit for being _very good_ at free speech norms. (I'm not sure I would be saying this in the possible world where Scott Alexander didn't have a [traumatizing experience with social justice in college](https://slatestarcodex.com/2014/01/12/a-response-to-apophemi-on-triggers/), causing him to dump a ton of [anti-social-justice](https://slatestarcodex.com/tag/things-i-will-regret-writing/), [pro-argumentative-charity](https://slatestarcodex.com/2013/02/12/youre-probably-wondering-why-ive-called-you-here-today/) antibodies into the "rationalist" collective "water supply" after he became our subculture's premier writer. But it was true in _our_ world.) I didn't want to fall into the [bravery-debate](http://slatestarcodex.com/2013/05/18/against-bravery-debates/) trap of, "Look at me, I'm so heroically persecuted, therefore I'm right (therefore you should have sex with me)". I wasn't angry at the "rationalists" for being silenced or shouted down (which I wasn't); I was angry at them for _making bad arguments_ and systematically refusing to engage with the obvious counterarguments when they were made.
+
+As an illustrative example, in an argument on Discord in January 2019, I said, "I need the phrase 'actual women' in my expressive vocabulary to talk about the phenomenon where, if transition technology were to improve, then the people we call 'trans women' would want to make use of that technology; I need language that _asymmetrically_ distinguishes between the original thing that already exists without having to try, and the artificial thing that's trying to imitate it to the limits of available technology".
+
+Kelsey Piper replied, "[T]he people getting surgery to have bodies that do 'women' more the way they want are mostly cis women [...] I don't think 'people who'd get surgery to have the ideal female body' cuts anything at the joints."
+
+Another woman said, "'the original thing that already exists without having to try' sounds fake to me" (to the acclaim of 4 "+1" emoji reactions).
+
+The problem with this kind of exchange is not that anyone is being shouted down, nor that anyone is lying. The _problem_ is that people are motivatedly, ["algorithmically"](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie) "playing dumb." I wish we had more standard terminology for this phenomenon, which is ubiquitous in human life. By "playing dumb", I don't mean to suggest that Kelsey was _consciously_ thinking, "I'm playing dumb in order to gain an advantage in this argument." I don't doubt that, _subjectively_, mentioning that cis women also get cosmetic surgery sometimes _felt like_ a relevant reply (because I had mentioned transitioning interventions). It's just that, in context, I was very obviously trying to talk about the natural category of "biological sex", and Kelsey could have figured that out _if she had wanted to_.
+
+It's not that anyone explicitly said, "Biological sex isn't real" in those words. ([The elephant in the brain](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain) knew it wouldn't be able to get away with _that_.) But if everyone correlatedly plays dumb whenever someone tries to _talk_ about sex in clear language in a context where that could conceivably hurt some trans person's feelings, I think what you have is a culture of _de facto_ biological sex denialism. ("'The original thing that already exists without having to try' sounds fake to me"!!) It's not that hard to get people to admit that trans women are different from cis women, but somehow they can't (in public, using words) follow the implication that trans women are different from cis women _because_ trans women are male.
+
+Ben thought I was wrong to think of this kind of behavior as non-ostracizing. The deluge of motivated nitpicking _is_ an implied marginalization threat, he explained: the game people were playing when they did that was to force me to choose between doing arbitrarily large amounts of [interpretive labor](https://acesounderglass.com/2015/06/09/interpretive-labor/), or being cast as never having answered these construed-as-reasonable objections, and therefore over time losing standing to make the claim, being thought of as unreasonable, not getting invited to events, _&c._
+
+I saw the dynamic he was pointing at, but as a matter of personality, I was more inclined to respond, "Welp, I guess I need to write faster and more clearly", rather than to say, "You're dishonestly demanding arbitrarily large amounts of interpretive labor from me." I thought Ben was far too quick to give up on people whom he modeled as trying not to understand, whereas I continued to have faith in the possibility of _making_ them understand if I just ... never gave up. Not to be _so_ much of a scrub as to play chess with a pigeon (which craps on the board and then struts around like it's won), or wrestle with a pig (which gets you both dirty, and the pig likes it), or dispute [what the Tortoise said to Achilles](https://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles)—but to hold out hope that people in "the community" could only be _boundedly_ motivatedly dense, and anyway that giving up wouldn't make me a stronger writer.
+
+(Picture me playing Hermione Granger in a post-Singularity [holonovel](https://memory-alpha.fandom.com/wiki/Holo-novel_program) adaptation of _Harry Potter and the Methods of Rationality_ (Emma Watson having charged me [the standard licensing fee](/2019/Dec/comp/) to use a copy of her body for the occasion): "[We can do anything if we](https://www.hpmor.com/chapter/30) exert arbitrarily large amounts of interpretive labor!")