+Back in 2010, the rationalist community had a shared understanding that the function of language is to describe reality. Now, we didn't. If Scott didn't want to cite my creepy blog about my creepy fetish, that was _totally fine_; I _liked_ getting credit, but the important thing is that this "No, the Emperor isn't naked—oh, well, we're not claiming that he's wearing any garments—it would be pretty weird if we were claiming _that!_—it's just that utilitarianism implies that the _social_ property of clothedness should be defined this way because to do otherwise would be really mean to people who don't have anything to wear" gaslighting maneuver needed to _die_, and he alone could kill it.
+
+... Scott didn't get it.
+
+But I _did_ end up in more conversation with Michael Vassar, Ben Hoffman, and Sarah Constantin, who were game to help me with reaching out to Yudkowsky again to explain the problem in more detail. Now that we had this entire posse, I felt bad and guilty and ashamed about focusing too much on my special interest except insofar as it was genuinely a proxy for "Has Eliezer and/or everyone else lost the plot, and if so, how do we get it back?" But the group seemed to agree that my philosophy-of-language grievance was a useful test case for prosecuting deeper maladies affecting our subculture.
+
+There were times during these weeks when it felt like my mind shut down with only the thought, "What am I _doing_? This is _absurd_. Why am I running around picking fights about the philosophy of language—and worse, with me arguing for the _Bad_ Guys' position? Maybe I'm wrong and should stop making a fool out of myself. After all, using Aumann-like reasoning, in a dispute of 'me and Michael Vassar vs. _everyone fucking else_', wouldn't I want to bet on 'everyone else'? Obviously."
+
+Except ... I had been raised back in the 'aughts to believe that you're supposed to concede arguments on the basis of encountering a superior counterargument that makes you change your mind, and I couldn't actually point to one. "Maybe I'm making a fool out of myself by picking fights with all these high-status people" is _not a counterargument_.
+
+Meanwhile, Anna continued to be disinclined to take a side in the brewing Category War, and it was beginning to put a strain on our friendship, to the extent that I kept ending up crying at some point during our occasional meetings. She told me that my "You have to pass my philosophy-of-language litmus test or I lose all respect for you as a rationalist" attitude was psychologically coercive. I agreed—I was even willing to go up to "violent"—in the sense that I'd cop to [trying to apply social incentives towards an outcome rather than merely exchanging information](http://zackmdavis.net/blog/2017/03/an-intuition-on-the-bayes-structural-justification-for-free-speech-norms/). But sometimes you need to use violence in defense of self or property, even if violence is generally bad. If we think of the "rationalist" label as intellectual property, maybe it's property worth defending, and if so, then "I can define a word any way I want" isn't an obviously terrible place to start shooting at the bandits?
+
+My _hope_ was that it was possible to apply just enough "What kind of rationalist are _you_?!" social pressure to cancel out the "You don't want to be a Bad (Red) person, do you??" social pressure and thereby let people look at the arguments—though I wasn't sure if that actually works, and I was growing exhausted from all the social aggression I was doing about it. (If someone tries to take your property and you shoot at them, you could be said to be the "aggressor" in the sense that you fired the first shot, even if you hope that the courts will uphold your property claim later.)
+
+There's a view that assumes that as long as everyone is being cordial, our truthseeking public discussion must be basically on-track: if no one overtly gets huffily offended and calls to burn the heretic, then the discussion isn't being warped by the fear of heresy.
+
+I do not hold this view. I think there's a _subtler_ failure mode where people know what the politically-favored bottom line is, and collude to ignore, nitpick, or just be targetedly _uninterested_ in any fact or line of argument that doesn't fit the party line. I want to distinguish between direct ideological conformity enforcement attempts, and "people not living up to their usual epistemic standards in response to ideological conformity enforcement in the general culture they're embedded in."
+
+Especially compared to normal Berkeley, I had to give the Berkeley "rationalists" credit for being _very good_ at free speech norms. (I'm not sure I would be saying this in the world where Scott Alexander didn't have a traumatizing experience with social justice in college, causing him to dump a ton of anti-social-justice, pro-argumentative-charity antibodies in the "rationalist" collective "water supply" after he became our subculture's premier writer. But it was true in _our_ world.) I didn't want to fall into the [bravery-debate](http://slatestarcodex.com/2013/05/18/against-bravery-debates/) trap of, "Look at me, I'm so heroically persecuted, therefore I'm right (therefore you should have sex with me)". I wasn't angry at the "rationalists" for being silenced or shouted down (which I wasn't); I was angry at them for _making bad arguments_ and systematically refusing to engage with the obvious counterarguments when they're made.
+
+Ben thought I was wrong to think of this as non-ostracizing. The deluge of motivated nitpicking _is_ an implied marginalization threat, he explained: the game people are playing when they do that is to force me to choose between doing arbitrarily large amounts of interpretive labor, or being cast as never having answered these construed-as-reasonable objections, and therefore over time losing standing to make the claim, being thought of as unreasonable, not getting invited to events, _&c._
+
+I saw the dynamic he was pointing at, but as a matter of personality, I was more inclined to respond, "Welp, I guess I need to write faster and more clearly", rather than to say "You're dishonestly demanding arbitrarily large amounts of interpretive labor from me." I thought Ben was far too quick to give up on people who he modeled as trying not to understand, whereas I continued to have faith in the possibility of _making_ them understand if I just never gave up. Even _if_ the other person was being motivatedly dense, giving up wouldn't make me a stronger writer.
+
+(Picture me playing Hermione Granger in a post-Singularity [holonovel](https://memory-alpha.fandom.com/wiki/Holo-novel_program) adaptation of _Harry Potter and the Methods of Rationality_ (Emma Watson having charged me [the standard licensing fee](/2019/Dec/comp/) to use a copy of her body for the occasion): "[We can do anything if we](https://www.hpmor.com/chapter/30) exert arbitrarily large amounts of interpretive labor!")
+
+Ben thought that making them understand was hopeless and that becoming a stronger writer was a boring goal; it would be a better use of my talents to jump up an additional meta level and explain _how_ people were failing to engage. That is, I had a model of "the rationalists" that kept making bad predictions. What's going on there? Something interesting might happen if I try to explain _that_.
+
+(I guess I'm only now, after spending an additional three years exhausting every possible line of argument, taking Ben's advice on this by writing this memoir. Sorry, Ben—and thanks.)
+
+One thing I regret about my behavior during this period was the extent to which I was emotionally dependent on my posse, and in some ways particularly Michael, for validation. I remembered Michael as a high-status community elder back in the _Overcoming Bias_ era (to the extent that there was a "community" in those early days). I had been somewhat skeptical of him, then: the guy makes a lot of stridently "out there" assertions by the standards of ordinary social reality, in a way that makes you assume he must be speaking metaphorically. (He always insists that he's being completely literal.) But he had social proof as the President of the Singularity Institute—the "people person" of our world-saving effort, to complement Yudkowsky's anti-social mad scientist personality—so I had been inclined to take his "crazy"-sounding assertions more charitably than I would have in the absence of that social proof.
+
+Now, the memory of that social proof was a lifeline. Dear reader, if you've never been in the position of disagreeing with the entire weight of Society's educated opinion, _including_ your idiosyncratic subculture that tells itself a story about being smarter than the surrounding Society—well, it's stressful. [There was a comment on /r/slatestarcodex around this time](https://old.reddit.com/r/slatestarcodex/comments/anvwr8/experts_in_any_given_field_how_would_you_say_the/eg1ga9a/) that cited Yudkowsky, Alexander, Ozy, _The Unit of Caring_, and Rob Bensinger as leaders of the "rationalist" community—just an arbitrary Reddit comment of no significance whatsoever—but it was a salient indicator of the _Zeitgeist_ to me, because _[every](https://twitter.com/ESYudkowsky/status/1067183500216811521) [single](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) [one](https://thingofthings.wordpress.com/2018/06/18/man-should-allocate-some-more-categories/) of [those](https://theunitofcaring.tumblr.com/post/171986501376/your-post-on-definition-of-gender-and-woman-and) [people](https://www.facebook.com/robbensinger/posts/10158073223040447?comment_id=10158073685825447&reply_comment_id=10158074093570447)_ had tried to get away with some variant on the "categories are subjective, therefore you have no grounds to object to the claim that trans women are women" _mind game_.
+
+In the face of that juggernaut of received opinion, I was already feeling pretty gaslighted. ("We ... we had a whole Sequence about this. Didn't we? And, and ... [_you_ were there](https://tvtropes.org/pmwiki/pmwiki.php/Main/AndYouWereThere), and _you_ were there ... It—really happened, right? I didn't just imagine it? The [hyperlinks](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) [still](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) [work](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace) ...") I don't know how my mind would have held up intact if I were just facing it alone; it's hard to imagine what I would have done in that case. I definitely wouldn't have had the impudence to pester Scott and Eliezer the way I did—especially Eliezer—if it was just me alone against everyone else.
+
+But _Michael thought I was in the right_—not just intellectually on the philosophy issue, but morally in the right to be _prosecuting_ the philosophy issue, and not accepting stonewalling as an answer. That social proof gave me a lot of social bravery that I otherwise wouldn't have been able to muster up—even though it would have been better if I could have propagated the implications of the observation that my dependence on him was self-undermining, because Michael himself said that the thing that made me valuable was my ability to think independently.
+
+The social proof was probably more effective in my own head than it was with anyone we were arguing with. _I remembered_ Michael as a high-status community elder back in the _Overcoming Bias_ era, but that was a long time ago. (Luke Muehlhauser had taken over leadership of the Singularity Institute in 2011; some sort of rift between Michael and Eliezer had widened in recent years, the details of which had never been explained to me.) Michael's status in "the community" of 2019 was much more mixed. He was intensely critical of the rise of Effective Altruism. (I remember at a party in 2015, on asking Michael what else I should spend my San Francisco software engineer money on if not the EA charities I was considering, being surprised that his answer was, "You.")
+
+Another blow to Michael's "community" reputation was dealt on 27 February, when Anna [published a comment badmouthing Michael and suggesting that talking to him was harmful](https://www.lesswrong.com/posts/u8GMcpEN9Z6aQiCvp/rule-thinkers-in-not-out?commentId=JLpyLwR2afav2xsyD), which I found pretty disappointing—more so as I began to realize the implications.
+
+I agreed with her point about how "ridicule of obviously-fallacious reasoning plays an important role in discerning which thinkers can (or can't) help fill these functions." That's why I was so heartbroken about the "categories are arbitrary, therefore trans women are women" thing, which deserved to be _laughed out of the room_. Why was she trying to ostracize the guy who was one of the very few to back me up on this incredibly obvious thing!? The reasons to discredit Michael given in the comment seemed incredibly weak. (He ... flatters people? He ... _didn't_ tell people to abandon their careers? What?) And the anti-Michael evidence she offered in private didn't seem much more compelling (_e.g._, at a CfAR event, he had been insistent on continuing to talk to someone who Anna thought was looking sleep-deprived and needed a break).
+
+It made sense for Anna to not like Michael, because of his personal conduct, or because he didn't like EA. (Expecting all of my friends to be friends with _each other_ would be [Geek Social Fallacy #4](http://www.plausiblydeniable.com/opinion/gsf.html).) If she didn't want to invite him to CfAR stuff, fine; that's her business not to invite him. But what did she gain from _escalating_ to publicly denouncing him as someone whose "lies/manipulations can sometimes disrupt [people's] thinking for long and costly periods of time"?!
+
+
+
+[TODO SECTION: RIP Culture War thread, and defense against alt-right categorization
+
+I wasn't the only one whose life was being disrupted by political drama in early 2019. On 22 February, Scott Alexander [posted that the /r/slatestarcodex Culture War Thread was being moved](https://slatestarcodex.com/2019/02/22/rip-culture-war-thread/) to a new non–_Slate Star Codex_–branded subreddit in the hopes that it would help curb some of the harassment he had been receiving. The problem with hosting an open discussion, Alexander explained, wasn't the difficulty of moderating obvious spam or advocacy of violence.
+
+> Your annual reminder that Slate Star Codex is not and never was alt-right, every real stat shows as much, and the primary promoters of this lie are sociopaths who get off on torturing incredibly nice targets like Scott A.
+
+ * Suppose the one were to reply: "Using language in a way you dislike, openly and explicitly and with public focus on the language and its meaning, is not lying. The proposition you claim false (Scott Alexander's explicit advocacy of a white ethnostate?) is not what the speech is meant to convey—and this is known to everyone involved, it is not a secret. You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. Now, maybe as a matter of policy, you want to make a case for language like 'alt-right' being used a certain way. Well, that's a separate debate then. But you're not making a stand for Truth in doing so, and your opponents aren't tricking anyone or trying to."
+ * What direct falsehood is being asserted by Scott's detractors? I don't think anyone is claiming that, say, Scott identifies as alt-right (not even privately), any more than anyone is claiming that trans women have two X chromosomes. Sneer Club has been pretty explicit in their criticism
+ * examples:
+ * https://old.reddit.com/r/SneerClub/comments/atgejh/rssc_holds_a_funeral_for_the_defunct_culture_war/eh0xlgx/
+ * https://old.reddit.com/r/SneerClub/comments/atgejh/rssc_holds_a_funeral_for_the_defunct_culture_war/eh3jrth/
+ * that the Culture War thread harbors racists (&c.) and possibly that Scott himself is a secret racist, with respect to a definition of racism that includes the belief that there are genetically-mediated population differences in the distribution of socially-relevant traits and that this probably has decision-relevant consequences should be discussable somewhere.
+
+And this is just correct: e.g., "The Atomic Bomb Considered As Hungarian High School Science Fair Project" favorably cites Cochran et al.'s genetic theory of Ashkenazi achievement as "really compelling." Scott is almost certainly "guilty" of the category-membership that the speech against him is meant to convey—it's just that Sneer Club got to choose the category. The correct response to the existence of a machine-learning classifier that returns positive on both Scott Alexander and Richard Spencer is not that the classifier is "lying" (what would that even mean?), but that the classifier is not very useful for understanding Scott Alexander's effects on the world.
+
+Of course, Scott is great and we should defend him from the bastards trying to ruin his reputation, and it's plausible that the most politically convenient way to do that is to pound the table and call them lying sociopaths rather than engaging with the substance of their claims, much as how someone being tried under an unjust law might dishonestly plead "Not guilty" to save their own skin rather than tell the whole truth and hope for jury nullification.
+
+But political convenience comes at a dire cost to our common interest! There's a proverb you once failed to Google, which runs something like, "Once someone is known to be a liar, you might as well listen to the whistling of the wind."
+
+Similarly, once someone is known to vary the epistemic standards of their public statements for political convenience (even if their private, unshared thoughts continue to be consistently wise)—if they say categorizations can be lies when that happens to help their friends, but seemingly deny the possibility of categorizations being lies when that happens to make them look good ...
+
+Well, you're still somewhat better off listening to them than the whistling of the wind, because the wind in various possible worlds is presumably uncorrelated with most of the things you want to know about, whereas clever arguers who don't tell explicit lies are very constrained in how much they can mislead you. But it seems plausible that you might as well listen to any other arbitrary smart person with a blue check and 20K followers. I remain,
+ * (The claim is not that "Pronouns aren't lies" and "Scott Alexander is not a racist" are similarly misinformative; it's about the _response_)
+ * "the degree to which category boundaries are being made a conscious and deliberate focus of discussion": it's a problem when category boundaries are being made a conscious and deliberate focus of discussion as an isolated-demand-for-rigor because people can't get the conclusion they want on the merits; I only started focusing on the hidden-Bayesian-structure-of-cognition part after the autogynephilia discussions kept getting derailed
+ * I know you're very busy; I know your work's important—but it might be a useful exercise? Just for a minute, to think of what you would actually say if someone with social power _actually did this to you_ when you were trying to use language to reason about Something you had to Protect?
+]
+
+
+
+
+
+
+Without disclosing any _specific content_ from private conversations with Yudkowsky that may or may not have happened, I think I am allowed to say that our posse did not get the kind of engagement from Yudkowsky that we were hoping for. (That is, I'm Glomarizing over whether Yudkowsky just didn't reply, or whether he did reply and our posse was not satisfied with the response.)
+
+Michael said that it seemed important that, if we thought Yudkowsky wasn't interested, we should have common knowledge among ourselves that we consider him to be choosing to be a cult leader.
+
+Meanwhile, my email thread with Scott got started back up again, although I wasn't expecting anything to come out of it. I expressed some regret that all the times I had emailed him over the past couple years had been when I was upset about something (like psych hospitals, or—something else) and wanted something from him, which was bad, because it was treating him as a means rather than an end—and then, despite that regret, continued prosecuting the argument.
+
+One of Alexander's [most popular _Less Wrong_ posts ever had been about the noncentral fallacy, which Alexander called "the worst argument in the world"](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world): for example, those who crow that abortion is _murder_ (because murder is the killing of a human being), or that Martin Luther King, Jr. was a _criminal_ (because he defied the segregation laws of the South), are engaging in a dishonest rhetorical maneuver in which they're trying to trick their audience into attributing attributes of the typical "murder" or "criminal" onto what are very noncentral members of those categories.
+
+_Even if_ you're opposed to abortion, or have negative views about the historical legacy of Dr. King, this isn't the right way to argue. If you call Janie a _murderer_, that causes me to form a whole bunch of implicit probabilistic expectations—about Janie's moral character, about the suffering of the victim whose hopes and dreams were cut short, about Janie's relationship with the law, _&c._—most of which get violated when you subsequently reveal that the murder victim was a four-week-old fetus.
+
+Thus, we see that Alexander's own "The Worst Argument in the World" is really complaining about the _same_ category-gerrymandering move that his "... Not Man for the Categories" comes out in favor of. We would not let someone get away with declaring, "I ought to accept an unexpected abortion or two deep inside the conceptual boundaries of what would normally not be considered murder if it'll save someone's life." Maybe abortion _is_ wrong, but you need to make that case _on the merits_, not by linguistic fiat.
+
+... Scott still didn't get it. He said that he didn't see why he shouldn't accept one unit of categorizational awkwardness in exchange for sufficiently large utilitarian benefits. I started drafting a long reply—but then I remembered that in recent discussion with my posse about what we might have done wrong in our attempted outreach to Yudkowsky, the idea had come up that in-person meetings are better for updateful disagreement-resolution. Would Scott be up for meeting in person some weekend? Non-urgent. Ben would be willing to moderate, unless Scott wanted to suggest someone else, or no moderator.
+
+... Scott didn't want to meet. At this point, I considered resorting to the tool of cheerful prices again, which I hadn't yet used against Scott—to say, "That's totally understandable! Would a financial incentive change your decision? For a two-hour meeting, I'd be happy to pay up to $4000 to you or your preferred charity. If you don't want the money, then sure, yes, let's table this. I hope you're having a good day." But that seemed sufficiently psychologically coercive and socially weird that I wasn't sure I wanted to go there. I emailed my posse asking what they thought—and then added that maybe they shouldn't reply until Friday, because it was Monday, and I really needed to focus on my dayjob that week.
+
+This is the part where I began to ... overheat. I tried ("tried") to focus on my dayjob, but I was just _so angry_. Did Scott _really_ not understand the rationality-relevant distinction between "value-dependent categories as a result of only running your clustering algorithm on the subspace of the configuration space spanned by the variables that are relevant to your decisions" (as explained by the _dagim_/water-dwellers _vs._ fish example) and "value-dependent categories _in order to not make my friends sad_"? I thought I was pretty explicit about this? Was Scott _really_ that dumb?? Or is it that he was only verbal-smart and this is the sort of thing that only makes sense if you've ever been good at linear algebra?? Did I need to write a post explaining just that one point in mathematical detail? (With executable code and a worked example with entropy calculations.)
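(The point of the post I was imagining can be sketched in a few lines of Python—a toy illustration with invented data and variable names, not the full worked example I had in mind: draw one category boundary on the decision-relevant variable, draw another on an irrelevant variable, and compare how much uncertainty about the decision-relevant variable each category leaves behind.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "creatures": column 0 is decision-relevant (lives-in-water),
# column 1 is irrelevant to our decisions (say, color).
water_dwellers = rng.normal([1.0, 0.0], 0.3, size=(100, 2))
land_dwellers = rng.normal([-1.0, 0.0], 0.3, size=(100, 2))
X = np.vstack([water_dwellers, land_dwellers])
lives_in_water = np.array([True] * 100 + [False] * 100)

def binary_entropy(p):
    """Entropy (in bits) of a binary variable with probability p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)  # avoid log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def conditional_entropy(category):
    """H(lives_in_water | category): uncertainty left after learning the label."""
    h = 0.0
    for side in (True, False):
        mask = category == side
        if mask.any():
            h += mask.mean() * binary_entropy(lives_in_water[mask].mean())
    return h

# A boundary drawn on the decision-relevant subspace (column 0) ...
relevant_category = X[:, 0] > 0.0
# ... versus a "gerrymandered" boundary drawn on the irrelevant variable.
gerrymandered_category = X[:, 1] > 0.0

print(conditional_entropy(relevant_category))       # near 0 bits
print(conditional_entropy(gerrymandered_category))  # near 1 bit
```

The first number comes out near zero (knowing the category membership nearly settles whether the creature lives in water); the second comes out near one full bit (the gerrymandered category tells you nothing about the variable you care about).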
+
+My dayjob boss made it clear that he was expecting me to have code for my current Jira tickets by noon the next day, so I resigned myself to stay at the office late to finish that.
+
+But I was just in so much (psychological) pain. Or at least—as I noted in one of a series of emails to my posse that night—I felt motivated to type the sentence, "I'm in so much (psychological) pain." I'm never sure how to interpret my own self-reports, because even when I'm really emotionally trashed (crying, shaking, randomly yelling, _&c_.), I think I'm still noticeably _incentivizable_: if someone were to present a credible threat (like slapping me and telling me to snap out of it), then I would be able to calm down: there's some sort of game-theory algorithm in the brain that subjectively feels genuine distress (like crying or sending people too many hysterical emails) but only when it can predict that it will be either rewarded with sympathy or at least tolerated. (Kevin Simler: [tears are a discount on friendship](https://meltingasphalt.com/tears/).)
+
+I [tweeted a Sequences quote](https://twitter.com/zackmdavis/status/1107874587822297089) to summarize how I felt (the mention of @ESYudkowsky being to attribute credit; I figured Yudkowsky had enough followers that he probably wouldn't see a notification):
+
+> "—and if you still have something to protect, so that you MUST keep going, and CANNOT resign and wisely acknowledge the limitations of rationality— [1/3]
+>
+> "—then you will be ready to start your journey[.] To take sole responsibility, to live without any trustworthy defenses, and to forge a higher Art than the one you were once taught. [2/3]
+>
+> "No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand." —@ESYudkowsky (https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science) [end/3]