I wasn't, in fact, satisfied. This little "not ontologically confused" clarification buried in the replies was _much less visible_ than the bombastic, arrogant top level pronouncement insinuating that resistance to gender-identity claims _was_ confused. (1 Like on this reply, _vs._ 140 Likes/21 Retweets on start of thread.) I expected that the typical reader who had gotten the impression from the initial thread that the great Eliezer Yudkowsky thought that gender-identity skeptics didn't have a leg to stand on, would not, actually, be disabused of this impression by the existence of this little follow-up. Was it greedy of me to want something _louder_?
Greedy or not, I wasn't done flipping out. On 1 December, I wrote to Scott Alexander (cc'ing a few other people), asking if there was any chance of an _explicit_ and _loud_ clarification or partial-retraction of ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (Subject: "super-presumptuous mail about categorization and the influence graph"). _Forget_ my boring whining about the autogynephilia/two-types thing, I said—that's a complicated empirical claim, and _not_ the key issue.
The _issue_ is that category boundaries are not arbitrary (if you care about intelligence being useful): you want to [draw your category boundaries such that](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) things in the same category are similar in the respects that you care about predicting/controlling, and you want to spend your [information-theoretically limited budget](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes) of short words on the simplest and most wide-rangingly useful categories.
If Galileo ever muttered "And yet it moves", there's a long and nuanced conversation you could have about the consequences of using the word "moves" in Galileo's preferred sense or some other sense that happens to result in the theory needing more epicycles. It may not have been obvious in November 2014, but in retrospect, _maybe_ it was a _bad_ idea to build a [memetic superweapon](https://archive.is/VEeqX) that says the number of epicycles _doesn't matter_.
And the reason to write this as a desperate email plea to Scott Alexander when I could be working on my own blog, was that I was afraid that marketing is a more powerful force than argument. Rather than good arguments propagating through the population of so-called "rationalists" no matter where they arise, what actually happens is that people like Alexander and Yudkowsky rise to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else sort-of absorbs most of their worldview (plus noise and [conformity with the local environment](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/)). So for people who didn't [win the talent lottery](http://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) but think they see a flaw in the _Zeitgeist_, the winning move is "persuade Scott Alexander."
Back in 2010, the rationalist community had a shared understanding that the function of language is to describe reality. Now, we didn't. If Scott didn't want to cite my creepy blog about my creepy fetish, that was _totally fine_; I _liked_ getting credit, but the important thing is that this "No, the Emperor isn't naked—oh, well, we're not claiming that he's wearing any garments—it would be pretty weird if we were claiming _that!_—it's just that utilitarianism implies that the _social_ property of clothedness should be defined this way because to do otherwise would be really mean to people who don't have anything to wear" gaslighting maneuver needed to _die_, and he alone could kill it.
[TODO:
* Soon, other conversations continued with Michael and Sarah and Ben—and with Anna
* Michael sees a world of gaslighting and complicity: try to do this justice in a few sentences, somehow?!
]
[TODO: SECTION posse support and self-doubt—
If we had this entire posse, I felt bad and guilty and ashamed about focusing too much on my special interest except insofar as it was genuinely a proxy for "Has Eliezer and/or everyone else lost the plot, and if so, how do we get it back?"

There were times during these weeks when it felt like my mind shut down with the only thought, "What am I _doing_? This is _absurd_. Why am I running around picking fights about the philosophy of language—and worse, with me arguing for the _Bad_ Guys' position? Maybe I'm wrong and should stop making a fool out of myself. After all, using Aumann-like reasoning, in a dispute of 'Zack M. Davis and Michael Vassar vs. _everyone fucking else_', wouldn't I want to bet on 'everyone else'? Obviously."

Except ... I had been raised back in the 'aughts to believe that you're supposed to concede arguments on the basis of encountering a superior counterargument that makes you change your mind, and I couldn't actually point to one. "Maybe I'm making a fool out of myself by picking fights with all these high-status people" is _not a counterargument_.

[TODO: section Anna and intellectual property
Anna told me that my "You have to pass my litmus test or I lose all respect for you as a rationalist" attitude was psychologically coercive. I agreed—I was even willing to go up to "violent"—in the sense that it's [trying to apply social incentives towards an outcome rather than merely exchanging information](http://zackmdavis.net/blog/2017/03/an-intuition-on-the-bayes-structural-justification-for-free-speech-norms/). But sometimes you need to use violence in defense of self or property, even if violence is generally bad. If we think of the "rationalist" label as intellectual property, maybe it's property worth defending, and if so, then "I can define a word any way I want" isn't obviously a terrible time to start shooting at the bandits? What makes my "... or I lose all respect for you as a rationalist" moves worthy of Anna's mild reproach, but "You're not allowed to call this obviously biologically-female person a woman, or I lose all respect for you as not-an-asshole" merely a puzzling sociological phenomenon that might be adaptive in some not-yet-understood way? Isn't the violence-structure basically the same? Is there any room in civilization for self-defense?

When I told Michael about this, he said that I was ethically or "legally" in the right here, that the rationalist equivalent of a lawyer mattered more for my claims than the equivalent of a scientist, and that Ben Hoffman (with whom I had already shared the thread with Scott) would be helpful in solidifying my claims to IP defense. I said that I didn't _feel_ like I was in the right, even if I couldn't point to a superior counterargument that I wanted to yield to, just because I was getting fatigued from all the social aggression I had been doing. (If someone tries to take your property and you shoot at them, you could be said to be the "aggressor" in the sense that you fired the first shot, even if you hope that the courts will uphold your property claim later.)
]

[TODO: SECTION on ostracism—
* There's a view that says, as long as everyone is being polite, there's no problem
* I think there's a problem where the collective discourse is biased, even if it's surface-level polite
* Berkeley rats are very good at not being persecutory (we might not have been if Scott hadn't had a traumatizing social-justice-shaming experience in college)
]

[TODO: SECTION on Ben's thinking about ostracism
* Ben thought the bullshit nitpicking was meaningfully anti-epistemic: the game is that I have to choose between infinite interpretive labor, or being cast as "never having answered these construed-as-reasonable objections"
* I was inclined to meet the objections, to say, "well, I guess I need to write faster and more clearly" rather than "you're dishonestly demanding arbitrarily large amounts of interpretive labor from me"; by meeting the objections I become a stronger writer
* Ben thought that being a better writer by responding to nitpicks from people who are trying not to understand was a boring goal; it would be a better use of my talents to explain how people were failing to engage, rather than continuing to press the object-level itself—like, I had a model of "the rationalists" that keeps making bad predictions, what's going on there?
* I guess I'm only now, years later, taking Ben's advice on this. Sorry, Ben.
]
[TODO: RIP Culture War thread, defense against alt-right categorization
* "the degree to which category boundaries are being made a conscious and deliberate focus of discussion": it's a problem when category boundaries are being made a conscious and deliberate focus of discussion as an isolated-demand-for-rigor because people can't get the conclusion they want on the merits; I only started focusing on the hidden-Bayesian-structure-of-cognition part after the autogynephilia discussions kept getting derailed
]
]
[TODO: on private universes
]
[TODO: ... continue translating email analysis into prose]
[TODO: proton concession]
> A deluge of nitpicking is an implied ostracism threat, or at least a marginalization threat. The game is that the nitpicks force you to choose between infinite interpretive labor, and "never answered these construed-as-reasonable objections" (in which case you lose standing to make the claim, and if people call you on that, they get to start talking about you as a bad rationalist, likely think of you as unreasonable, stop inviting you to things, etc.).
> It's a bit less direct than how "this is crazy talk" can be an implied imprisonment threat, but it's still strong (distributed) social aggression.

to Sarah—
> If we have this entire posse, I feel bad/guilty/ashamed about focusing too much on my special interest except insofar as it's actually a proxy for "has Eliezer and/or everyone else [lost the plot](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/), and if so, how do we get it back?"

> When I look at the world, it looks like [Scott](http://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) and [Eliezer](https://twitter.com/ESYudkowsky/status/1067183500216811521) and [Kelsey](https://theunitofcaring.tumblr.com/post/171986501376/your-post-on-definition-of-gender-and-woman-and) and [Robby Bensinger](https://www.facebook.com/robbensinger/posts/10158073223040447?comment_id=10158073685825447&reply_comment_id=10158074093570447&comment_tracking=%7B%22tn%22%3A%22R2%22%7D) seem to think that some variation on ["I can define a word any way I want"]() is sufficient to end debates on transgender identity.
> And ... I want to be nice to my trans friends, too, but that can't possibly be the definitive end-of-conversation correct argument. Not _really_. Not if you're being serious.

Anyway, meanwhile, other conversations were happening.
Michael—
> Ben once told me that conversation with me works like this. I try to say things that are literally true and people bend over backwards to pretend to agree but to think that I am being metaphorical.