+On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to "... Not Man for the Categories" as one of Scott Alexander's best essays. I argued that it would be better to cite _almost literally_ any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: _either_ Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, _or_ one had no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way. The mod [was persuaded on reflection](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for?commentId=oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory."
+
+[TODO:
+"victories" weren't comforting when I resented this becoming a political slapfight at all—a lot of the objections in the Vanessa thread were utterly insane
+I wrote to Anna and Steven Kaas (who I was trying to "recruit" onto our side of the civil war) ]
+
+In "What You Can't Say", Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what _is_ one's real work. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except for this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason should probably be doing _more_ fighting than Paul Graham (although still preferably on the meta- rather than the object-level), because political restrictions on speech and thought directly hurt the mission of "improving our collective ability to reason" in a way that they don't hurt the mission of "make a lot of money writing software."
+
+[TODO: I don't know if you caught the shitshow on Less Wrong, but isn't it terrifying that the person who objected was a goddamned _MIRI research associate_ ... not to demonize Vanessa because I was just as bad (if not worse) in 2008 (/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard#hair-trigger-antisexism), but in 2008 we had a culture that could _beat it out of me_]
+
+[TODO: Steven's objection:
+> the Earth's gravitational field directly hurts NASA's mission and doesn't hurt Paul Graham's mission, but NASA shouldn't spend any more effort on reducing the Earth's gravitational field than Paul Graham.
+
+I agreed that tractability needs to be addressed, but ...
+]
+
+I felt like—we were in a coal mine, and my favorite one of our canaries had just died, and I was freaking out about this, and representatives of the Caliphate (Yudkowsky, Alexander, Anna, Steven) were like, Sorry, I know you were really attached to that canary, but it's just a bird; you'll get over it; it's not really that important to the coal-mining mission.
+
+And I was like, I agree that I was unreasonably emotionally attached to that particular bird, which is the direct cause of why I-in-particular am freaking out, but that's not why I expect _you_ to care. The problem is not the dead bird; the problem is what the bird is _evidence_ of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claim to have spotted their own dead canaries. I feel like the old-timer Rationality Elders should be able to get on the same page about the canary-count issue?
+
+Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/), and [some probability/information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [actually turned out to be super-relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break.
+
+[TODO:
+ * I had posted a linkpost to "No, it's not The Incentives—it's You", which generated a lot of discussion, and Jessica (17 June) identified Ray's comments as the last straw.
+
+> LessWrong.com is a place where, if the value of truth conflicts with the value of protecting elites' feelings and covering their asses, the second value will win.
+>
+> Trying to get LessWrong.com to adopt high-integrity norms is going to fail, hard, without a _lot_ of conflict. (Enforcing high-integrity norms is like violence; if it doesn't work, you're not doing enough of it).
+
+ * posting on Less Wrong was harm-reduction; the only way to get people to stick up for truth would be to convert them to _a whole new worldview_; Jessica proposed the idea of a new discussion forum
+ * Ben thought that trying to discuss with the other mods would be a good intermediate step, after we clarified to ourselves what was going on; talking to other mods might be "good practice in the same way that the Eliezer initiative was good practice"; Ben is less optimistic about harm reduction; "Drowning Children Are Rare" was barely net-upvoted, and participating was endorsing the karma and curation systems
+ * David Xu's comment on "The Incentives" seems important?
+ * secret posse member: Ray's attitude on "Is being good costly?"
+ * Jessica: scorched-earth campaign should mostly be in meatspace social reality
+ * my comment on emotive conjugation (https://www.lesswrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare#GaoyhEbzPJvv6sfZX)
+
+> I'm also not sure if I'm sufficiently clued in to what Ben and Jessica are modeling as Blight, a coherent problem, as opposed to two or six individual incidents that seem really egregious in a vaguely similar way that seems like it would have been less likely in 2009??
+
+ * _Atlas Shrugged_ Bill Brent vs. Dave Mitchum scene
+ * Vassar: "Literally nothing Ben is doing is as aggressive as the basic 101 pitch for EA."
+ * Ben: we should be creating clarity about "position X is not a strawman within the group", rather than trying to scapegoat individuals
+ * my scuffle with Ruby on "Causal vs. Social Reality"
+ * it gets worse: https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality#NbrPdyBFPi4hj5zQW
+ * Ben's comment: "Wow, he's really overtly arguing that people should lie to him to protect his feelings."
+ * Jessica: "tone arguments are always about privileged people protecting their feelings, and are thus in bad faith. Therefore, engaging with a tone argument as if it's in good faith is a fool's game, like playing chess with a pigeon. Either don't engage, or seek to embarrass them intentionally."
+ * there's no point at being mad at MOPs
+ * me (1 Jul): I'm a _little bit_ mad, because I specialize in cognitive and discourse strategies that are _extremely susceptible_ to being trolled like this
+ * "collaborative truth seeking" but (as Michael pointed out) politeness looks nothing like Aumann agreement
+ * 2 Jul: Jessica is surprised by how well "Self-consciousness wants to make everything about itself" worked; theory about people not wanting to be held to standards that others aren't being held to
+ * Michael: Jessica's example made it clear she was on the side of social justice
+ * secret posse member: level of social-justice talk makes me not want to interact with this post in any way
+]