+My "intent" to take a break from the religious war didn't take. I met with Anna on the UC Berkeley campus, and read her excerpts from some of Ben's and Jessica's emails. (She had not acquiesced to my request for a comment on "... Boundaries?", a request I had made partly in the form of two paper postcards that I stayed up until 2 _a.m._ on 14 April 2019 writing; I had figured that spamming people with hysterical and somewhat demanding physical postcards was more polite (and funnier) than my usual habit of spamming people with hysterical and somewhat demanding emails.) While we (my posse) were aghast at Yudkowsky's behavior, she was aghast at ours: reaching out to try to have a conversation with Yudkowsky and then concluding he was a fraud because we weren't satisfied with the outcome was like hiding soldiers in an ambulance, introducing a threat against Yudkowsky in a context where he had a right to be safe.
+
+I complained that I had _actually believed_ our own marketing material about the "rationalists" remaking the world by wielding a hidden Bayesian structure of Science and Reason that applies [outside the laboratory](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory). Was that all a lie? Were we not trying to do the thing anymore? Anna was dismissive: she thought that the idea I had gotten about what "the thing" was, was never actually part of the original vision. She kept repeating that she had _tried_ to warn me in previous years that public reason didn't work, and I didn't listen. (Back in the late 'aughts, she had often recommended Paul Graham's essay ["What You Can't Say"](http://paulgraham.com/say.html) to people, summarizing Graham's moral that you should figure out the things you can't say in your culture, and then don't say them.)
+
+It was true that she had tried to warn me for years, and (not yet having gotten over [my teenage ideological fever dream](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism)), I hadn't known how to listen. But this seemed really fundamentally unresponsive to how _I_ kept repeating that I only expected consensus on the basic philosophy-of-language stuff (not my object-level special interest). Why was it so unrealistic to imagine that the actually-smart people could [enforce standards](https://srconstantin.github.io/2018/12/24/contrite-strategies-and-the-need-for-standards/) in our own tiny little bubble of the world?
+
+My frustration bubbled out into follow-up emails:
+
+> To: Anna Salamon <[redacted]>
+> Date: 7 May 2019 12:53 _p.m._
+> Subject: Re: works cited
+>
+> I'm also still pretty _angry_ about how your response to my "I believed our own propaganda" complaint is (my possibly-unfair paraphrase) "what you call 'propaganda' was all in your head; we were never _actually_ going to do the unrestricted truthseeking thing when it was politically inconvenient." But ... no! **I _didn't_ just make up the propaganda! The hyperlinks still work! I didn't imagine them! They were real! You can still click on them:** ["A Sense That More Is Possible"](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible), ["Raising the Sanity Waterline"](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline)
+>
+> Can you please _acknowledge that I didn't just make this up?_ Happy to pay you $200 for a reply to this email within the next 72 hours
+
+<p></p>
+
+> To: Anna Salamon <[redacted]>
+> Date: 7 May 2019 3:35 _p.m._
+> Subject: Re: works cited
+>
+> Or see ["A Fable of Science and Politics"](https://www.lesswrong.com/posts/6hfGNLf4Hg5DXqJCF/a-fable-of-science-and-politics), where the editorial tone is pretty clear that we're supposed to be like Daria or Ferris, not Charles.
+
+(This being a parable about an underground Society polarized into factions with different beliefs about the color of the unseen sky, and how different types of people react to the discovery of a passageway to the overworld which reveals that the sky is blue. Daria (formerly of the Green faction) steels herself to accept the unpleasant truth. Ferris reacts with delighted curiosity. Charles, thinking only of preserving the existing social order and unconcerned with what the naïve would call "facts", _blocks off the passageway_.)
+
+> To: Anna Salamon <[redacted]>
+> Date: 7 May 2019 8:26 _p.m._
+> Subject: Re: works cited
+>
+> But, it's kind of bad that I'm thirty-one years old and haven't figured out how to be less emotionally needy/demanding; feeling a little bit less frame-locked now; let's talk in a few months (but offer in email-before-last is still open because rescinding it would be dishonorable)
+
+Anna said she didn't want to receive monetary offers from me anymore; previously, she had regarded my custom of throwing money at people to get what I wanted as good-faith libertarianism between consenting adults, but now she was afraid that if she accepted, it would be portrayed in some future Ben Hoffman essay as an instance of her _using_ me. She agreed that someone could have gotten the ideals I had gotten out of "A Sense That More Is Possible", "Raising the Sanity Waterline", _&c._, but there was also evidence from that time pointing the other way (_e.g._, ["Politics Is the Mind-Killer"](https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer)), so it shouldn't be surprising if people steered clear of controversy.
+
+I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that _whether or not I should cut my dick off_ would _become_ a political issue. That was _new evidence_ about whether the original vision was wise! I wasn't trying to do politics with my idiosyncratic special interest; I was trying to _think seriously_ about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit a trivial epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, but couldn't correct the mistake, then the 2019-era "rationalists" were _worse than useless_ to me personally. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I was part of a reference class that includes AI researchers).
+
+Fundamentally, I was skeptical that you _could_ do consistently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky described in ["Entangled Truths, Contagious Lies"](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) and ["Dark Side Epistemology"](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology): the need to lie about lying and cover up cover-ups propagates recursively. Anna in particular was unusually skillful at thinking things without saying them; I thought most people facing similar speech restrictions just get worse at thinking (plausibly[^plausibly] including Yudkowsky), and the problem gets worse as the group effort scales. (It's easier to recommend ["What You Can't Say"](http://www.paulgraham.com/say.html) to your housemates than to put it on a canonical reading list, for obvious reasons.) You _can't_ optimize your group's culture for not-talking-about-atheism without also optimizing against understanding [Occam's razor](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor); you _can't_ optimize for not questioning gender self-identity without also optimizing against understanding the [37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).
+
+[^plausibly]: Today I would say _obviously_, but at this point, I was still deep enough in my hero-worship that I wrote "plausibly".
+
+Despite Math and Wellness Month and my "intent" to take a break from the religious civil war, I kept reading _Less Wrong_ during May 2019, and ended up scoring a couple of victories in the civil war (at some cost to Wellness).
+
+MIRI researcher Scott Garrabrant wrote a post about how ["Yes Requires the Possibility of No"](https://www.lesswrong.com/posts/G5TwJ9BGxcgh5DsmQ/yes-requires-the-possibility-of-no). Information-theoretically, a signal sent with probability one transmits no information: you can only learn something from hearing a "Yes" if there was some chance that the answer could have been "No". I saw an analogy to my philosophy-of-language thesis, and commented about it: if you want to believe that _x_ belongs to category _C_, you might try redefining _C_ in order to make the question "Is _x_ a _C_?" come out "Yes", but you can only do so at the expense of making _C_ less useful.
+
+[TODO: explain scuffle on "Yes Requires the Possibility"—
+
+ * Vanessa comment on hobbyhorses and feeling attacked
+ * my reply about philosophy got politicized, and MDL/atheism analogy
+ * Ben vs. Said on political speech and meta-attacks; Goldenberg on feelings
+ * 139-comment trainwreck got so bad, the mods manually moved the comments into their own thread https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019
+ * based on the karma scores and what was said, this went pretty well for me and I count it as a victory
+
+]
+
+On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to "... Not Man for the Categories" as one of Scott Alexander's best essays. I argued that it would be better to cite _almost literally_ any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: _either_ Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, _or_ one had no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way. The mod [was persuaded on reflection](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for?commentId=oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory."
+
+[TODO:
+"victories" weren't comforting when I resented this becoming a political slapfight at all—a lot of the objections in the Vanessa thread were utterly insane
+I wrote to Anna and Steven Kaas (who I was trying to "recruit" onto our side of the civil war) ]
+
+In "What You Can't Say", Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what _is_ one's real work. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except for this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason, should probably be doing _more_ fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improving our collective ability to reason", in a way that they don't hurt the mission of "make a lot of money writing software."
+
+[TODO: I don't know if you caught the shitshow on Less Wrong, but isn't it terrifying that the person who objected was a goddamned _MIRI research associate_ ... not to demonize Vanessa because I was just as bad (if not worse) in 2008 (/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard#hair-trigger-antisexism), but in 2008 we had a culture that could _beat it out of me_]
+
+[TODO: Steven's objection:
+> the Earth's gravitational field directly hurts NASA's mission and doesn't hurt Paul Graham's mission, but NASA shouldn't spend any more effort on reducing the Earth's gravitational field than Paul Graham.
+
+I agreed that tractability needs to be addressed, but ...
+]
+
+I felt like—we were in a coal mine, and my favorite one of our canaries just died, and I was freaking out about this, and representatives of the Caliphate (Yudkowsky, Alexander, Anna, Steven) were like, Sorry, I know you were really attached to that canary, but it's just a bird; you'll get over it; it's not really that important to the coal-mining mission.
+
+And I was like, I agree that I was unreasonably emotionally attached to that particular bird, which is the direct cause of why I-in-particular am freaking out, but that's not why I expect _you_ to care. The problem is not the dead bird; the problem is what the bird is _evidence_ of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claim to have spotted their own dead canaries. I feel like the old-timer Rationality Elders should be able to get on the same page about the canary-count issue?
+
+Math and Wellness Month ended up being mostly a failure: the only math I learned was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/), and [some probability/information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [actually turned out to be super-relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break.
+
+[TODO:
+ * I had posted a linkpost to "No, it's not The Incentives—it's You", which generated a lot of discussion, and Jessica (17 June) identified Ray's comments as the last straw.
+
+> LessWrong.com is a place where, if the value of truth conflicts with the value of protecting elites' feelings and covering their asses, the second value will win.
+>
+> Trying to get LessWrong.com to adopt high-integrity norms is going to fail, hard, without a _lot_ of conflict. (Enforcing high-integrity norms is like violence; if it doesn't work, you're not doing enough of it).
+
+ * posting on Less Wrong was harm-reduction; the only way to get people to stick up for truth would be to convert them to _a whole new worldview_; Jessica proposed the idea of a new discussion forum
+ * Ben thought that trying to discuss with the other mods would be a good intermediate step, after we clarified to ourselves what was going on; talking to other mods might be "good practice in the same way that the Eliezer initiative was good practice"; Ben is less optimistic about harm reduction; "Drowning Children Are Rare" was barely net-upvoted, and participating was endorsing the karma and curation systems
+ * David Xu's comment on "The Incentives" seems important?
+ * secret posse member: Ray's attitude on "Is being good costly?"
+ * Jessica: scorched-earth campaign should mostly be in meatspace social reality
+ * my comment on emotive conjugation (https://www.lesswrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare#GaoyhEbzPJvv6sfZX)
+
+> I'm also not sure if I'm sufficiently clued in to what Ben and Jessica are modeling as Blight, a coherent problem, as opposed to two or six individual incidents that seem really egregious in a vaguely similar way that seems like it would have been less likely in 2009??
+
+ * _Atlas Shrugged_ Bill Brent vs. Dave Mitchum scene
+ * Vassar: "Literally nothing Ben is doing is as aggressive as the basic 101 pitch for EA."
+ * Ben: we should be creating clarity about "position X is not a strawman within the group", rather than trying to scapegoat individuals
+ * my scuffle with Ruby on "Causal vs. Social Reality"
+ * it gets worse: https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality#NbrPdyBFPi4hj5zQW
+ * Ben's comment: "Wow, he's really overtly arguing that people should lie to him to protect his feelings."
+ * Jessica: "tone arguments are always about privileged people protecting their feelings, and are thus in bad faith. Therefore, engaging with a tone argument as if it's in good faith is a fool's game, like playing chess with a pigeon. Either don't engage, or seek to embarrass them intentionally."
+ * there's no point at being mad at MOPs
+ * me (1 Jul): I'm a _little bit_ mad, because I specialize in cognitive and discourse strategies that are _extremely susceptible_ to being trolled like this
+ * "collaborative truth seeking" but (as Michael pointed out) politeness looks nothing like Aumann agreement
+ * 2 Jul: Jessica is surprised by how well "Self-consciousness wants to make everything about itself" worked; theory about people not wanting to be held to standards that others aren't being held to
+ * Michael: Jessica's example made it clear she was on the side of social justice
+ * secret posse member: level of social-justice talk makes me not want to interact with this post in any way
+]
+
+[TODO: https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/]
+
+[TODO: "AI Timelines Scam"
+ * I still sympathize with the "mainstream" pushback against the scam/fraud/&c. language being used to include Elephant-in-the-Brain-like distortions
+ * Ben: "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you? Investigations of financial fraud don't inquire as to the conscious motives of the perp."
+ * 11 Jul: I think the law does count _mens rea_ as a thing: we do discriminate between vehicular manslaughter and first-degree murder, because traffic accidents are less disincentivizable than offing one's enemies
+ * call with Michael about GiveWell vs. the Pope
+]
+
+[TODO: secret thread with Ruby; "uh, guys??" to Steven and Anna; people say "Yes, of course criticism and truthseeking is important; I just think that tact is important, too," only to go on and dismiss any _particular_ criticism as insufficiently tactful.]
+
+[TODO: "progress towards discussing the real thing"
+ * Jessica acks Ray's point of "why are you using court language if you don't intend to blame/punish"
+ * Michael 20 Jul: court language is our way of saying non-engagement isn't an option
+ * Michael: we need to get better at using SJW blamey language
+ * secret posse member: that's you-have-become-the-abyss terrifying suggestion
+ * Ben thinks SJW blame is obviously good
+]
+
+[TODO: epistemic defense meeting;
+ * I ended up crying at one point and left the room for while
+ * Jessica's summary: "Zack was a helpful emotionally expressive and articulate victim. It seemed like there was consensus that "yeah, it would be better if people like Zack could be warned somehow that LW isn't doing the general sanity-maximization thing anymore"."
+ * Vaniver admitting LW is more of a recruiting funnel for MIRI
+ * I needed to exhaust all possible avenues of appeal before it became real to me; the first morning where "rationalists ... them" felt more natural than "rationalists ... us"
+]
+
+[TODO: Michael Vassar and the theory of optimal gossip; make sure to include the part about Michael threatening to sue]
+
+[TODO: State of Steven]
+
+I still wanted to finish the memoir-post mourning the "rationalists", but I still felt psychologically constrained; I was still bound by internal silencing-chains. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise the "rationalists" collectively, and—more philosophy-of-language blogging!