+> Trying to get LessWrong.com to adopt high-integrity norms is going to fail, hard, without a _lot_ of conflict. (Enforcing high-integrity norms is like violence; if it doesn't work, you're not doing enough of it).
+
+ * posting on Less Wrong was harm-reduction; the only way to get people to stick up for truth would be to convert them to _a whole new worldview_; Jessica proposed the idea of a new discussion forum
+ * Ben thought that trying to discuss with the other mods would be a good intermediate step, after we clarified to ourselves what was going on; talking to other mods might be "good practice in the same way that the Eliezer initiative was good practice"; Ben is less optimistic about harm reduction; "Drowning Children Are Rare" was barely net-upvoted, and participating was endorsing the karma and curation systems
+ * David Xu's comment on "The Incentives" seems important?
+ * secret posse member: Ray's attitude on "Is being good costly?"
 * Jessica: scorched-earth campaign should mostly be in meatspace social reality
+ * my comment on emotive conjugation (https://www.lesswrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare#GaoyhEbzPJvv6sfZX)
+
+> I'm also not sure if I'm sufficiently clued in to what Ben and Jessica are modeling as Blight, a coherent problem, as opposed to two or six individual incidents that seem really egregious in a vaguely similar way that seems like it would have been less likely in 2009??
+
+ * Vassar: "Literally nothing Ben is doing is as aggressive as the basic 101 pitch for EA."
+ * Ben: we should be creating clarity about "position X is not a strawman within the group", rather than trying to scapegoat individuals
+ * my scuffle with Ruby on "Causal vs. Social Reality" (my previous interaction with Ruby had been on the LW FAQ; maybe he couldn't let me "win" again so quickly?)
+ * it gets worse: https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality#NbrPdyBFPi4hj5zQW
+ * Ben's comment: "Wow, he's really overtly arguing that people should lie to him to protect his feelings."
+ * Jessica: "tone arguments are always about privileged people protecting their feelings, and are thus in bad faith. Therefore, engaging with a tone argument as if it's in good faith is a fool's game, like playing chess with a pigeon. Either don't engage, or seek to embarrass them intentionally."
 * there's no point in being mad at MOPs
+ * me (1 Jul): I'm a _little bit_ mad, because I specialize in cognitive and discourse strategies that are _extremely susceptible_ to being trolled like this
+ * me to "Wilhelm" 1 Jul: "I'd rather not get into fights on LW, but at least I'm 2-0-1"
+ * "collaborative truth seeking" but (as Michael pointed out) politeness looks nothing like Aumann agreement
+ * 2 Jul: Jessica is surprised by how well "Self-consciousness wants to make everything about itself" worked; theory about people not wanting to be held to standards that others aren't being held to
+ * Michael: Jessica's example made it clear she was on the side of social justice
+ * secret posse member: level of social-justice talk makes me not want to interact with this post in any way
+]
+
+[TODO: https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/]
+
+[TODO: "AI Timelines Scam"
+ * I still sympathize with the "mainstream" pushback against the scam/fraud/&c. language being used to include Elephant-in-the-Brain-like distortions
+ * Ben: "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you? Investigations of financial fraud don't inquire as to the conscious motives of the perp."
+ * 11 Jul: I think the law does count _mens rea_ as a thing: we do discriminate between vehicular manslaughter and first-degree murder, because traffic accidents are less disincentivizable than offing one's enemies
+ * call with Michael about GiveWell vs. the Pope
+]
+
+[TODO: secret thread with Ruby; "uh, guys??" to Steven and Anna; people say "Yes, of course criticism and truthseeking is important; I just think that tact is important, too," only to go on and dismiss any _particular_ criticism as insufficiently tactful.]
+
+[TODO: "progress towards discussing the real thing"
+ * Jessica acks Ray's point of "why are you using court language if you don't intend to blame/punish"
+ * Michael 20 Jul: court language is our way of saying non-engagement isn't an option
+ * Michael: we need to get better at using SJW blamey language
 * secret posse member: that's a you-have-become-the-abyss level of terrifying suggestion
+ * Ben thinks SJW blame is obviously good
+]
+
+[TODO: epistemic defense meeting;
 * I ended up crying at one point and left the room for a while
+ * Jessica's summary: "Zack was a helpful emotionally expressive and articulate victim. It seemed like there was consensus that "yeah, it would be better if people like Zack could be warned somehow that LW isn't doing the general sanity-maximization thing anymore"."
+ * Vaniver admitting LW is more of a recruiting funnel for MIRI
+ * I needed to exhaust all possible avenues of appeal before it became real to me; the first morning where "rationalists ... them" felt more natural than "rationalists ... us"
+]
+
+[TODO: Michael Vassar and the theory of optimal gossip; make sure to include the part about Michael threatening to sue]
+
+[TODO: State of Steven]
+
+I still wanted to finish the memoir-post mourning the "rationalists", but I still felt psychologically constrained; I was still bound by internal silencing-chains. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise the "rationalists" collectively, and—more philosophy-of-language blogging!
+
+In August 2019's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).)
+
+In September 2019's "Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists" [TODO: ... I was surprised by how well this did (high karma, later included in the best-of-2019 collection); Ben and Jessica had discouraged me from bothering]
+
+In October 2019's "Algorithms of Deception!", I explained [TODO: ...]
+
+Also in October 2019, in "Maybe Lying Doesn't Exist" [TODO: ... I was _furious_ at "Against Lie Inflation"—oh, so _now_ you agree that making language less useful is a problem?! But then I realized Scott actually was being consistent in his own frame: he's counting "everyone is angrier" (because of more frequent lying-accusations) as a cost; but, if everyone _is_ lying, maybe they should be angry!]
+
+------
+
+I continued to take note of signs of contemporary Yudkowsky visibly not being the same author who wrote the Sequences. In August 2019, [he Tweeted](https://twitter.com/ESYudkowsky/status/1164241431629721600):
+
+> I am actively hostile to neoreaction and the alt-right, routinely block such people from commenting on my Twitter feed, and make it clear that I do not welcome support from those quarters. Anyone insinuating otherwise is uninformed, or deceptive.
+
+[I pointed out that](https://twitter.com/zackmdavis/status/1164259164819845120) the people who smear him as a right-wing Bad Guy do so _in order to_ extract these kinds of statements of political alignment as concessions; his own timeless decision theory would seem to recommend ignoring them rather than paying even this small [Danegeld](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/).
+
+When I emailed the posse about it begging for Likes (Subject: "can't leave well enough alone"), Jessica said she didn't get my point. If people are falsely accusing you of something (in this case, of being a right-wing Bad Guy), isn't it helpful to point out that the accusation is actually false? It seemed like I was advocating for self-censorship on the grounds that speaking up helps the false accusers. But it also helps bystanders (by correcting the misapprehension), and hurts the false accusers (by demonstrating to bystanders that the accusers are making things up). By linking to ["Kolmogorov Complicity"](http://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) in my replies, I seemed to be insinuating that Yudkowsky was under some sort of duress, but this wasn't spelled out: if Yudkowsky would face social punishment for advancing right-wing opinions, did that mean he was under such duress that saying anything at all would be helping the oppressors?
+
+The paragraph from "Kolmogorov Complicity" that I was thinking of was (bolding mine):
+
+> Some other beliefs will be found to correlate heavily with lightning-heresy. Maybe atheists are more often lightning-heretics; maybe believers in global warming are too. The enemies of these groups will have a new cudgel to beat them with, "If you believers in global warming are so smart and scientific, how come so many of you believe in lightning, huh?" **Even the savvy Kolmogorovs within the global warming community will be forced to admit that their theory just seems to attract uniquely crappy people. It won't be very convincing.** Any position correlated with being truth-seeking and intelligent will be always on the retreat, having to forever apologize that so many members of their movement screw up the lightning question so badly.
+
+I perceived a pattern where people who are in trouble with the orthodoxy feel an incentive to buy their own safety by denouncing _other_ heretics: not just disagreeing with the other heretics _because those other heresies are in fact mistaken_, which would be right and proper Discourse, but denouncing them ("actively hostile to") as a way of paying Danegeld.
+
+Suppose there are five true heresies, but anyone who's on the record believing more than one gets burned as a witch. Then it's impossible to have a unified rationalist community, because people who want to talk about one heresy can't let themselves be seen in the company of people who believe another. That's why Scott Alexander couldn't get the philosophy-of-categorization right in full generality (even though he'd [written](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world) [exhaustively](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/) about the correct way, and he and I have a common enemy in the social-justice egregore): _he couldn't afford to_. He'd already [spent his Overton budget on anti-feminism](https://slatestarcodex.com/2015/01/01/untitled/).
+
+Scott (and Yudkowsky and Anna and the rest of the Caliphate) seemed to accept this as an inevitable background fact of existence, like the weather. But I saw a Schelling point off in the distance where us witches stick together for Free Speech, and it was _awfully_ tempting to try to jump there. (Of course, it would be _better_ if there was a way to organize just the good witches, and exclude all the Actually Bad witches, but the [Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) on witch Badness made that hard to organize without falling back to the one-heresy-per-thinker equilibrium.)
+
+Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" _facts_.) I agreed that conflating political positions with facts would be bad (and that it would be bad if I were doing that without "intending" to). I wasn't interested in defending the "alt-right" (whatever that means) broadly. But I had _learned stuff_ from reading far-right authors (most notably Moldbug), and from talking with my very smart neoreactionary (and former _Less Wrong_-er) friend. I was starting to appreciate [what Michael had said about "Less precise is more violent" back in April](#less-precise-is-more-violent) (when I was talking about criticizing "rationalists").
+
+Jessica asked if my opinion would change depending on whether Yudkowsky thought neoreaction was intellectually worth engaging with. (Yudkowsky [had said years ago](https://www.lesswrong.com/posts/6qPextf9KyWLFJ53j/why-is-mencius-moldbug-so-popular-on-less-wrong-answer-he-s?commentId=TcLhiMk8BTp4vN3Zs) that Moldbug was low quality.)
+
+I did believe that Yudkowsky believed that neoreaction was not worth engaging with. I would never fault anyone for saying "I vehemently disagree with what little I've read and/or heard of this-and-such author." I wasn't accusing Yudkowsky of being insincere.
+
+What I _did_ think was that the need to keep up appearances of not-being-a-right-wing-Bad-Guy was a pretty serious distortion on people's beliefs, because there are at least a few questions-of-fact where believing the correct answer can, in today's political environment, be used to paint one as a right-wing Bad Guy. I would have hoped for Yudkowsky to _notice that this is a rationality problem_, and to _not actively make the problem worse_, and I was counting "I do not welcome support from those quarters" as making the problem worse insofar as it would seem to imply that the extent to which I think I've learned valuable things from Moldbug made me less welcome in Yudkowsky's fiefdom.
+
+Yudkowsky certainly wouldn't endorse "Even learning things from these people makes you unwelcome" _as stated_, but "I do not welcome support from those quarters" still seemed like a _pointlessly_ partisan silencing/shunning attempt, when one could just as easily say, "I'm not a neoreactionary, and if some people who read me are, that's _obviously not my fault_."
+
+Jessica asked if Yudkowsky denouncing neoreaction and the alt-right would still seem harmful, if he were _also_ to acknowledge, _e.g._, racial IQ differences?
+
+I agreed that it would be helpful, but realistically, I didn't see why Yudkowsky should want to poke the race-differences hornet's nest. This was the tragedy of recursive silencing: if you can't afford to engage with heterodox ideas, you either become an [evidence-filtering clever arguer](https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence), or you're not allowed to talk about anything except math. (Not even the relationship between math and human natural language, as we had found out recently.)
+
+It was as if there was a "Say Everything" attractor, and a "Say Nothing" attractor, and _my_ incentives were pushing me towards the "Say Everything" attractor—but that was only because I had [Something to Protect](/2019/Jul/the-source-of-our-power/) in the forbidden zone and I was a good programmer (who could therefore expect to be employable somewhere, just as [James Damore eventually found another job](https://twitter.com/JamesADamore/status/1034623633174478849)). Anyone in less extreme circumstances would find themselves being pushed to the "Say Nothing" attractor.
+
+It was instructive to compare this new disavowal of neoreaction with one from 2013 (quoted by [Moldbug](https://www.unqualified-reservations.org/2013/11/mr-jones-is-rather-concerned/) and [others](https://medium.com/@2045singularity/white-supremacist-futurism-81be3fa7020d)[^linkrot]), in response to a _TechCrunch_ article citing former MIRI employee Michael Anissimov's neoreactionary blog _More Right_:
+
+[^linkrot]: The original _TechCrunch_ comment would seem to have succumbed to [linkrot](https://www.gwern.net/Archiving-URLs#link-rot).
+
+> "More Right" is not any kind of acknowledged offspring of Less Wrong nor is it so much as linked to by the Less Wrong site. We are not part of a neoreactionary conspiracy. We are and have been explicitly pro-Enlightenment, as such, under that name. Should it be the case that any neoreactionary is citing me as a supporter of their ideas, I was never asked and never gave my consent. [...]