+Another new cite: https://applieddivinitystudies.com/2020/09/05/rationality-winning/
+
+Another new cite: https://www.reddit.com/r/slatestarcodex/comments/kdxbyd/this_blog_is_incredible/gg04f8c/ "My personal favorites among these are [ ], [... Not Man for the Categories], 10, and 2 in that order."
+
+Another new cite: https://twitter.com/rbaron321/status/1361841879445364739
+
+31 December
+"SSC also helped me understand trans issues" https://www.reddit.com/r/SneerClub/comments/kng0q4/mixed_feelings_on_scott_alexander/
+
+Still citing it (22 Mar 21): https://twitter.com/Cererean/status/1374130529667268609
+
+Still citing it (2 May 21)!!: https://eukaryotewritesblog.com/2021/05/02/theres-no-such-thing-as-a-tree/
+
+Still citing it (20 October 21): https://www.reddit.com/r/TheMotte/comments/qagtqk/culture_war_roundup_for_the_week_of_october_18/hhdiyd1/
+
+Still citing it (21 October 21): https://www.reddit.com/r/slatestarcodex/comments/qcrhc4/can_someone_provide_an_overview_ofintroduction_to/hhkf6kk/
+
+Still citing it (15 July 21) in a way that suggests it's ratsphere canon: https://twitter.com/NLRG_/status/1415754203293757445
+
+Still citing it (14 November 21): https://twitter.com/captain_mrs/status/1459846336845697028
+
+Still citing it (December 21 podcast): https://www.thebayesianconspiracy.com/2021/12/152-frame-control-with-aella/
+
+Still citing it (2 February 22): https://astralcodexten.substack.com/p/why-do-i-suck/comment/4838964
+
+Still citing it (22 March 22): https://twitter.com/postpostpostr/status/1506480317351272450
+
+Still citing it (25 March 22): https://www.reddit.com/r/TheMotte/comments/tj525b/culture_war_roundup_for_the_week_of_march_21_2022/i22z367/
+
+Still citing it (13 May 22): https://forum.effectivealtruism.org/posts/FkFTXKeFxwcGiBTwk/against-longtermist-as-an-identity
+
+Still citing it, in Eliezerfic Discord (18 Jul 22): https://discord.com/channels/936151692041400361/954750671280807968/998638253588631613
+
+Still citing it (31 Jul 22): https://www.reddit.com/r/slatestarcodex/comments/wbqtg3/rationality_irl/
+
+Still citing it (19 Sep 22): https://twitter.com/ingalala/status/1568391691064729603
+
+https://arbital.greaterwrong.com/p/logical_dt/?l=5gc
+It even leaked into Big Yud!!! "Counterfactuals were made for humanity, not humanity for counterfactuals."
+
+------
+
+If you _have_ intent-to-inform and occasionally end up using your megaphone to say false things (out of sloppiness or motivated reasoning in the passion of the moment), it's actually not that big of a deal, as long as you're willing to acknowledge corrections. (It helps if you have critics who personally hate your guts and therefore have a motive to catch you making errors, and a discerning audience who will only reward the critics for finding real errors and not fake errors.) In the long run, the errors cancel out.
+
+If you _don't_ have intent-to-inform, but make sure to never, ever say false things (because you know that "lying" is wrong, and think that as long as you haven't "lied", you're in the clear), but you don't feel like you have an obligation to acknowledge criticisms (for example, because you think you and your flunkies are the only real people in the world, and anyone who doesn't want to become one of your flunkies can be disdained as a "post-rat"), that's potentially a much worse situation, because the errors don't cancel.
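+
+(A toy way to see the asymmetry, a sketch of my own rather than anything from the original argument: model a reporter's outputs as the truth plus an error term. Zero-mean errors average away in the long run, but errors that only ever point in one flattering direction leave a bias that no amount of extra data removes. All the specific numbers here are arbitrary.)
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+truth = 10.0
+n = 100_000
+
+# Intent-to-inform but sloppy: errors are symmetric around the truth.
+sloppy = truth + rng.normal(0.0, 2.0, size=n)
+
+# Never technically false, but unaccountable: errs only in the
+# flattering direction (modeled as never reporting below the truth).
+selective = truth + np.abs(rng.normal(0.0, 2.0, size=n))
+
+print(sloppy.mean())     # ~10.0: zero-mean errors cancel out
+print(selective.mean())  # ~11.6: one-directional errors never do
+```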
+
+----
+
+
+
+bitter comments about rationalists—
+https://www.greaterwrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review-posts-need-at-least-2-nominations/comment/d4RrEizzH85BdCPhE
+
+(If you are silent about your pain, _they'll kill you and say you enjoyed it_.)
+
+------
+
+Yudkowsky's hyper-arrogance—
+> I aspire to make sure my departures from perfection aren't noticeable to others, so this tweet is very validating.
+https://twitter.com/ESYudkowsky/status/1384671335146692608
+
+* papal infallibility / Eliezer Yudkowsky facts
+https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts?commentId=Aq9eWJmK6Liivn8ND
+Never go in against Eliezer Yudkowsky when anything is on the line.
+https://en.wikipedia.org/wiki/Chuck_Norris_facts
+
+"epistemic hero"
+https://twitter.com/ESYudkowsky/status/1096769579362115584
+
+https://twitter.com/ESYudkowsky/status/1434906470248636419
+> Anyways, Scott, this is just the usual division of labor in our caliphate: we're both always right, but you cater to the crowd that wants to hear it from somebody too modest to admit that, and I cater to the crowd that wants somebody out of that closet.
+
+Okay, I get that it was meant as humorous exaggeration. But I think it still has the effect of discouraging people from criticizing Scott or Eliezer because they're the leaders of the caliphate. I spent three and a half years of my life explaining in exhaustive, exhaustive detail, with math, how Scott was wrong about something (no one serious actually disagrees), and Eliezer is still using his social power to boost Scott's right-about-everything (!!) reputation. That seems really unfair, in a way that isn't dulled by "it was just a joke."
+
+Or as Yudkowsky put it—
+
+https://www.facebook.com/yudkowsky/posts/10154981483669228
+> I know that it's a bad sign to worry about which jokes other people find funny. But you can laugh at jokes about Jews arguing with each other, and laugh at jokes about Jews secretly being in charge of the world, and not laugh at jokes about Jews cheating their customers. Jokes do reveal conceptual links and some conceptual links are more problematic than others.
+
+It's totally understandable to not want to get involved in a political scuffle because xrisk reduction is astronomically more important! But I don't see any plausible case that metaphorically sucking Scott's dick in public reduces xrisk. It would be so easy to just not engage in this kind of cartel behavior!
+
+An analogy: racist jokes are also just jokes. Alice says, "What's the difference between a black dad and a boomerang? A boomerang comes back." Bob says, "That's super racist! Tons of African-American fathers are devoted parents!!" Alice says, "Chill out, it was just a joke." In a way, Alice is right. It was just a joke; no sane person could think that Alice was literally claiming that all black men are deadbeat dads. But, the joke only makes sense in the first place in context of a culture where the black-father-abandonment stereotype is operative. If you thought the stereotype was false, or if you were worried about it being a self-fulfilling prophecy, you would find it tempting to be a humorless scold and get angry at the joke-teller.
+
+Similarly, the "Caliphate" humor only makes sense in the first place in the context of a celebrity culture where deferring to Scott and Eliezer is expected behavior. (In a way that deferring to Julia Galef or John S. Wentworth is not expected behavior, even if Galef and Wentworth also have a track record as good thinkers.) I think this culture is bad. _Nullius in verba_.
+
+ [TODO: asking Anna to weigh in] (I figured that spamming people with hysterical and somewhat demanding physical postcards was more polite (and funnier) than my recent habit of spamming people with hysterical and somewhat demanding emails.)
+
+https://trevorklee.substack.com/p/the-ftx-future-fund-needs-to-slow
+> changing EA to being a social movement from being one where you expect to give money
+
+when I talked to the Kaiser psychiatrist in January 2021, he said that the drugs that they gave me in 2017 were Zyprexa 5mg and Trazodone 50mg, which actually seems a lot more reasonable in retrospect (Trazodone is on Scott's insomnia list), but it was a lot scarier in the context of not trusting the authorities
+
+I didn't have a simple, [mistake-theoretic](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) characterization of the language and social conventions that everyone should use such that anyone who defected from the compromise would be wrong. The best I could do was try to objectively predict the consequences of different possible conventions—and of _conflicts_ over possible conventions.
+
+http://archive.is/SXmol
+> "don't lie to someone if you wouldn't slash their tires" is actually a paraphrase of Steven Kaas.
+> ... ugh, I forgot that that was from the same Black Belt Bayesian post where one of the examples of bad behavior is me, that time I aggro'd against Phil Goetz to the point where Michael threatened to get me banned. I was young and grew up in the feminist blogosphere, but as I remarked to Zvi recently, in 2008, we had a way to correct that. (Getting slapped down by Michael's ostracism threat was really painful for me at the time, but in retrospect, it needed to be done.) In the current year, we don't.
+
+
+_Less Wrong_ had recently been rebooted with a new codebase and a new dev/admin team. New-_Less Wrong_ had a system for posts to be "Curated". Begging Yudkowsky and Anna to legitimize "... Boundaries?" with a comment hadn't worked, but maybe the mods would. (They did end up curating [a post about mole rats](https://www.lesswrong.com/posts/fDKZZtTMTcGqvHnXd/naked-mole-rats-a-case-study-in-biological-weirdness).)
+
+
+
+
+Yudkowsky did [quote-Tweet Colin Wright on the univariate fallacy](https://twitter.com/ESYudkowsky/status/1124757043997372416)
+
+(which I got to [cite in a _Less Wrong_ post](https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy))
+
+
+"Univariate fallacy" also a concession
+(which I cited in "Schelling Categories")
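+
+(Reference note on the fallacy itself, with a minimal sketch of my own, not Wright's or Yudkowsky's math: checking whether two groups differ one variable at a time can massively understate how separable they are in the joint distribution, because small per-dimension differences add up.)
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+k, n = 20, 5000  # 20 weakly-informative dimensions, 5000 samples per group
+
+a = rng.normal(0.0, 1.0, size=(n, k))
+b = rng.normal(1.0, 1.0, size=(n, k))  # each dimension shifted by one s.d.
+
+# Any single dimension: the optimal cut (0.5) misclassifies ~31% of cases.
+one_dim_err = (np.mean(a[:, 0] > 0.5) + np.mean(b[:, 0] < 0.5)) / 2
+
+# All dimensions jointly: the groups are almost perfectly separable.
+joint_err = (np.mean(a.sum(axis=1) > k / 2) + np.mean(b.sum(axis=1) < k / 2)) / 2
+
+print(one_dim_err)  # ~0.31
+print(joint_err)    # ~0.01
+```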
+
+
+
+"Yes Requires the Possibility of No" 19 May https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019
+
+scuffle on LessWrong FAQ 31 May
+
+"epistemic defense" meeting
+
+[TODO section on factional conflict:
+Michael on Anna as cult leader
+Jessica told me about her time at MIRI (link to Zoe-piggyback and Occupational Infohazards)
+24 Aug: I had told Anna about Michael's "enemy combatants" metaphor, and how I originally misunderstood
+me being regarded as Michael's pawn
+assortment of agendas
+mutualist pattern where Michael by himself isn't very useful for scholarship (he just says a lot of crazy-sounding things and refuses to explain them), but people like Sarah and me can write intelligible things that secretly benefit from much less legible conversations with Michael.
+]
+
+8 Jun: I think I subconsciously did an interesting political thing in appealing to my price for joining
+
+REACH panel
+
+(Subject: "Michael Vassar and the theory of optimal gossip")
+
+
+Since arguing at the object level had failed (["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), ["Reply on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/)), and arguing at the strictly meta level had failed (["... Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries)), the obvious thing to do next was to jump up to the meta-meta level and tell the story about why the "rationalists" were Dead To Me now, that [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) was not being met. (Just like Ben had suggested in December and in April.)
+
+I found it hard to make progress on. I felt—constrained. I didn't know how to tell the story without (as I perceived it) escalating personal conflicts or leaking info from private conversations. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise "the rationalists" collectively, and—more philosophy-of-language blogging!
+
+In August's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).)
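+
+(A toy model of the fit-for-coordination trade, mine and not anything in the post, with the coordination-bonus value as an arbitrary assumption: two agents each choose a category boundary, earn classification accuracy, and earn a bonus only if they choose the same boundary. A salient round-number boundary that strangers can independently guess can beat a better-fitting idiosyncratic one.)
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+x = np.concatenate([rng.normal(0.0, 1.0, 1000), rng.normal(2.6, 1.0, 1000)])
+label = np.repeat([0, 1], 1000)
+
+def accuracy(threshold):
+    # Fraction of cases the category boundary at `threshold` gets right.
+    return np.mean((x > threshold) == label)
+
+best_fit_cut = 1.3   # statistically optimal boundary (midpoint of the means)
+schelling_cut = 2.0  # salient round number strangers can independently guess
+
+fit_loss = accuracy(best_fit_cut) - accuracy(schelling_cut)
+coordination_bonus = 0.1  # assumed value of using the *same* boundary
+
+# The salient cut is the one agents can actually coordinate on, and the
+# bonus more than pays for the lost statistical fit.
+print(fit_loss, coordination_bonus > fit_loss)  # ~0.05 True
+```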
+
+[TODO— more blogging 2019
+
+"Algorithms of Deception!" Oct 2019
+
+"Maybe Lying Doesn't Exist" Oct 2019
+
+I was _furious_ at "Against Lie Inflation"—oh, so _now_ you agree that making language less useful is a problem?! But then I realized Scott actually was being consistent in his own frame: he's counting "everyone is angrier" (because of more frequent lying-accusations) as a cost; but, if everyone _is_ lying, maybe they should be angry!
+
+"Heads I Win" Sep 2019: I was surprised by how well this did (high karma, later included in the best-of-2019 collection); Ben and Jessica had discouraged me from bothering after I
+
+"Firming Up ..." Dec 2019: combatting Yudkowsky's not-technically-lying shenanigans
+
+]
+
+
+Scott said he liked "monastic rationalism _vs_. lay rationalism" as a frame for the schism Ben was proposing.
+
+(I wish I could use this line)
+I really really want to maintain my friendship with Anna despite the fact that we're de facto political enemies now. (And similarly with, e.g., Kelsey, who is like a sister-in-law to me (because she's Merlin Blume's third parent, and I'm Merlin's crazy racist uncle).)
+
+
+https://twitter.com/esyudkowsky/status/1164332124712738821
+> I unfortunately have had a policy for over a decade of not putting numbers on a few things, one of which is AGI timelines and one of which is *non-relative* doom probabilities. Among the reasons is that my estimates of those have been extremely unstable.