This seemed correlated with the recurring stalemated disagreement within our coordination group, where Michael/Ben/Jessica would say, "Fraud, if the word ever meant anything", and while I agreed that they were pointing to an important pattern of false representations optimized to move resources, I was still sympathetic to the Caliphate-defender's reply that this usage was [motte-and-baileying](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/) between different senses of _fraud_. (Most people would say that the things we were alleging MIRI and CfAR had done wrong were qualitatively different from the things Enron and Bernie Madoff had done wrong.) I wanted to do more work to formulate a more precise theory of the psychology of deception, to describe exactly how things were messed up in a way that wouldn't be susceptible to the motte-and-bailey charge.
Looking back four years later, I still feel that way—but my desire for nuance itself demands nuance.

[TODO— FTX and nuance epilogue—
On the one hand, I think I'm right to worry that our posse's discourse was prone to a "jump to evaluation" failure mode,
 * If Gloria does a crime and lies about it and you call her a fraud, people are going to correctly notice that your description failed to match reality; you're obscuring what's actually bad about it
 * my mentioning my CfAR donation to Said actually belongs here
]

On the other hand, I want to give the posse's worldview massive credit for seeing things that everyone else in "rationalist" Berkeley prefers not to see. Trying to describe the Blight to me in April 2019, Ben wrote, "People are systematically conflating corruption, accumulation of dominance, and theft, with getting things done." I imagine an ordinary EA grown-up looking at this text and shaking their head at how hyperbolically uncharitable Ben was being. Dominance, corruption, theft? Where was his evidence for these sweeping attacks on these smart, hard-working people trying to make the world a better place?

But look at [the implosion of the FTX cryptocurrency exchange](https://en.wikipedia.org/wiki/Bankruptcy_of_FTX). This was one of the largest financial frauds of our time, and it was made possible by EA. In _Going Infinite_, his book on FTX mastermind Sam Bankman-Fried, Michael Lewis describes Bankman-Fried's "access to a pool of willing effective altruists" as the secret weapon of FTX predecessor Alameda Research: Wall Street firms powered by ordinary greed would have trouble trusting employees with easily-stolen cryptocurrency, but ideologically-driven EAs could be trusted to be working for the cause. Lewis also describes Alameda employees seeking to prevent Bankman-Fried from deploying a trading bot with access to $170 million for fear of losing all that money "that might otherwise go to effective altruism".

[TODO—
 * as Zvi notes in his review of _Going Infinite_, https://thezvi.wordpress.com/2023/10/24/book-review-going-infinite/
 * tie into specific cites in Ben's EA-critical writing
]

------
On 12 and 13 November 2019, Ziz [published](https://archive.ph/GQOeg) [several](https://archive.ph/6HsvS) [blog](https://archive.ph/jChxP) [posts](https://archive.ph/TPei9) laying out [her](/2019/Oct/self-identity-is-a-schelling-point/) grievances against MIRI and CfAR. On the fifteenth, Ziz and three collaborators staged a protest at the CfAR reunion being held at a retreat center in the North Bay near Camp Meeker. A call to the police falsely alleged that the protesters had a gun, [resulting in a](http://web.archive.org/web/20230316210946/https://www.pressdemocrat.com/article/news/deputies-working-to-identify-suspects-in-camp-meeker-incident/) [dramatic police reaction](http://web.archive.org/web/20201112041007/https://www.pressdemocrat.com/article/news/authorities-id-four-arrested-in-westminster-woods-protest/) (SWAT team called, highway closure, children's group a mile away being evacuated—the works).
[^not-lying-title]: The ungainly title was "softened" from an earlier draft following feedback from the posse; I had originally written "... Surprisingly Useless".
I thought this one cut to the heart of the shocking behavior that we had seen from Yudkowsky lately. (Less shocking as the months rolled on, and I told myself to let the story end.) The "hill of meaning in defense of validity" affair had been driven by Yudkowsky's pathological obsession with not-technically-lying, on two levels: he had proclaimed that asking for new pronouns "Is. Not. Lying." (as if _that_ were the matter that anyone cared about—as if conservatives and gender-critical feminists should just pack up and go home after it had been demonstrated that trans people aren't _lying_), and he had shown no interest in clarifying his position on the philosophy of language, because he wasn't lying when he said that preferred pronouns weren't lies (as if _that_ were the matter that my posse cared about—as if I should keep honoring him as my caliph after it had been demonstrated that he hadn't lied). But his Sequences had [articulated a higher standard](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument) than merely not-lying. If he didn't remember, I could at least hope to remind everyone else.
I also wrote a little post, ["Free Speech and Triskaidekaphobic Calculators"](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to), arguing that it should be easier to have a rationality/alignment community that just does systematically correct reasoning than a politically-savvy community that does systematically correct reasoning except when that would taint AI safety with political drama, analogously to how it's easier to build a calculator that just does correct arithmetic than a calculator that does correct arithmetic except that it never displays the result 13. In order to build a "[triskaidekaphobic](https://en.wikipedia.org/wiki/Triskaidekaphobia) calculator", you would need to "solve arithmetic" anyway, and the resulting product would be limited not only in its ability to correctly compute `6 + 7`, but also in the infinite family of calculations that include 13 as an intermediate result: if you can't count on `(6 + 7) + 1` being the same as `6 + (7 + 1)`, you lose the associativity of addition.
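To make the analogy concrete, here is a minimal sketch in Python (my illustration here, not code from the linked post; the function name `triskaidekaphobic_add` is made up) of an adder that computes correctly except for refusing to emit 13, and of how that one taboo breaks associativity:

```python
def triskaidekaphobic_add(a, b):
    """Add two numbers correctly, except that the result 13 is taboo."""
    result = a + b
    if result == 13:
        raise ValueError("this calculator does not display 13")
    return result

# 6 + (7 + 1) never passes through 13, so it succeeds:
print(triskaidekaphobic_add(6, triskaidekaphobic_add(7, 1)))  # 14

# (6 + 7) + 1 computes the same total, but hits the taboo on the way:
try:
    print(triskaidekaphobic_add(triskaidekaphobic_add(6, 7), 1))
except ValueError as error:
    print(error)  # this calculator does not display 13
```

Whether the computation succeeds depends not on the final answer but on which intermediate results it happens to pass through; the taboo on a single value costs you the entire family of expressions that route through it.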