+
+https://twitter.com/ESYudkowsky/status/1680166329209466881
+> I am increasingly worried about what people start to believe in after they stop believing in Me
+
+https://twitter.com/ESYudkowsky/status/1683659276471115776
+> from a contradiction one may derive anything, and this is especially true of contradicting Eliezer Yudkowsky
+
+https://twitter.com/ESYudkowsky/status/1683694475644923904
+> I don't do that sort of ridiculous drama for the same reason that Truman didn't like it. I don't have that kind of need to be at the center of the story, that I'd try to make the disaster be about myself.
+
+
+
+------
+
+I'm thinking that for controversial writing, it's not enough to get your friends to pre-read, and it's not enough to hire a pro editor; you probably also need to hire a designated "hostile prereader"
+
+
+
+--------
+
+[revision comment]
+
+The fifth- through second-to-last paragraphs of the originally published version of this post were bad writing on my part.
+
+I was summarizing things Ben said at the time that felt like an important part of the story, without adequately
+
+I've rewritten that passage. Hopefully this version is clearer.
+
+---------
+
+[reply to Wei]
+
+
+----------
+
+Ben explained: Yudkowsky had set in motion a marketing machine (the "rationalist community") that was continuing to raise funds and demand work from people for below-market rates based on the claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he, almost uniquely, was not. "Work for me or the world ends badly," basically. If the claim was true, it was important to make, and to actually extract that labor.
+
+But we had just falsified to our satisfaction the claim that Yudkowsky was currently sane in the relevant way (which was an extremely high standard, and not a special flaw of Yudkowsky in the current environment). If, after we had _tried_ to talk to him privately, Yudkowsky couldn't be bothered to either live up to his own stated standards or withdraw his validation from the machine he built, then we had a right to talk about what we thought was going on.
+
+This wasn't about direct benefit _vs._ harm. This was about what, substantively, the machine and its operators were doing. They claimed to be cultivating an epistemically rational community, while in fact building an army of loyalists.
+
+Ben compared the whole set-up to that of Eliza the spambot therapist in my short story ["Blame Me for Trying"](/2018/Jan/blame-me-for-trying/): regardless of the initial intent, scrupulous rationalists were paying rent to something claiming moral authority, which had no concrete specific plan to do anything other than run out the clock, maintaining a facsimile of dialogue in ways well-calibrated to continue to generate revenue. Minds like mine wouldn't survive long-term in this ecosystem. If we wanted minds that do "naïve" inquiry (instead of playing savvy power games) to live, we needed an interior that justified that level of trust.
+
+-----
+
+I mostly kept him blocked on Twitter (except when doing research for this document) to curb the temptation to pick fights, but I unblocked him in July 2023 because it was only fair to let him namesearch my promotional Tweet of pt. 2, which named him. I then ended up replying to a thread with him and Perry Metzger, but only because I was providing relevant information, similar to how I had left a few "Less Wrong reference desk"-style messages in Eliezerfic in 2023.
+
+it got 16 Likes
+https://twitter.com/zackmdavis/status/1682100362357121025
+
+I miss this Yudkowsky—
+
+https://www.lesswrong.com/posts/cgrvvp9QzjiFuYwLi/high-status-and-stupidity-why
+> I try in general to avoid sending my brain signals which tell it that I am high-status, just in case that causes my brain to decide it is no longer necessary. In fact I try to avoid sending my brain signals which tell it that I have achieved acceptance in my tribe. When my brain begins thinking something that generates a sense of high status within the tribe, I stop thinking that thought.
+
+----
+
+In retrospect, I should have skipped the 2022 Valinor party so as not to run into him; as it was, I ended up treating him in a personality-cultish way when I was actually there
+
+"Ideology is not the movement" had specifically listed trans as a shibboleth
+
+https://twitter.com/RichardDawkins/status/1684947017502433281
+> Keir Starmer agrees that a woman is an adult human female. Will Ed Davey also rejoin the real world, science & the English language by reversing his view that a woman can "quite clearly" have a penis? Inability to face reality in small things bodes ill for more serious matters.
+
+Analysis of my writing mistake
+https://twitter.com/shroomwaview/status/1681742799052341249
+
+------
+
+I got my COVID-19 vaccine (the one-shot Johnson & Johnson) on 3 April 2021, so I was able to visit "Arcadia" again on 17 April, for the first time in fourteen months.
+
+I had previously dropped by in January to deliver two new board books I had made, _Koios Blume Is Preternaturally Photogenic_ and _Amelia Davis Ford and the Great Plague_, but that had been a socially-distanced book delivery, not a "visit".
+
+The copy of _Amelia Davis Ford and the Great Plague_ that I sent to my sister in Cambridge differed slightly from the one I brought to "Arcadia". There was an "Other books by the author" list on the back cover with the titles of my earlier board books. In the Cambridge edition of _Great Plague_, the previous titles were printed in full: _Merlin Blume and the Methods of Pre-Rationality_, _Merlin Blume and the Steerswoman's Oath_, _Merlin Blume and the Sibling Rivalry_. Whereas in _Preternaturally Photogenic_ and the "Arcadia" edition of _Great Plague_, the previous titles were abbreviated: _The Methods of Pre-Rationality_, _The Steerswoman's Oath_, _The Sibling Rivalry_.
+
+The visit on the seventeenth went fine. I hung out, talked, played with the kids. I had made a double-dog promise to be on my best no-politics-and-religion-at-the-dinner-table behavior.
+
+At dinner, there was a moment when Koios bit into a lemon and made a funny face, to which a bunch of the grown-ups said "Awww!" A few moments later, he went for the lemon again. Alicorn speculated that Koios had noticed that the grown-ups found it cute the first time, and the grown-ups were chastened. "Aww, baby, we love you even if you don't bite the lemon."
+
+It was very striking to me how, in the case of the baby biting a lemon, Alicorn _immediately_ formulated the hypothesis that what-the-grownups-thought-was-cute was affecting the baby's behavior, and everyone _immediately just got it_. I was tempted to say something caustic about how no one seemed to think a similar mechanism could have accounted for some of the older child's verbal behavior the previous year, but I kept silent; that was clearly outside the purview of my double-dog promise.
+
+There was another moment when Mike made a remark about how weekends are socially constructed. I had a lot of genuinely on-topic cached witty philosophy banter about [how the social construction of concepts works](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), that would have been completely innocuous if anyone _else_ had said it, but I kept silent because I wasn't sure if it was within my double-dog margin of error if _I_ said it.
+
+> even making a baby ML dude who's about to write a terrible paper hesitate for 10 seconds and _think of the reader's reaction_ seems like a disimprovement over status quo ante.
+https://discord.com/channels/401181628015050773/458329253595840522/1006685798227267736
+
+Also, the part where I said it amounted to giving up on intellectual honesty, and he put a check mark on it
+
+The third LW bookset is called "The Carving of Reality"? Did I have counterfactual influence on that (by making that part of the sequences more memetically salient, as opposed to the "categories are made for man" strain)?
+
+Yudkowsky on EA criticism contest
+https://forum.effectivealtruism.org/posts/HyHCkK3aDsfY95MoD/cea-ev-op-rp-should-engage-an-independent-investigator-to?commentId=kgHyydoX5jT5zKqqa
+
+Yudkowsky says "we" are not to blame for FTX, but wasn't early Alameda (the Japan bitcoin arbitrage) founded as an earn-to-give scheme, and recruited from EA?
+
+https://twitter.com/aditya_baradwaj/status/1694355639903080691
+> [SBF] wanted to build a machine—a growing sphere of influence that could break past the walls of that little office in Berkeley and wash over the world as a force for good. Not just a company, but a monument to effective altruism.
+
+Scott November 2020: "I think we eventually ended up on the same page"
+https://www.datasecretslox.com/index.php/topic,1553.msg38799.html#msg38799
+
+SK on never making a perfectly correct point
+https://www.lesswrong.com/posts/P3FQNvnW8Cz42QBuA/dialogue-on-appeals-to-consequences#Z8haBdrGiRQcGSXye
+
+Scott on puberty blockers, dreadful: https://astralcodexten.substack.com/p/highlights-from-the-comments-on-fetishes
+
+https://jdpressman.com/2023/08/28/agi-ruin-and-the-road-to-iconoclasm.html
+
+https://www.lesswrong.com/posts/BahoNzY2pzSeM2Dtk/beware-of-stephen-j-gould
+> there comes a point in self-deception where it becomes morally indistinguishable from lying. Consistently self-serving scientific "error", in the face of repeated correction and without informing others of the criticism, blends over into scientific fraud.
+
+https://time.com/collection/time100-ai/6309037/eliezer-yudkowsky/
+> "I expected to be a tiny voice shouting into the void, and people listened instead. So I doubled down on that."
+
+-----
+
+bullet notes for Tail analogy—
+ * My friend Tailcalled is better at science than me; in the hours that I've wasted with personal, political, and philosophical writing, he's actually been running surveys and digging into statistical methodology.
+ * As a result of his surveys, Tail was convinced of the two-type taxonomy, started /r/Blanchardianism, &c.
+ * Arguing with him resulted in my backing away from pure BBL ("Useful Approximation")
+ * Later, he became disillusioned with "Blanchardians" and went to war against them. I kept telling him he _is_ a "Blanchardian", insofar as he largely agrees with the main findings (about AGP as a major cause). He corresponded with Bailey and became frustrated with Bailey's rigidity. Blanchardians market themselves as disinterested truthseekers, but a lot of what they're actually doing is providing a counternarrative to social justice.
+ * There's an analogy between Tail's antipathy for Bailey and my antipathy for Yudkowsky: I still largely agree with "the rationalists", but the way Yudkowsky especially markets himself as a uniquely sane thinker rankles.
+
+Something he said made me feel spooked that he knew something about risks of future suffering that he wouldn't talk about, but in retrospect, I don't think that's what he meant.
+
+https://twitter.com/zackmdavis/status/1435856644076830721
+> The error in "Not Man for the Categories" is not subtle! After the issue had been brought to your attention, I think you should have been able to condemn it: "Scott's wrong; you can't redefine concepts in order to make people happy; that's retarded." It really is that simple! 4/6
+
+> It can also be naive to assume that all the damage that people consistently do is unintentional. For that matter, Sam by being "lol you mad" rather than "sorry" is continuing to do that damage. I'd have bought "sorry" rather a lot better, in terms of no ulterior motives.
+https://twitter.com/ESYudkowsky/status/1706861603029909508
+
+-------
+
+On 27 September 2023, Yudkowsky told Quentin Pope, "If I was given to your sort of attackiness, I'd now compose a giant LW post about how this blatant error demonstrates that nobody should trust you about anything else either." (https://twitter.com/ESYudkowsky/status/1707142828995031415) I felt like it was an OK use of bandwidth to point out that tracking reputations is sometimes useful (https://twitter.com/zackmdavis/status/1707183146335367243). My agenda here is the same as when I wrote "... on Epistemic Conduct for Author Criticism": I don't want Big Yud using his social power to delegitimize "attacks" in general, because I have an interest in attacking him. Later, he quote-Tweeted something and said,
+
+> People need to grow up reading a lot of case studies like this in order to pick of a well-calibrated instinctive sense of what ignorant criticism typically sounds like. A derisory tone is a very strong base cue, though not an invincible one.
+
+Was he subtweeting me?? (Because I was defending criticism against tone policing, and this is saying tone is a valid cue.) If it was a subtweet, I take that as vindication that my reply was a good use of bandwidth.
+
+-----
+
+In particular, I think the conspiracy theory "Yudkowsky sometimes avoids nuanced arguments that he doesn't trust people to understand" is true, because ... you've said so (e.g., "without getting into any weirdness that I don't expect Earthlings to think about validly"). https://www.greaterwrong.com/posts/2NncxDQ3KBDCxiJiP/cosmopolitan-values-don-t-come-free/comment/dMHdWcxgSpcdyG4hb
+
+----
+
+(He responded to me in this interaction, which is interesting.)
+
+https://twitter.com/ESYudkowsky/status/1708587781424046242
+> Zack, you missed this point presumably because you're losing your grasp of basic theory in favor of conspiracy theory.
+
+https://www.lesswrong.com/posts/qbcuk8WwFnTZcXTd6/thomas-kwa-s-miri-research-experience
+> The model was something like: Nate and Eliezer have a mindset that's good for both capabilities and alignment, and so if we talk to other alignment researchers about our work, the mindset will diffuse into the alignment community, and thence to OpenAI, where it would speed up capabilities.