+> It can also be naive to assume that all the damage that people consistently do is unintentional. For that matter, Sam by being "lol you mad" rather than "sorry" is continuing to do that damage. I'd have bought "sorry" rather a lot better, in terms of no ulterior motives.
+https://twitter.com/ESYudkowsky/status/1706861603029909508
+
+-------
+
+On 27 September 2023, Yudkowsky told Quentin Pope, "If I was given to your sort of attackiness, I'd now compose a giant LW post about how this blatant error demonstrates that nobody should trust you about anything else either." (https://twitter.com/ESYudkowsky/status/1707142828995031415) I felt like it was an OK use of bandwidth to point out that tracking reputations is sometimes useful (https://twitter.com/zackmdavis/status/1707183146335367243). My agenda here is the same as when I wrote "... on Epistemic Conduct for Author Criticism": I don't want Big Yud using his social power to delegitimize "attacks" in general, because I have an interest in attacking him. Later, he quote-Tweeted something and said,
+
+> People need to grow up reading a lot of case studies like this in order to pick up a well-calibrated instinctive sense of what ignorant criticism typically sounds like. A derisory tone is a very strong base cue, though not an invincible one.
+
+Was he subtweeting me?? (I had been defending criticism against tone policing, and here he's saying that tone is a valid cue.) If it was a subtweet, I take that as vindication that my reply was a good use of bandwidth.
+
+-----
+
+In particular, I think the conspiracy theory "Yudkowsky sometimes avoids nuanced arguments that he doesn't trust people to understand" is true, because ... you've said so (e.g., "without getting into any weirdness that I don't expect Earthlings to think about validly"). https://www.greaterwrong.com/posts/2NncxDQ3KBDCxiJiP/cosmopolitan-values-don-t-come-free/comment/dMHdWcxgSpcdyG4hb
+
+----
+
+(He responded to me in this interaction, which is interesting.)
+
+https://twitter.com/ESYudkowsky/status/1708587781424046242
+> Zack, you missed this point presumably because you're losing your grasp of basic theory in favor of conspiracy theory.
+
+https://www.lesswrong.com/posts/qbcuk8WwFnTZcXTd6/thomas-kwa-s-miri-research-experience
+> The model was something like: Nate and Eliezer have a mindset that's good for both capabilities and alignment, and so if we talk to other alignment researchers about our work, the mindset will diffuse into the alignment community, and thence to OpenAI, where it would speed up capabilities.
+
+27 January 2020—
+> I'm also afraid of the failure mode where I get frame-controlled by the Michael/Ben/Jessica mini-egregore (while we tell ourselves a story that we're the real rationalist coordination group and not an egregore at all). Michael says that the worldview he's articulating is the one that would be obvious to me if I felt that I was in danger. Insofar as I trust that my friends' mini-egregore is seeing something but I don't trust the details, the obvious path forward is to try to do original seeing while leaning into fear—trusting Michael's meta-level advice, but not his detailed story.
+
+Weird tribalist praise for Scott: https://www.greaterwrong.com/posts/GMCs73dCPTL8dWYGq/use-normal-predictions/comment/ez8xrquaXmmvbsYPi