+
+In particular, I think the conspiracy theory "Yudkowsky sometimes avoids nuanced arguments that he doesn't trust people to understand" is true, because ... you've said so (e.g., "without getting into any weirdness that I don't expect Earthlings to think about validly"). https://www.greaterwrong.com/posts/2NncxDQ3KBDCxiJiP/cosmopolitan-values-don-t-come-free/comment/dMHdWcxgSpcdyG4hb
+
+----
+
+(Interestingly, he responded to me directly in this exchange.)
+
+https://twitter.com/ESYudkowsky/status/1708587781424046242
+> Zack, you missed this point presumably because you're losing your grasp of basic theory in favor of conspiracy theory.
+
+https://www.lesswrong.com/posts/qbcuk8WwFnTZcXTd6/thomas-kwa-s-miri-research-experience
+> The model was something like: Nate and Eliezer have a mindset that's good for both capabilities and alignment, and so if we talk to other alignment researchers about our work, the mindset will diffuse into the alignment community, and thence to OpenAI, where it would speed up capabilities.
+
+27 January 2020—
+> I'm also afraid of the failure mode where I get frame-controlled by the Michael/Ben/Jessica mini-egregore (while we tell ourselves a story that we're the real rationalist coordination group and not an egregore at all). Michael says that the worldview he's articulating would be the one that would be obvious to me if I felt that I was in danger. Insofar as I trust that my friends' mini-egregore is seeing something but I don't trust the details, the obvious path forward is to try to do original seeing while leaning into fear—trusting Michael's meta level advice, but not his detailed story.
+
+Weird tribalist praise for Scott: https://www.greaterwrong.com/posts/GMCs73dCPTL8dWYGq/use-normal-predictions/comment/ez8xrquaXmmvbsYPi