✓ lead-in to Sept. 2021 Twitter altercation
✓ out of patience email
✓ Michael Vassar and the Theory of Optimal Gossip
-_ plan to reach out to Rick / Michael on creepy men/crazy men
+✓ complicity and friendship
+✓ plan to reach out to "Ethan"
+✓ Michael on creepy men/crazy men
_ State of Steven
_ reaction to Ziz
-_ complicity and friendship
_ repair pt. 5 dath ilan transition
-_ Eliezerfic fight conclusion
_ mention "Darkest Timeline" and Skyrms somewhere
_ footnote explaining quibbles? (the first time I tried to write this, I hesitated, not sure if necessary)
_ "it was the same thing here"—most readers are not going to understand what I see as the obvious analogy
+_ first mention of Jack G. should introduce him properly
pt. 4 edit tier—
_ mention Nick Bostrom email scandal (and his not appearing on the one-sentence CAIS statement)
dath ilan ancillary tier—
_ What are the 9 most important legislators called?
+_ collect Earth people sneers
things to discuss with Michael/Ben/Jessica—
_ Anna on Paul Graham
_ compression of Yudkowsky's thinking/reasoning wasn't useful
_ Michael's SLAPP against REACH
+_ Michael on creepy and crazy men
------
Was he subtweeting me?? (Because I was defending criticism against tone policing, and this is saying tone is a valid cue.) If it was a subtweet, I take that as vindication that my reply was a good use of bandwidth.
-----
+
+In particular, I think the conspiracy theory "Yudkowsky sometimes avoids nuanced arguments that he doesn't trust people to understand" is true, because ... you've said so (e.g., "without getting into any weirdness that I don't expect Earthlings to think about validly"). https://www.greaterwrong.com/posts/2NncxDQ3KBDCxiJiP/cosmopolitan-values-don-t-come-free/comment/dMHdWcxgSpcdyG4hb
+
+----
+
+(He responded to me in this interaction, which is interesting.)
+
+https://twitter.com/ESYudkowsky/status/1708587781424046242
+> Zack, you missed this point presumably because you're losing your grasp of basic theory in favor of conspiracy theory.
+
+https://www.lesswrong.com/posts/qbcuk8WwFnTZcXTd6/thomas-kwa-s-miri-research-experience
+> The model was something like: Nate and Eliezer have a mindset that's good for both capabilities and alignment, and so if we talk to other alignment researchers about our work, the mindset will diffuse into the alignment community, and thence to OpenAI, where it would speed up capabilities.