+_ reply to David Silver on AI motivations: https://www.greaterwrong.com/posts/SbAgRYo8tkHwhd9Qx/deepmind-the-podcast-excerpts-on-agi#David_Silver_on_it_being_okay_to_have_AGIs_with_different_goals_____
+_ Charles Goodhart Elementary School (story about Reflection Sentences, and the afterschool grader being wrong)
+_ Steering Towards Agreement Is Like Discouraging Betting
+_ (not really for LW) mediation with Duncan: https://www.lesswrong.com/posts/SWxnP5LZeJzuT3ccd/?commentId=KMoFiGMzxWkWJLWcN
+_ Conflict Theory of Bounded Distrust: https://astralcodexten.substack.com/p/bounded-distrust
+_ Generic Consequentialists Are Liars
+_ Comment on "The Logic of Indirect Speech" (multi-receivers vs. uncertainty about a single receiver)
+_ There's No Such Thing as Unmotivated Reasoning
+_ "You'll Never Persuade Anyone"
+_ Longtermism + Hinge of History = Shorttermism (not my original idea, but don't remember whose blog I read this on, so I'll write my own version)
+_ Point of No Return is Relative to a Class of Interventions
+_ Simulationist Headcanon for Ontologically Basic Train Show
+_ Existential Risk Science Fiction Book Club: Rainbows End
+_ Existential Risk Science Fiction Book Club: The Gods Themselves
+_ Angelic Irony
+_ Good Bad Faith
+_ Cromwell's Rule Is About Physical Symmetry, Not Fairness
+_ The Motte-and-Bailey Doctrine as Compression Artifact
+_ The Kolmogorov Option Is Incompatible with Rationality as the Common Interest of Many Causes
+_ "Rationalists" Don't Exist (But Rationality Does)