-Milestone—
-_ Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems
-_ Unnatural Categories Are Optimized for Deception (LW)
-_ Motivation and Political Context for My Philosophy of Language Agenda
-_ Feedback on Unnatural Categories (working group + TurnTrout + Rich + Tetra + Steven + Said + Elizabeth (https://www.greaterwrong.com/posts/rhZge5qmZKwJ8BMDM/open-and-welcome-thread-march-2020#comment-XG44nfNcpq3BXj7hB) + paid-editor)
+_ How does one unwind one's "rationalist" social identity?
+_ Daniel C. Dennett on Near-Term Alignment Problems
+
+_ Honesty Is Activism
+
+_ agents with different learning algorithms would find it hard to agree on words?
+
+Trading Political Favors Doesn't Build True Maps, But Correcting Errors You Yourself Made, Does
+
+(Rationalists don't exist)
+> how to be a strong rationalist
+https://www.greaterwrong.com/posts/4thPHxgBCvteQjLv6/against-context-free-integrity
+
+_ A Hill of Validity in Defense of Meaning / Casuistry Is Unbecoming: Replies to Eliezer Yudkowsky (I don't know how to do this)
+
+_ E.Y. as case study on unconscious-lying and self-report ("their verbal theories contradict their own datapoints" https://www.facebook.com/yudkowsky/posts/10159408250519228?comment_id=10159411435619228&reply_comment_id=10159411567794228)
+
+
+https://en.wikipedia.org/wiki/Zersetzung and the GPT-3 neuron about Trump: imagine a short story, where you suddenly become more villainous in the eyes of the narrative, and it's because a greater fraction of your measure is in simulations written based on an unfavorable view of your legacy (you're only aware of this because the GPT-descendant is smart enough to try to correct for it)
+
+_ philosophy-of-language replies