-[TODO "Challenges"
- * the essential objections: you can't have it both ways; we should _model the conflict_ instead of taking a side in it while pretending to be neutral
- * eventually shoved out the door in March
- * I flip-flopped back and forth a lot about whether to include the coda about the political metagame, or to save it for the present memoir; I eventually decided to keep the post object-level
- * I felt a lot of trepidation publishing a post that said, "I'm better off because I don't trust Eliezer Yudkowsky to tell the truth"
- * Critical success! Oli's comment
- * I hoped he saw it (but I wasn't going to email or Tweet at him about it, in keeping with my intent not to bother the guy anymore)
+> [...] basically everything in this post strikes me as "obviously true" and I had a very similar reaction to what the OP says now, when I first encountered the Eliezer Facebook post that this post is responding to.
+>
+> And I do think that response mattered for my relationship to the rationality community. I did really feel like at the time that Eliezer was trying to make my map of the world worse, and it shifted my epistemic risk assessment of being part of the community from "I feel pretty confident in trusting my community leadership to maintain epistemic coherence in the presence of adversarial epistemic forces" to "well, I sure have to at least do a lot of straussian reading if I want to understand what people actually believe, and should expect that depending on the circumstances community leaders might make up sophisticated stories for why pretty obviously true things are false in order to not have to deal with complicated political issues".
+>
+> I do think that was the right update to make, and was overdetermined for many different reasons, though it still deeply saddens me.
+
+Brutal! Recall that Yudkowsky's justification for his behavior had been that "it is sometimes personally prudent and _not community-harmful_ to post your agreement with Stalin" (emphasis mine), and here we had the administrator of Yudkowsky's _own website_ saying that he's deeply saddened that he now expects Yudkowsky to _make up sophisticated stories for why pretty obviously true things are false_ (!!).
+
+Is that ... _not_ evidence of harm to the community? If that's not community-harmful in Yudkowsky's view, then what would be an example of something that _would_ be? _Reply, motherfucker!_
+
+... or rather, "Reply, motherfucker" is what I fantasized about being able to say to Yudkowsky, if I hadn't already expressed an intention not to bother him anymore.
+
+[TODO: the Death With Dignity era, April 2022
+
+"Death With Dignity" isn't really an update; he used to refuse to give a probability but that FAI was "impossible", and now he says the probability is ~0
+
+https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible
+
+ * swimming to shore analogy
+
+ * I've believed since Kurzweil that technology will remake the world sometime in the 21st century; it's just that "the machines won't replace us, because we'll be them" doesn't seem credible anymore
+
+ * I agree that it would be nice if Earth had a plan; it would be nice if people had figured out the stuff Yudkowsky did earlier; Asimov wrote about robots and psychohistory, but he still portrayed a future galaxy populated by humans, which seems so silly now
+
+/2017/Jan/from-what-ive-tasted-of-desire/