+_Should_ I have known that it wouldn't work? _Didn't_ I "already know", at some level?
+
+I guess in retrospect, the outcome does seem kind of "obvious": it should have been possible to predict it in advance, and to make the corresponding update without so much fuss and without wasting so many people's time.
+
+But ... it's only "obvious" if you _take as a given_ that Yudkowsky is playing a savvy Kolmogorov complicity strategy like any other public intellectual in the current year. Maybe this seems banal if you haven't spent your entire adult life in his robot cult?
+
+But since I _did_ spend my entire adult life in his robot cult, the idea that Eliezer Yudkowsky was going to behave just as badly as any other public intellectual in the current year was not really in my hypothesis space.
+
+"sacrificed all hope of success in favor of maintaining his own sanity by CC'ing you guys (which I think he was correct to do conditional on email happening at all)"
+
+At the start, I _had_ to assume that the "hill of validity in defense of meaning" Twitter performance was an "honest mistake" in his rationality lessons, and that honest mistakes could be corrected if someone put in the effort to explain the problem.
+
+It took some pretty large likelihood ratios to promote the "obvious" explanation to the top of my hypothesis space.
+
+But the guy doesn't _market_ himself as being like any other public intellectual in the current year. As Ben put it, Yudkowsky's "claim to legitimacy really did amount to a claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he almost uniquely was not." Call me a sucker, but ... I _actually believed_ Yudkowsky's marketing story. The Sequences _really were just that good_. That's why it took so much fuss and wasted time to generate a likelihood ratio large enough to falsify that story.