_Should_ I have known that it wouldn't work? _Didn't_ I "already know", at some level? I guess in retrospect, the outcome does seem kind of "obvious"—that it should have been possible to predict in advance and make the corresponding update without so much fuss and wasting so many people's time.

But ... it's only "obvious" if you _take as a given_ that Yudkowsky is playing a savvy Kolmogorov complicity strategy like any other public intellectual in the current year. Maybe this seems banal if you haven't spent your entire life in this robot cult? But the guy doesn't _market_ himself as being like any other public intellectual in the current year. As Ben put it, Yudkowsky's "claim to legitimacy really did amount to a claim that while nearly everyone else was criminally insane (causing huge amounts of damage due to disconnect from reality, in a way that would be criminal if done knowingly), he almost uniquely was not." Call me a sucker, but ... I _actually believed_ Yudkowsky's marketing story. The Sequences _really were just that good_. That's why it took so much fuss and wasted time to generate a likelihood ratio large enough to falsify that story.

Ben compared Yudkowsky to Eliza the spambot therapist in my story ["Blame Me for Trying"](/2018/Jan/blame-me-for-trying/). Scrupulous rationalists were paying rent to something claiming moral authority, which had no concrete specific plan to do anything other than run out the clock. Minds like mine don't survive long-term in this ecosystem. If we wanted minds that do "naïve" inquiry instead of playing savvy Kolmogorov games to survive, we needed an interior that justified that level of trust.

[TODO: weave in "set in motion a machine" 19 Apr?]

[TODO Jack—
> Zack sacrificed all hope of success in favor of maintaining his own sanity by CC'ing you guys (which I think he was correct to do conditional on email happening at all).]

-------

curation hopes ... 22 Jun: I'm expressing a little bit of bitterness that a naked mole rats post got curated https://www.lesswrong.com/posts/fDKZZtTMTcGqvHnXd/naked-mole-rats-a-case-study-in-biological-weirdness

"Univariate fallacy" also a concession
(which I got to cite in https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy which I cited in "Schelling Categories")

https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/

"Yes Requires the Possibility of No" 19 May https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019

scuffle on LessWrong FAQ 31 May https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk

"epistemic defense" meeting
[TODO section on factional conflict:
Michael on Anna as cult leader
Jessica told me about her time at MIRI (link to Zoe-piggyback and Occupational Infohazards)
24 Aug: I had told Anna about Michael's "enemy combatants" metaphor, and how I originally misunderstood
me being regarded as Michael's pawn
assortment of agendas
mutualist pattern where Michael by himself isn't very useful for scholarship (he just says a lot of crazy-sounding things and refuses to explain them), but people like Sarah and me can write intelligible things that secretly benefit from much less legible conversations with Michael.
]

8 Jun: I think I subconsciously did an interesting political thing in appealing to my price for joining

REACH panel

(Subject: "Michael Vassar and the theory of optimal gossip")

Since arguing at the object level had failed (["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), ["Reply on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/)), and arguing at the strictly meta level had failed (["... Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries)), the obvious thing to do next was to jump up to the meta-meta level and tell the story about why the "rationalists" were Dead To Me now, that [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) was not being met. (Just like Ben had suggested in December and in April.)

I found it difficult to make progress on. I felt—constrained. I didn't know how to tell the story without (as I perceived it) escalating personal conflicts or leaking info from private conversations. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise "the rationalists" collectively, and—more philosophy-of-language blogging!

In August's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).)
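
As a toy sketch of that coordination-versus-fit tradeoff (my own illustration for this draft; the payoff function and all the numbers are invented, not from the post): if agents can't communicate, a salient "round number" boundary that everyone can independently guess can beat the statistically optimal boundary, once you price in the value of everyone using the _same_ category.

```python
# Hypothetical illustration: trading statistical "fit" for coordination.

def payoff(boundary, fitted_boundary, coordination_bonus, agreed):
    """Utility of using `boundary`: lose fit for straying from the
    statistically fitted boundary, gain a bonus if everyone agrees."""
    fit_loss = abs(boundary - fitted_boundary)  # stand-in for misclassification cost
    return (coordination_bonus if agreed else 0.0) - fit_loss

fitted = 17.3   # boundary that best fits the data (made-up number)
salient = 20.0  # focal "round number" everyone guesses independently

# If coordinating is worth more than the fit you give up, the Schelling
# boundary beats the fitted one:
print(payoff(salient, fitted, coordination_bonus=5.0, agreed=True))   # ≈ 2.3
print(payoff(fitted, fitted, coordination_bonus=5.0, agreed=False))   # 0.0
```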

[TODO— more blogging 2019

"Algorithms of Deception!" Oct 2019

"Maybe Lying Doesn't Exist" Oct 2019

I was _furious_ at "Against Lie Inflation"—oh, so _now_ you agree that making language less useful is a problem?! But then I realized Scott actually was being consistent in his own frame: he's counting "everyone is angrier" (because of more frequent lying-accusations) as a cost; but, if everyone _is_ lying, maybe they should be angry!

"Heads I Win" Sep 2019: I was surprised by how well this did (high karma, later included in the best-of-2019 collection); Ben and Jessica had discouraged me from bothering after I

"Firming Up ..." Dec 2019: combatting Yudkowsky's not-technically-lying shenanigans

]

[TODO: Yudkowsky throwing NRx under the bus; tragedy of recursive silencing
15 Sep Glen Weyl apology
]

-In November, I received an interesting reply on my philosophy-of-categorization thesis from MIRI researcher Abram Demski. Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences? Agents categorize in order to make decisions. The best classifer for an application depends on the costs and benefits. As a classic example, it's very important for evolved prey animals to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator.
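
In decision-theoretic terms (a minimal sketch of the standard expected-cost calculation; the cost numbers here are made up for illustration), the asymmetry works like this: when a false negative is vastly more expensive than a false positive, the optimal trigger-happy threshold is correspondingly low.

```python
# Hypothetical numbers illustrating why prey should jump at faint signals:
# a missed predator (false negative) costs far more than a wasted jump
# (false positive), so the expected-cost-minimizing threshold is tiny.

def should_jump(p_predator, cost_false_alarm=1.0, cost_missed_predator=1000.0):
    """Jump iff the expected cost of staying exceeds that of jumping."""
    return p_predator * cost_missed_predator > (1 - p_predator) * cost_false_alarm

# The break-even probability is 1/1001, so even a 1-in-500 rustle
# is worth jumping at:
assert should_jump(1 / 500)
assert not should_jump(1 / 2000)
```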

I had thought of the "false-positives are better than false-negatives when detecting predators" example as being about the limitations of evolution as an AI designer: messy evolved animal brains don't bother to track probability and utility separately the way a cleanly-designed AI could. As I had explained in "... Boundaries?", it made sense for _what_ variables you paid attention to, to be motivated by consequences. But _given_ the subspace that's relevant to your interests, you want to run an epistemically legitimate clustering algorithm on the data you see there, which depends on the data, not your values. The only reason value-dependent gerrymandered category boundaries seem like a good idea (if you're not careful about the philosophy) is that they amount to _wireheading_. Ideal probabilistic beliefs shouldn't depend on consequences.
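
A minimal sketch of that division of labor (my own construction for this draft; the data and the two-means routine are invented for illustration): your values pick _which_ variable to look at, but given that subspace, the cluster boundary falls where the data says it falls, whether you like it or not.

```python
import numpy as np

def two_means_1d(x, iters=20):
    """Plain two-cluster k-means in one dimension: a function of the
    data alone, with no term for what we want the answer to be."""
    centers = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = x[labels == k].mean()
    return labels, centers

rng = np.random.default_rng(0)
# Step 1 (value-laden): our interests say this variable is the relevant one.
relevant = np.concatenate([rng.normal(0, 1, 50), rng.normal(6, 1, 50)])

# Step 2 (value-free): given that subspace, the boundary depends only on
# the data; preferring a point to land in the other cluster doesn't move it.
labels, centers = two_means_1d(relevant)
print(np.sort(centers))  # ≈ [0, 6], regardless of anyone's preferences
```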

Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is _because_ probabilistic reasoning is broadly useful: epistemology can be _derived_ from instrumental concerns. He agreed that severe wireheading issues _potentially_ arise if you allow consequentialist concerns to affect your epistemics—