From 8383468d2ee2a4b5043941e991a65696f177a1db Mon Sep 17 00:00:00 2001
From: "Zack M. Davis"
Date: Thu, 27 Jul 2023 17:37:31 -0700
Subject: [PATCH] "Hill of Validity": cut "Blame Me" analogy

If I couldn't explain it to Said Achmiz in three tries, it can't have been that insightful.
---
 content/2023/a-hill-of-validity-in-defense-of-meaning.md | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/content/2023/a-hill-of-validity-in-defense-of-meaning.md b/content/2023/a-hill-of-validity-in-defense-of-meaning.md
index 8483495..1624c68 100644
--- a/content/2023/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/2023/a-hill-of-validity-in-defense-of-meaning.md
@@ -613,10 +613,6 @@ One might wonder why this was such a big deal to us. Okay, so Yudkowsky had prev
 
 This wasn't just anyone being wrong on the internet. In [an essay on the development of cultural traditions](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/), Scott Alexander had written that rationalism is the belief that Eliezer Yudkowsky is the rightful caliph. To no small extent, I and many other people had built our lives around a story that portrayed Yudkowsky as almost uniquely sane—a story that put MIRI, CfAR, and the "rationalist community" at the center of the universe, the ultimate fate of the cosmos resting on our individual and collective mastery of the hidden Bayesian structure of cognition.
 
-But my posse and I had just falsified to our satisfaction the claim that Yudkowsky was currently sane in the relevant way. Maybe _he_ didn't think he had done anything wrong (because he [hadn't strictly lied](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly)), and probably a normal person would think we were making a fuss about nothing, but as far as we were concerned, the formerly rightful caliph had relinquished his legitimacy.
-
-Ben made an analogy to a short story I had written, ["Blame Me for Trying"](/2018/Jan/blame-me-for-trying/). In the story, a scrupulous AI clings to the advice of a therapist who is more interested in maximizing her profits than helping her clients, who routinely die after spending all their money on therapy sessions. The therapist's marketing story about improving her clients' mental health was false. Similarly, scrupulous rationalists were investing a lot of their time, effort, and money into "the community" and its institutions on the basis of a marketing story that we now knew to be false, as political concerns took precedence over clarifying the hidden Bayesian structure of cognition.
-
-Minds like mine wouldn't survive long-term in this ecosystem. If we wanted minds that do "naïve" inquiry (instead of playing savvy power games) to live, we needed an interior that justified that level of trust.
+But my posse and I had just falsified to our satisfaction the claim that Yudkowsky was currently sane in the relevant way. Maybe _he_ didn't think he had done anything wrong (because he [hadn't strictly lied](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly)), and probably a normal person would think we were making a fuss about nothing, but as far as we were concerned, the formerly rightful caliph had relinquished his legitimacy. A so-called "rationalist" community that couldn't clarify this matter of the cognitive function of categories was a sham. Something had to change if we wanted a place in the world for the spirit of "naïve" (rather than politically savvy) inquiry to survive.
 
 (To be continued. Yudkowsky would [eventually clarify his position on the philosophy of categorization in September 2020](https://www.facebook.com/yudkowsky/posts/10158853851009228)—but the story leading up to that will have to wait for another day.)
-- 
2.17.1