From b390e74a6c0f7f5e95fbcecc1b2211ed384d922b Mon Sep 17 00:00:00 2001
From: "M. Taylor Saotome-Westlake"
Date: Mon, 17 Apr 2023 14:59:10 -0700
Subject: [PATCH] memoir: "Free Speech and Triskadekaphobic Calculators"

---
 .../drafts/if-clarity-seems-like-death-to-them.md | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md
index b749c0b..c733970 100644
--- a/content/drafts/if-clarity-seems-like-death-to-them.md
+++ b/content/drafts/if-clarity-seems-like-death-to-them.md
@@ -463,7 +463,9 @@ I also polished and pulled the trigger on ["On the Argumentative Form 'Super-Pro
 
 On _Less Wrong_, the mods had just announced [a new end-of-year Review event](https://www.lesswrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review), in which the best post from the year before would be reviewed and voted on, to see which had stood the test of time and deserved to be part of our canon of cumulative knowledge. (That is, this Review period starting in late 2019 would cover posts published in _2018_.)
 
-This provided me with [an affordance](https://www.lesswrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review?commentId=d4RrEizzH85BdCPhE) to write some "defensive" posts, critiquing posts that had been nominated for Best-of-2018 that I didn't think deserved such glory. In response to ["Decoupling _vs._ Contextualizing Norms"](https://www.lesswrong.com/posts/7cAsBPGh98pGyrhz9/decoupling-vs-contextualising-norms) (which had been [cited in a way that I thought obfuscatory during the "Yes Implies the Possibility of No" trainwreck](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/wejvnw6QnWrvbjgns)), I wrote ["Relevance Norms; Or, Grecian Implicature Queers the Decoupling/Contextualizing Binary"](https://www.lesswrong.com/posts/GSz8SrKFfW7fJK2wN/relevance-norms-or-gricean-implicature-queers-the-decoupling),
+This provided me with [an affordance](https://www.lesswrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review?commentId=d4RrEizzH85BdCPhE) to write some "defensive"[^defensive] posts, critiquing posts that had been nominated for the Best-of-2018 collection that I didn't think deserved such glory. In response to ["Decoupling _vs._ Contextualizing Norms"](https://www.lesswrong.com/posts/7cAsBPGh98pGyrhz9/decoupling-vs-contextualising-norms) (which had been [cited in a way that I thought obfuscatory during the "Yes Implies the Possibility of No" trainwreck](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/wejvnw6QnWrvbjgns)), I wrote ["Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary"](https://www.lesswrong.com/posts/GSz8SrKFfW7fJK2wN/relevance-norms-or-gricean-implicature-queers-the-decoupling), appealing to our [academically standard theory of how context affects meaning](https://plato.stanford.edu/entries/implicature/) to explain why "decoupling _vs._ contextualizing norms" is a false dichotomy.
+
+[^defensive]: Criticism is "defensive" in the sense of trying to _prevent_ new beliefs from being added to our shared map; a critic of an idea "wins" when the idea is not accepted (such that the set of accepted beliefs remains at the _status quo ante_).
 
 More significantly, in reaction to Yudkowsky's ["Meta-Honesty: Firming Up Honesty Around Its Edge Cases"](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases), I published ["Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think"](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly), explaining why merely refraining from making false statments is an unproductively narrow sense of "honesty", because the ambiguity of natural language makes it easy to deceive people in practice without technically lying. (The ungainly title of my post was "softened" from an earlier draft following feedback from the posse; I had originally written "... Surprisingly Useless".)
 
@@ -475,16 +477,13 @@ Wei Dai had written ["Against Premature Abstraction of Political Issues"](https:
 
 A MIRI researcher, Evan Hubinger, asked:
 
-> Do you think having that debate online was something that needed to happen for AI safety/​x-risk? Do you think it benefited AI safety at all? I’m genuinely curious. My bet would be the opposite—that it caused AI safety to be more associated with political drama that helped further taint it.
-
-[TODO—
+> Do you think having that debate online was something that needed to happen for AI safety/​x-risk? Do you think it benefited AI safety at all? I'm genuinely curious. My bet would be the opposite—that it caused AI safety to be more associated with political drama that helped further taint it.
 
-summarize "Free Speech and Triskadekaphobic Calculators"
+In my reply post, I claimed that our belief that AI safety was the most important problem in the world was causally downstream from people like Yudkowsky and Nick Bostrom trying to do good reasoning, and following lines of reasoning to where they led. The [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms) of assuming that your current agenda was the most important thing, and then distorting the process of inquiry to preserve its political untaintedness, wouldn't have led us to _noticing_ the alignment problem, and I didn't think it would be sufficient to solve it.
 
-(I had the "calculator that won't display 13" analogy cached from previous email correspondence.)
+In some sense, it should be easier to have a rationality/alignment community that _just_ does systematically correct reasoning, rather than a politically-savvy community that does systematically correct reasoning _except_ when that would taint AI safety with political drama, analogously to how it's easier to build a calculator that just does correct arithmetic, than a calculator that does correct arithmetic _except_ that it never displays the result 13.
 
-Wei Dai's "premature abstractions" post was explicit about the inspiration; it said "because of this conversation", and the link is to Wei's comment about my "'Worried' is an understatement" footnote on the Brent-ban post
-]
+In order to build a "triskadekaphobic calculator", you would need to "solve arithmetic" anyway, and the resulting product would be limited not only in its ability to correctly compute `6 + 7`, but also in the infinite family of calculations that included 13 as an intermediate result: if you can't count on `(6 + 7) + 1` being the same as `6 + (7 + 1)`, you lose the associativity of addition. And so on. (I had the "calculator that won't display 13" analogy cached from previous email correspondence.)
 
 It could have been a comment instead of a top-level post, but I wanted to bid for the extra attention. I think, at some level, putting Hubinger's name in the post title was deliberate. It wasn't inappropriate—"Reply to Author's Name on Topic Name" is a very standard academic title format, [which](/2016/Oct/reply-to-ozy-on-agp/) [I](/2016/Nov/reply-to-ozy-on-two-type-mtf-taxonomy/) [often](/2019/Dec/reply-to-ozymandias-on-fully-consensual-gender/) [use](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/) [myself](https://www.lesswrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins)—but it also wasn't necessary, and might have been a little weird given that I was mostly using Hubinger's comment as a jumping-off point for my Free Speech for Shared Maps campaign, rather than responding point-by-point to a longer piece Hubinger might have written. It's as if the part of my brain that chose that subtitle wanted to set an example, that arguing for cowardice, being in favor of concealing information for fear of being singled out by a mob, would just get you singled out _more_.
 
-- 
2.17.1