+I thought this one cut to the heart of the shocking behavior that we had seen from Yudkowsky lately. (Less shocking as the months rolled on, and I told myself to let the story end.) The "hill of meaning in defense of validity" affair had been driven by Yudkowsky's pathological obsession with not-technically-lying, on two levels: he had proclaimed that asking for new pronouns "Is. Not. Lying." (as if _that_ were the matter that anyone cared about—as if conservatives and gender-critical feminists should just pack up and go home after it had been demonstrated that trans people aren't _lying_), and he had shown no interest in clarifying his position on the philosophy of language, because he wasn't lying when he said that preferred pronouns weren't lies (as if _that_ were the matter that my posse cared about—as if I should keep honoring him as my Caliph after it had been demonstrated that he hadn't _lied_). But his Sequences had articulated a _higher standard_ than merely not-lying. If he didn't remember, I could at least hope to remind everyone else.
+
+I also wrote a little post on 20 December 2019, ["Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk"](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to).
+
+Wei Dai had written ["Against Premature Abstraction of Political Issues"](https://www.lesswrong.com/posts/bFv8soRx6HB94p5Pg/against-premature-abstraction-of-political-issues)—itself plausibly an abstraction inspired by my philosophy-of-language blogging?—and had cited a clump of _Less Wrong_ posts about gender and pick-up artistry back in 'aught-nine as a successful debate that would have been harder to have if everyone had to obfuscate the concrete topics of interest.
+
+A MIRI researcher, Evan Hubinger, asked:
+
+> Do you think having that debate online was something that needed to happen for AI safety/x-risk? Do you think it benefited AI safety at all? I'm genuinely curious. My bet would be the opposite—that it caused AI safety to be more associated with political drama that helped further taint it.
+
+In my reply post, I claimed that our belief that AI safety was the most important problem in the world was causally downstream from people like Yudkowsky and Nick Bostrom trying to do good reasoning, and following lines of reasoning to where they led. The [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms) of assuming that your current agenda was the most important thing, and then distorting the process of inquiry to preserve its political untaintedness, wouldn't have led us to _noticing_ the alignment problem, and I didn't think it would be sufficient to solve it.
+
+In some sense, it should be easier to have a rationality/alignment community that _just_ does systematically correct reasoning, rather than a politically-savvy community that does systematically correct reasoning _except_ when that would taint AI safety with political drama, analogously to how it's easier to build a calculator that just does correct arithmetic than a calculator that does correct arithmetic _except_ that it never displays the result 13.
+
+In order to build a "triskaidekaphobic calculator", you would need to "solve arithmetic" anyway, and the resulting product would be limited not only in its ability to correctly compute `6 + 7`, but also in the infinite family of calculations that include 13 as an intermediate result: if you can't count on `(6 + 7) + 1` being the same as `6 + (7 + 1)`, you lose the associativity of addition. And so on. (I had the "calculator that won't display 13" analogy cached from previous email correspondence.)
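+To make the analogy concrete, here's a minimal sketch (my own illustration in Python, not anything from the original post) of how censoring one number corrupts every computation that routes through it:
+
+```python
+def taboo_add(a, b):
+    """Addition on a calculator that refuses to display 13.
+
+    Hypothetical illustration: a censored result becomes None, and
+    the non-answer propagates through later steps, like a NaN.
+    """
+    if a is None or b is None:
+        return None
+    result = a + b
+    return None if result == 13 else result
+
+print(taboo_add(6, 7))                # None: 13 itself is censored
+print(taboo_add(taboo_add(6, 7), 1))  # None: (6 + 7) + 1 passes through 13
+print(taboo_add(6, taboo_add(7, 1)))  # 14: 6 + (7 + 1) happens to avoid it
+```
+
+The two groupings of `6 + 7 + 1` no longer agree, which is exactly the lost associativity.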
+
+It could have been a comment instead of a top-level post, but I wanted to bid for the extra attention. I think, at some level, putting Hubinger's name in the post title was deliberate. It wasn't inappropriate—"Reply to Author's Name on Topic Name" is a very standard academic title format, [which](/2016/Oct/reply-to-ozy-on-agp/) [I](/2016/Nov/reply-to-ozy-on-two-type-mtf-taxonomy/) [often](/2019/Dec/reply-to-ozymandias-on-fully-consensual-gender/) [use](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/) [myself](https://www.lesswrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins)—but it also wasn't necessary, and might have been a little weird given that I was mostly using Hubinger's comment as a jumping-off point for my Free Speech for Shared Maps campaign, rather than responding point-by-point to a longer piece Hubinger might have written. It's as if the part of my brain that chose that subtitle wanted to set an example: that arguing for cowardice (being in favor of concealing information for fear of being singled out by a mob) would just get you singled out _more_.
+
+I had [an exchange with Scott Alexander in the comment section](https://www.greaterwrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to/comment/JdsknCuCuZMAo8EbP).
+
+"I know a bunch of people in academia who do various verbal gymnastics to appease the triskaidekaphobics, and when you talk to them in private they get everything 100% right," [he said](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to?commentId=mHrHTvzg8MGNH2CwB) (in a follow-up comment on 5 January 2020).
+
+I'm happy for them, I replied, but I thought the _point_ of having taxpayer-funded academic departments was so that people who _aren't_ insider experts could have accurate information with which to inform decisions?