I also wrote a little post on 20 December 2019, ["Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk"](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to).

Wei Dai had written ["Against Premature Abstraction of Political Issues"](https://www.lesswrong.com/posts/bFv8soRx6HB94p5Pg/against-premature-abstraction-of-political-issues)—itself plausibly an abstraction inspired by my philosophy-of-language blogging?—and had cited a clump of _Less Wrong_ posts about gender and pick-up artistry back in 'aught-nine as a successful debate that would have been harder to have if everyone had to obfuscate the concrete topics of interest.

A MIRI researcher, Evan Hubinger, asked:

> Do you think having that debate online was something that needed to happen for AI safety/x-risk? Do you think it benefited AI safety at all? I'm genuinely curious. My bet would be the opposite—that it caused AI safety to be more associated with political drama that helped further taint it.

In my reply post, I claimed that our belief that AI safety was the most important problem in the world was causally downstream from people like Yudkowsky and Nick Bostrom trying to do good reasoning, and following lines of reasoning to where they led. The [cognitive algorithm](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms) of assuming that your current agenda was the most important thing, and then distorting the process of inquiry to preserve its political untaintedness, wouldn't have led us to _noticing_ the alignment problem, and I didn't think it would be sufficient to solve it.

In some sense, it should be easier to have a rationality/alignment community that _just_ does systematically correct reasoning, rather than a politically-savvy community that does systematically correct reasoning _except_ when that would taint AI safety with political drama, analogously to how it's easier to build a calculator that just does correct arithmetic, than a calculator that does correct arithmetic _except_ that it never displays the result 13.

In order to build a "triskaidekaphobic calculator", you would need to "solve arithmetic" anyway, and the resulting product would be limited not only in its ability to correctly compute `6 + 7`, but also in the infinite family of calculations that include 13 as an intermediate result: if you can't count on `(6 + 7) + 1` being the same as `6 + (7 + 1)`, you lose the associativity of addition. And so on. (I had the "calculator that won't display 13" analogy cached from previous email correspondence.)

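The analogy can be made concrete with a toy sketch (hypothetical illustration, not code from the original discussion): an adder that suppresses the result 13 breaks associativity, because the two groupings of `6 + 7 + 1` route through different intermediate results.

```python
def taboo_add(a, b):
    """Add two numbers, but refuse to 'display' the forbidden result 13."""
    result = a + b
    return None if result == 13 else result


def assoc_left(a, b, c):
    """Compute (a + b) + c with the taboo adder."""
    partial = taboo_add(a, b)
    return None if partial is None else taboo_add(partial, c)


def assoc_right(a, b, c):
    """Compute a + (b + c) with the taboo adder."""
    partial = taboo_add(b, c)
    return None if partial is None else taboo_add(a, partial)


# (6 + 7) + 1 fails, because the intermediate result 13 is suppressed ...
print(assoc_left(6, 7, 1))
# ... but 6 + (7 + 1) happily returns 14: associativity is lost.
print(assoc_right(6, 7, 1))
```
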
It could have been a comment instead of a top-level post, but I wanted to bid for the extra attention. I think, at some level, putting Hubinger's name in the post title was deliberate. It wasn't inappropriate—"Reply to Author's Name on Topic Name" is a very standard academic title format, [which](/2016/Oct/reply-to-ozy-on-agp/) [I](/2016/Nov/reply-to-ozy-on-two-type-mtf-taxonomy/) [often](/2019/Dec/reply-to-ozymandias-on-fully-consensual-gender/) [use](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/) [myself](https://www.lesswrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins)—but it also wasn't necessary, and might have been a little weird given that I was mostly using Hubinger's comment as a jumping-off point for my Free Speech for Shared Maps campaign, rather than responding point-by-point to a longer piece Hubinger might have written. It's as if the part of my brain that chose that subtitle wanted to set an example: arguing for cowardice (being in favor of concealing information for fear of being singled out by a mob) would just get you singled out _more_.

I had [an exchange with Scott Alexander in the comment section](https://www.greaterwrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to/comment/JdsknCuCuZMAo8EbP).

"I know a bunch of people in academia who do various verbal gymnastics to appease the triskaidekaphobics, and when you talk to them in private they get everything 100% right," [he said](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to?commentId=mHrHTvzg8MGNH2CwB) (in a follow-up comment on 5 January 2020).

I'm happy for them, I replied, but I thought the _point_ of having taxpayer-funded academic departments was so that people who _aren't_ insider experts can have accurate information with which to inform decisions?

----

During a phone call around early December 2019, Michael had pointed out that since [MIRI's 2019 fundraiser](https://intelligence.org/2019/12/02/miris-2019-fundraiser/) was going on, and we had information about how present-day MIRI differed from its marketing story, there was a time-sensitive opportunity to reach out to a perennial major donor, whom I'll call "Ethan", and fill him in on what we thought we knew about the Blight.

I wrote to Jessica and Jack Gallagher (cc'ing Michael) on 14 December asking how we should organize this. (Jessica and Jack had relevant testimony about working at MIRI, which would be of more central interest to "Ethan" than my story about how the "rationalists" had lost their way.) Michael mentioned "Tabitha", a lawyer who had been in the MIRI orbit for a long time, as another person to talk to.

On 22 December, I apologized, saying that I wanted to postpone setting up the meeting, partially because I was on a roll with my productive blogging spree, and partially for a psychological reason: I was feeling subjective pressure to appease Michael by doing the thing that he explicitly suggested because of my loyalty to him. But that would be wrong, because Michael's ideology said that people should follow their sense of opportunity rather than obeying orders. I might feel motivated to reach out to "Ethan" and "Tabitha" in January.

Michael said that that implied my sense of opportunity was driven by politics, and that I believed that simple honesty couldn't work. I wasn't sure about this. It seemed like any conversation with "Ethan" and "Tabitha" would be partially optimized to move money, which I thought was politics.

Jessica pointed out that "it moves money, so it's political" was erasing the non-zero-sum details of the situation. If people can make better decisions (including monetary ones) with more information, then informing them was pro-social. If there wasn't any better decisionmaking from information to be had, and it was just a matter of exerting social pressure in favor of one donation target over another, then that would be politics.

I agreed that my initial "it moves money so it's political" intuition was wrong. But I didn't think I knew how to inform people about giving decisions in an honest and timely way, because the arguments [written above the bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line) were an entire traumatic worldview shift. You couldn't just say "CfAR is fraudulent, don't give to them" without explaining things like ["bad faith is a disposition, not a feeling"](http://benjaminrosshoffman.com/bad-faith-behavior-not-feeling/) as prerequisites. I felt more comfortable trying to share the worldview update in January even if it meant the December decision would be wrong, because I didn't know how to affect the December decision in a way that didn't require someone to trust my judgement.

Michael wrote:

> That all makes sense to me, but I think that it reduces to "political processes are largely processes of spontaneous coordination to make it impossible to 'just be honest' and thus to force people to engage in politics themselves. In such a situation one is forced to do politics in order to 'just be honest', even if you would greatly prefer not to".
>
> This is surely not the first time that you have heard about situations like that.