I think it's significant that you don't see me picking fights with—say, Paul Christiano, because Paul Christiano doesn't repeatedly take a shit on my Something to Protect, because Paul Christiano _isn't trying to be a religious leader_ (in this world where religious entrepreneurs can't afford to contradict the state religion). If Paul Christiano has opinions about transgenderism, we don't know about them. If we knew about them and they were correct, I would upvote them, and if we knew about them and they were incorrect, I would criticize them, but in either case, Christiano would not try to cultivate the impression that anyone who disagrees with him is insane. That's not his bag.
------

Yudkowsky's political cowardice is arguably puzzling in light of his timeless decision theory's recommendations against giving in to extortion.

The "arguably" is important, because randos on the internet are notoriously bad at drawing out the consequences of the theory, to the extent that Yudkowsky has said that he wishes he hadn't published it—and though I think I'm smarter than the average rando, I don't expect anyone to _take my word for it_. So let me disclaim that this is _my_ explanation of how Yudkowsky's decision theory _could be interpreted_ to recommend that he behave the way I want him to, without any pretense that I'm any sort of neutral expert witness on decision theory.

The idea of timeless decision theory is that you should choose the action that has the best consequences _given_ that your decision is mirrored at all the places your decision algorithm is embedded in the universe.

The reason this is any different from the "causal decision theory" of just choosing the action with the best consequences (locally, without any regard to this "multiple embeddings in the universe" nonsense) is that it's possible for other parts of the universe to depend on your choices. For example, in the "Parfit's Hitchhiker" scenario, someone might give you a ride out of the desert if they _predict_ you'll pay them back later. After you've already received the ride, you might think that you can get away with stiffing them—but if they'd predicted you would do that, they wouldn't have given you the ride in the first place. Your decision is mirrored _inside the world-model of every other agent with a sufficiently good knowledge of you_.
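
The Parfit's Hitchhiker logic can be put in toy-model form; a minimal sketch, assuming a perfect predictor and payoff numbers invented purely for illustration:

```python
# Toy model of Parfit's Hitchhiker. Assumptions (invented for illustration):
# the driver predicts your disposition perfectly, being left in the desert
# is worth -100, and paying the driver afterward costs 10.

def outcome(pays_when_rescued: bool) -> int:
    """Utility for an agent whose disposition the driver predicts perfectly."""
    rescued = pays_when_rescued  # the driver only helps predicted payers
    if not rescued:
        return -100  # left in the desert
    return -10 if pays_when_rescued else 0

# The locally "rational" stiffer never gets the ride at all; the agent
# whose algorithm pays, everywhere that algorithm is embedded, does better:
assert outcome(pays_when_rescued=False) == -100
assert outcome(pays_when_rescued=True) == -10
```

The point of the sketch is that the choice being optimized is the _disposition_, not the after-the-fact action: by the time you're deciding whether to pay, the predictor's copy of your decision has already determined whether you were rescued.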
In particular, if you're the kind of agent that gives in to extortion—if you respond to threats of the form "Do what I want, or I'll hurt you" by doing what the threatener wants—that gives other agents an incentive to spend resources trying to extort you. On the other hand, if any would-be extortionist knows you'll never give in, they have no reason to bother trying. This is where the standard ["Don't negotiate with terrorists"](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) advice comes from.
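
The incentive argument can be made concrete with a toy expected-value calculation from the extortionist's side; the numbers here are invented for illustration:

```python
# Toy expected value for a would-be extortionist. Invented numbers:
# issuing a credible threat costs 1, and a target who caves yields 10.

COST_OF_THREATENING = 1
LOOT_IF_TARGET_CAVES = 10

def extortionist_ev(prob_target_gives_in: float) -> float:
    """Expected payoff of threatening a target with the given cave probability."""
    return prob_target_gives_in * LOOT_IF_TARGET_CAVES - COST_OF_THREATENING

# Threatening a known never-give-in agent is a pure loss, so such agents
# don't get threatened; agents predicted to cave invite more threats:
assert extortionist_ev(0.0) == -1
assert extortionist_ev(1.0) == 9
```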
So, naïvely, doesn't Yudkowsky's "personally prudent to post your agreement with Stalin"[^gambit] gambit constitute giving in to an extortion threat of the form, "Support the progressive position, or we'll hurt you", which Yudkowsky's own decision theory says not to do?
[^gambit]: In _ways that exhibit generally rationalist principles_, natch.

I can think of two reasons why the naïve objection might fail. (And who can say but that a neutral expert witness on decision theory wouldn't think of more?)

First, the true decision theory is subtler than "defy anything that you can commonsensically pattern-match as looking like 'extortion'"; the case for resisting extortion specifically rests on there existing a subjunctive dependence between your decision and the extortionist's decision (they threaten _because_ you'll give in, or don't bother _because_ you won't), and the relevant subjunctive dependence doesn't obviously pertain in the real-life science intellectual _vs._ social justice mob match-up. If the mob has been trained from past experience to predict that their targets will give in, should you defy them now in order to somehow make your current situation "less real"? Depending on the correct theory of logical counterfactuals, the correct stance might be ["We don't negotiate with terrorists, but we do appease bears"](/2019/Dec/political-science-epigrams/) (because the bear's response isn't calculated based on our response), and the progressive egregore might be relevantly bear-like.

On the other hand, the relevant subjunctive dependence doesn't obviously _not_ pertain, either!

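
The terrorist/bear distinction can likewise be sketched as a toy model: what matters is whether the threatener's behavior is computed from a prediction of your policy. The payoffs below are invented for illustration:

```python
# Toy comparison: defying a predictor-extortionist vs. defying a "bear"
# whose behavior doesn't depend on your policy. Invented payoffs:
# complying costs 5, getting hurt for defiance costs 10.

def agent_ev(defies: bool, threatener_models_you: bool) -> int:
    """Agent's payoff given its policy and the threatener's type."""
    if threatener_models_you:
        # The extortionist only bothers threatening predicted cavers.
        threatened = not defies
    else:
        threatened = True  # the bear attacks regardless of your policy
    if not threatened:
        return 0
    return -10 if defies else -5  # get hurt, or give in

# Against an extortionist, the defiant policy never gets threatened:
assert agent_ev(defies=True, threatener_models_you=True) == 0
assert agent_ev(defies=False, threatener_models_you=True) == -5
# Against a bear, "defiance" just means getting mauled:
assert agent_ev(defies=True, threatener_models_you=False) == -10
assert agent_ev(defies=False, threatener_models_you=False) == -5
```

On this sketch, whether defiance is the right policy turns entirely on the `threatener_models_you` flag, which is exactly the empirical question about the mob that the previous paragraph leaves open.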
[TODO: defying threats, cont'd—
 * Yudkowsky does seemingly back commonsensical interpretations, re voting, or how, back in 'aught-nine, SingInst had made a point of prosecuting Tyler Emerson, citing decision theory
* But the parsing of social justice as an agentic "threat" to be avoided rather than a rock to be dodged does seem to line up with the fact that people punish heretics more than infidels.