+[I tend to be hesitant to use the term "bad faith"](https://www.lesswrong.com/posts/e4GBj6jxRZcsHFSvP/assume-bad-faith), because I see it thrown around more than I think people know what it means, but it fits here. "Bad faith" doesn't mean "with ill intent", and it's more specific than "dishonest": it's [adopting the surface appearance of being moved by one set of motivations, while acting from another](https://en.wikipedia.org/wiki/Bad_faith).
+
+For example, an [insurance adjuster](https://en.wikipedia.org/wiki/Claims_adjuster) who goes through the motions of investigating your claim while privately intending to deny it might never consciously tell an explicit "lie", but is acting in bad faith: they're asking you questions, demanding evidence, _&c._ to make it look like you'll get paid if you prove the loss occurred—whereas in reality, you're just not going to be paid. Your responses to the adjuster aren't causally inert: if you can make an extremely strong case that the loss occurred as you say, then the adjuster might need to put effort into coming up with an ingenious excuse to deny your claim, in ways that exhibit general claims-adjustment principles. But ultimately, the adjuster is going to say what they need to say in order to protect the company's loss ratio, as is sometimes personally prudent.
+
+With this understanding of bad faith, we can read Yudkowsky's "it is sometimes personally prudent [...]" comment as admitting that his behavior on politically charged topics is in bad faith—where "bad faith" isn't a meaningless dismissal, but [literally refers](http://benjaminrosshoffman.com/can-crimes-be-discussed-literally/) to the behavior of pretending to motivations other than one's actual ones, such that accusations of bad faith can be true or false. Yudkowsky will [take care not to consciously tell an explicit "lie"](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases), while going through the motions to make it look like he's genuinely engaging with questions where I need the right answers in order to make extremely impactful social and medical decisions—whereas in reality, he's only going to address a selected subset of the relevant evidence and arguments that won't get him in trouble with progressives.
+
+To his credit, he will admit that he's only willing to address a selected subset of arguments—but while doing so, he claims an absurd "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking" while blatantly mischaracterizing his opponents' beliefs! ("Gendered Pronouns for Everyone and Asking To Leave the System Is Lying" doesn't pass anyone's [ideological Turing test](https://www.econlib.org/archives/2011/06/the_ideological.html).)
+
+Counterarguments aren't completely causally inert: if you can make an extremely strong case that Biological Sex Is Sometimes More Relevant Than Subjective Gender Identity (Such That Some People Perceive an Interest in Using Language Accordingly), Yudkowsky will put some effort into coming up with some ingenious excuse for why he _technically_ never said otherwise, in ways that exhibit generally rationalist principles. But ultimately, Yudkowsky is going to say what he needs to say in order to protect his reputation with progressives, as is sometimes personally prudent.
+
+<a id="concern-trolling"></a>Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that Yudkowsky is making the wrong decision. Again, "bad faith" is meant as a literal description that makes predictions about behavior—maybe there are circumstances in which engaging in some amount of bad faith is the right thing to do, given the constraints one faces! For example, when talking to people on Twitter with a very different ideological background from mine, I sometimes anticipate that if my interlocutor knew what I was thinking, they wouldn't want to talk to me, so I word my replies to [seem more ideologically aligned with them than I actually am](https://geekfeminism.fandom.com/wiki/Concern_troll). (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) I think of this as the minimal amount of strategic bad faith needed to keep the conversation going—to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy. I'm willing to defend this behavior. There _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that in the service of correcting a distortion: I don't think my interlocutor _should_ be paying attention to my alignment in the first place.
+
+That is, my bad faith concern-trolling gambit of misrepresenting my ideological alignment to improve the discussion seems beneficial to the accuracy of our collective beliefs about the topic. (And the topic is presumably of greater collective interest than which "side" I personally happen to be on.)
+
+In contrast, the "it is sometimes personally prudent [...] to post your agreement with Stalin" gambit is the exact reverse: it's _introducing_ a distortion into the discussion in the hopes of correcting people's beliefs about the speaker's ideological alignment. (Yudkowsky is not a right-wing Bad Guy, but people would tar him as one if he ever said anything negative about trans people.) This doesn't improve our collective beliefs about the topic; it's a _pure_ ass-covering move.
+
+Yudkowsky names the alleged fact that "people do _know_ they're living in a half-Stalinist environment" as a mitigating factor. But the reason censorship is such an effective tool in the hands of dictators like Stalin is that it ensures that many people _don't_ know—and that those who do know (or suspect) don't have [game-theoretic common knowledge](https://www.lesswrong.com/posts/9QxnfMYccz9QRgZ5z/the-costly-coordination-mechanism-of-common-knowledge#Dictators_and_freedom_of_speech).
+
+Zvi Mowshowitz has [written about how the false assertion that "everybody knows" something](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) is used to justify deception: if "everybody knows" that we can't talk about biological sex, then no one is being deceived when our allegedly truthseeking discussion carefully steers clear of any reference to the reality of biological sex even when it's extremely relevant.