+If the idea of being fired from the Snodgrass campaign or being unpopular with progressives is _so_ terrifying to you that it seems analogous to getting shot, and if those are really your true values, then sure: say whatever you need to say to keep your job and your popularity, as is personally prudent. You've set your price. But if the price you put on the intellectual integrity of your so-called "rationalist" community is similar to that of the Snodgrass for Mayor campaign, you shouldn't be surprised if intelligent, discerning people accord similar levels of credibility to the two groups' output.
+
+I see the phrase "bad faith" thrown around more than I think people know what it means. "Bad faith" doesn't mean "with ill intent", and it's more specific than "dishonest": it's [adopting the surface appearance of being moved by one set of motivations, while actually acting from another](https://en.wikipedia.org/wiki/Bad_faith).
+
+For example, an [insurance company employee](https://en.wikipedia.org/wiki/Claims_adjuster) who goes through the motions of investigating your claim while privately intending to deny it might never consciously tell an explicit "lie", but is definitely acting in bad faith: they're asking you questions, demanding evidence, _&c._ in order to _make it look like_ you'll get paid if you prove the loss occurred—whereas in reality, you're just not going to be paid. Your responses to the claim inspector aren't completely causally _inert_: if you can make an extremely strong case that the loss occurred as you say, then the claim inspector might need to put some effort into coming up with some ingenious excuse to deny your claim in ways that exhibit general claim-inspection principles. But at the end of the day, the inspector is going to say what they need to say in order to protect the company's loss ratio, as is personally prudent.
+
+With this understanding of bad faith, we can read Yudkowsky's "it is sometimes personally prudent [...]" comment as admitting that his behavior on politically-charged topics is in bad faith—where "bad faith" isn't a meaningless insult, but [literally refers](http://benjaminrosshoffman.com/can-crimes-be-discussed-literally/) to the pretending-to-have-one-set-of-motivations-while-acting-according-to-another behavior, such that accusations of bad faith can be true or false. Yudkowsky will never consciously tell an explicit "lie", but he'll go through the motions to _make it look like_ he's genuinely engaging with questions where I need the right answers in order to make extremely impactful social and medical decisions—whereas in reality, he's only going to address a selected subset of the relevant evidence and arguments that won't get him in trouble with progressives.
+
+To his credit, he _will_ admit that he's only willing to address a selected subset of arguments—but while doing so, he claims an absurd "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking" while _simultaneously_ blatantly mischaracterizing his opponents' beliefs! ("Gendered Pronouns For Everyone and Asking To Leave The System Is Lying" doesn't pass anyone's [ideological Turing test](https://www.econlib.org/archives/2011/06/the_ideological.html).)
+
+Counterarguments aren't completely causally _inert_: if you can make an extremely strong case that Biological Sex Is Sometimes More Relevant Than Self-Declared Gender Identity, Yudkowsky will put some effort into coming up with some ingenious excuse for why he _technically_ never said otherwise, in ways that exhibit generally rationalist principles. But at the end of the day, Yudkowsky is going to say what he needs to say in order to protect his reputation, as is personally prudent.
+
+Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that Yudkowsky is making the wrong decision. Again, "bad faith" is meant as a literal description, not a contentless attack—maybe there are some circumstances in which engaging in some amount of bad faith is the right thing to do, given the constraints one faces. For example, when talking to people on Twitter with a very different ideological background from me, I sometimes anticipate that if my interlocutor knew what I was actually thinking, they wouldn't want to talk to me, so I take care to word my replies in a way that makes it look like I'm more ideologically aligned with them than I actually am. (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) I think of this as the _minimal_ amount of strategic bad faith needed to keep the conversation going, to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy. In cases such as these, I'm willing to defend my behavior as acceptable—there _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that amount and scope of deception because I don't think my interlocutor _should_ be paying attention to my personal alignment.
+
+That is, my bad faith Twitter gambit of deceiving people about my ideological alignment in the hopes of improving the discussion seems like something that makes our collective beliefs about the topic-being-argued-about _more_ accurate. (And the topic-being-argued-about is presumably of greater collective interest than which "side" I personally happen to be on.)
+
+In contrast, Yudkowsky's bad faith gambit is the exact reverse: he's making the discussion worse in the hopes of correcting people's beliefs about his own ideological alignment. (He's not a right-wing Bad Guy, but people would tar him as a right-wing Bad Guy if he ever said anything negative about trans people.) This doesn't improve our collective beliefs about the topic-being-argued-about; it's a _pure_ ass-covering move.
+
+Yudkowsky names the alleged fact that "people do _know_ they're living in a half-Stalinist environment" as a mitigating factor. But the _reason_ censorship is such an effective tool in the hands of dictators like Stalin is that it ensures that many people _don't_ know—and that those who know (or suspect) don't have [game-theoretic common knowledge](https://www.lesswrong.com/posts/9QxnfMYccz9QRgZ5z/the-costly-coordination-mechanism-of-common-knowledge#Dictators_and_freedom_of_speech) that others do too.
+
+Zvi Mowshowitz has [written about how the false assertion that "everybody knows" something](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) is typically used to justify deception: if "everybody knows" that we can't talk about biological sex (the reasoning goes), then no one is being deceived when our allegedly truthseeking discussion carefully steers clear of any reference to the reality of biological sex when it would otherwise be extremely relevant.
+
+But if it were _actually_ the case that everybody knew (and everybody knew that everybody knew), then what would be the point of the censorship? It's not coherent to claim that no one is being harmed by censorship because everyone knows about it, because the entire appeal and purpose of censorship is precisely that _not_ everybody knows and that someone with power wants to _keep_ it that way.
+
+For the savvy people in the know, it would certainly be _convenient_ if everyone secretly knew: then the savvy people wouldn't have to face the tough choice between acceding to Power's demands (at the cost of deceiving their readers) and informing their readers (at the cost of incurring Power's wrath).
+
+Policy debates should not appear one-sided. Faced with this kind of dilemma, I can't say that defying Power is necessarily the right choice: if there really _were_ no other options between deceiving your readers with a bad faith performance, and incurring Power's wrath, and Power's wrath would be too terrible to bear, then maybe deceiving your readers with a bad faith performance is the right thing to do.
+
+But if you actually _cared_ about not deceiving your readers, you would want to be _really sure_ that those _really were_ the only two options. You'd [spend five minutes by the clock looking for third alternatives](https://www.lesswrong.com/posts/erGipespbbzdG5zYb/the-third-alternative)—including, possibly, not issuing proclamations on your honor as leader of the so-called "rationalist" community on topics where you _explicitly intend to ignore counterarguments on grounds of their being politically unfavorable_. Yudkowsky rejects this alternative on the grounds that it allegedly implies "utter silence about everything Stalin has expressed an opinion on including '2 + 2 = 4' because if that logically counterfactually were wrong you would not be able to express an opposing opinion", but this seems like yet another instance of Yudkowsky playing dumb: if he _wanted_ to, I'm sure Eliezer Yudkowsky could think of _some relevant differences_ between "2 + 2 = 4" (a trivial fact of arithmetic) and "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he"'" (a complex policy proposal whose flaws I have analyzed in detail above).
+
+
+["People are better off at the end of that"— _who_ is better off? We need a conflict-theoretic analysis]
+
+[I was going to save the Whole Dumb Story for a different post, and keep this post narrowly scoped to just critiquing the Feb. 2021 pronouns post, but in order to explain the problem with "Everybody knows" and "People are better off after that", I need to briefly summarize the context of this discussion which explains why _I_ didn't know and _I'm_ not better off—with the understanding that this is only a brief summary, and I might tell the long version in a separate post—if it's still necessary, relative to everything else I need to get around to writing]
+
+[I _never_ expected to end up arguing about the minutiae of pronoun conventions; I wanted to talk about the real issues]
+
+[It all started back in the 'aughts, when the occasional things about sex differences that cropped up in the Sequences (especially "Changing Emotions" and the Extropians mailing list pre) turned out to be useful for understanding what was going on with my gender thing; I wrote about this in a previous post, ["Sexual Dimorphism in Yudkowsky's Sequences, in Relation to my Gender Problems"](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/).]
+
+[But that was all about me—I assumed "trans" was a different thing. My first clue that I might not be living in that world came from—Eliezer Yudkowsky, with the "at least 20% of the ones with penises are actually women" thing]
+
+[So I ended up arguing with people about the two-type taxonomy, and I noticed that those discussions kept getting _derailed_ on some variation of "The word woman doesn't actually mean that". So I took the bait, and started arguing against that, and then Yudkowsky comes back to the subject with his "Hill of Validity in Defense of Meaning"—and I go on a philosophy of language crusade, and Yudkowsky eventually clarifies, and _then_ he comes back _again_ in Feb. 2021 with his "simplest and best protocol"]