From: M. Taylor Saotome-Westlake Date: Wed, 26 Oct 2022 05:01:26 +0000 (-0700) Subject: memoir: yank out a pt. 4 X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=be5246fbaca3b687c87cd889221aaaeb837bda18;p=Ultimately_Untrue_Thought.git memoir: yank out a pt. 4 Something feels wrong about the thought of mirroring a 50K word post on Less Wrong. People would have to scroll down too far to read the comments, and I want people to read the comments? Page load time on that bloated JavaScript site? (Shouldn't be an issue; Jessica's dramapost is slow to load, but that's almost certainly because of the db JOINing all those comments, not because there's too much text.) For whatever reason, a 35K word post feels better. Not entirely sure about the title or the epigraph. --- diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index a613dcd..4e1ff52 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -1139,529 +1139,3 @@ I followed it up with another email after I woke up the next morning: https://www.facebook.com/yudkowsky/posts/10158853851009228 _ex cathedra_ statement that gender categories are not an exception to the rule, only 1 year and 8 months after asking for it ] - -[TODO: Sasha disaster, breakup with Vassar group] - -[TODO: "Unnatural Categories Are Optimized for Deception" - -Abram was right - -the fact that it didn't means that not tracking it can be an effective AI design! Just because evolution takes shortcuts that human engineers wouldn't doesn't mean shortcuts are "wrong" (instead, there are laws governing which kinds of shortcuts work). - -Embedded agency means that the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'my own' code." In that light, it makes sense to regard "have accurate beliefs" as merely a convergent instrumental subgoal, rather than what rationality is about - -somehow accuracy seems more fundamental than power or resources ... could that be formalized? -] - -And really, that _should_ have been the end of the story. At the trifling cost of two years of my life, we finally got a clarification from Yudkowsky that you can't define the word _woman_ any way you like. I didn't think I was entitled to anything more than that. I was satisfied. I still published "Unnatural Categories Are Optimized for Deception" in January 2021, but if I hadn't been further provoked, I wouldn't have had occasion to continue waging the robot-cult religious civil war. - -[TODO: NYT affair and Brennan link -https://astralcodexten.substack.com/p/statement-on-new-york-times-article -https://reddragdiva.tumblr.com/post/643403673004851200/reddragdiva-topher-brennan-ive-decided-to-say -https://www.facebook.com/yudkowsky/posts/10159408250519228 - -Scott Aaronson on the Times's hit piece on Scott Alexander— -https://scottaaronson.blog/?p=5310 -> The trouble with the NYT piece is not that it makes any false statements, but just that it constantly insinuates nefarious beliefs and motives, via strategic word choices and omission of relevant facts that change the emotional coloration of the facts that it does present. - -] - -...
except that Yudkowsky reopened the conversation in February 2021, with [a new Facebook post](https://www.facebook.com/yudkowsky/posts/10159421750419228) explaining the origins of his intuitions about pronoun conventions and concluding that, "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he", with a default for those-who-haven't-asked that goes by gamete size' and to say that this just _is_ the normative definition. Because it is _logically rude_, not just socially rude, to try to bake any other more complicated and controversial definition _into the very language protocol we are using to communicate_." - -(_Why?_ Why reopen the conversation, from the perspective of his chessboard? Wouldn't it be easier to just stop digging?) - -I explained what's wrong with Yudkowsky's new arguments at the length of 12,000 words in March 2022's ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/), but I find myself still having more left to analyze. The February 2021 post on pronouns is a _fascinating_ document, in its own way—a penetrating case study on the effects of politics on a formerly great mind. - -Yudkowsky begins by setting the context of "[h]aving received a bit of private pushback" on his willingness to declare that asking someone to use a different pronoun is not lying. - -But ... the _reason_ he got a bit ("a bit") of private pushback was _because_ the original "hill of meaning" thread was so blatantly optimized to intimidate and delegitimize people who want to use language to reason about biological sex. The pushback wasn't about using trans people's preferred pronouns (I do that, too), or about not wanting pronouns to imply sex (sounds fine, if we were in the position of defining a conlang from scratch); the _problem_ is using an argument that's ostensibly about pronouns to sneak in an implicature ("Who competes in sports segregated around an Aristotelian binary is a policy question [ ] that I personally find very humorous") that it's dumb and wrong to want to talk about the sense in which trans women are male and trans men are female, as a _fact about reality_ that continues to be true even if it hurts someone's feelings, and even if policy decisions made on the basis of that fact are not themselves a fact (as if anyone had doubted this). - -In that context, it's revealing that in this post attempting to explain why the original thread seemed like a reasonable thing to say, Yudkowsky ... doubles down on going out of his way to avoid acknowledging the reality of biological sex. He learned nothing! We're told that the default pronoun for those who haven't asked goes by "gamete size." - -But ... I've never _measured_ how big someone's gametes are, have you? We can only _infer_ whether strangers' bodies are configured to produce small or large gametes by observing [a variety of correlated characteristics](https://en.wikipedia.org/wiki/Secondary_sex_characteristic). Furthermore, for trans people who don't pass but are visibly trying to, one presumes that we're supposed to use the pronouns corresponding to their gender presentation, not their natal sex. - -Thus, Yudkowsky's "default for those-who-haven't-asked that goes by gamete size" clause _can't be taken literally_.
The only way I can make sense of it is to interpret it as a way to point at the prevailing reality that people are good at noticing what sex other people are, but that we want to be kind to people who are trying to appear to be the other sex, without having to admit to it. - -One could argue that this is hostile nitpicking on my part: that the use of "gamete size" as a metonym for sex here is either an attempt to provide an unambiguous definition (because if you said _female_ or _male sex_, someone could ask what you meant by that), or that it's at worst a clunky choice of words, not an intellectually substantive decision that can be usefully critiqued. - -But the claim that Yudkowsky is only trying to provide an unambiguous definition isn't consistent with the text's claim that "[i]t would still be logically rude to demand that other people use only your language system and interpretation convention in order to communicate, in advance of them having agreed with you about the clustering thing". And the post also seems to suggest that the motive isn't to avoid ambiguity. Yudkowsky writes: - -> In terms of important things? Those would be all the things I've read—from friends, from strangers on the Internet, above all from human beings who are people—describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate, or perhaps at all. -> -> And I'm not happy that the very language I use, would try to force me to take a position on that; not a complicated nuanced position, but a binarized position, _simply in order to talk grammatically about people at all_. - -What does the "tossed into a bucket" metaphor refer to, though? I can think of many different things that might be summarized that way, and my sympathy for the one who does not like to be tossed into a bucket depends a lot on exactly what real-world situation is being mapped to the bucket. - -If we're talking about overt _gender role enforcement attempts_—things like, "You're a girl, therefore you need to learn to keep house for your future husband", or "You're a man, therefore you need to toughen up"—then indeed, I strongly support people who don't want to be tossed into that kind of bucket. - -(There are [historical reasons for the buckets to exist](/2020/Jan/book-review-the-origins-of-unfairness/), but I'm eager to bet on modern Society being rich enough and smart enough to either forgo the buckets, or at least let people opt out of the default buckets, without causing too much trouble.) - -But importantly, my support for people not wanting to be tossed into gender role buckets is predicated on their reasons for not wanting that _having genuine merit_—things like "The fact that I'm a juvenile female human doesn't mean I'll have a husband; I'm actually planning to become a nun", or "The sex difference in Big Five Neuroticism is only _d_ ≈ 0.4; your expectation that I be able to toughen up is not reasonable given the information you have about me in particular, even if most adult human males are tougher than me". I _don't_ think people have a _general_ right to prevent others from using sex categories to make inferences or decisions about them, _because that would be crazy_. If a doctor were to recommend I get a prostate cancer screening on account of my being male and therefore at risk for prostate cancer, it would be _bonkers_ for me to reply that I don't like being tossed into a Male Bucket like that.
- -While piously appealing to the feelings of people describing reasons they do not want to be tossed into a Male Bucket or a Female Bucket, Yudkowsky does not seem to be distinguishing between reasons that have merit, and reasons that do not have merit. The post continues (bolding mine): - -> In a wide variety of cases, sure, ["he" and "she"] can clearly communicate the unambiguous sex and gender of something that has an unambiguous sex and gender, much as a different language might have pronouns that sometimes clearly communicated hair color to the extent that hair color often fell into unambiguous clusters. -> -> But if somebody's hair color is halfway between two central points? If their civilization has developed stereotypes about hair color they're not comfortable with, such that they feel that the pronoun corresponding to their outward hair color is something they're not comfortable with because they don't fit key aspects of the rest of the stereotype and they feel strongly about that? If they have dyed their hair because of that, or **plan to get hair surgery, or would get hair surgery if it were safer but for now are afraid to do so?** Then it's stupid to try to force people to take complicated positions about those social topics _before they are allowed to utter grammatical sentences_. - -So, I agree that a language convention in which pronouns map to hair color doesn't seem great, and that the people in this world should probably coordinate on switching to a better convention, if they can figure out how. - -But taking as given the existence of a convention in which pronouns refer to hair color, a demand to be referred to as having a hair color _that one does not in fact have_ seems pretty outrageous to me! - -It makes sense to object to the convention forcing a binary choice in the "halfway between two central points" case. That's an example of _genuine_ nuance brought on by a _genuine_ challenge to a system that _falsely_ assumes discrete hair colors. - -But ... "plan to get hair surgery"? "Would get hair surgery if it were safer but for now are afraid to do so"? In what sense do these cases present a challenge to the discrete system and therefore call for complication and nuance? There's nothing ambiguous about these cases: if you haven't, in fact, changed your hair color, then your hair is, in fact, its original color. The decision to get hair surgery does not _propagate backwards in time_. The decision to get hair surgery cannot be _imported from a counterfactual universe in which it is safer_. People who, today, do not have the hair color that they would prefer, are, today, going to have to deal with that fact _as a fact_. - -Is the idea that we want to use the same pronouns for the same person over time, so that if we know someone is going to get hair surgery—they have an appointment with the hair surgeon at this-and-such date—we can go ahead and switch their pronouns in advance? Okay, I can buy that. - -But extending that to the "would get hair surgery if it were safer" case is _absurd_. No one treats _conditional plans assuming speculative future advances in medical technology_ the same as actual plans. I don't think this case calls for any complicated nuanced position, and I don't see why Eliezer Yudkowsky would suggest that it would, unless the real motive for insisting on complication and nuance is as an obfuscation tactic— - -Unless, at some level, Eliezer Yudkowsky doesn't expect his followers to deal with facts?
- -Maybe the problem is easier to see in the context of a non-gender example. [My previous hopeless ideological war—before this one—was against the conflation of _schooling_ and _education_](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/): I _hated_ being tossed into the Student Bucket, as it would be assigned by my school course transcript, or perhaps at all. - -I sometimes describe myself as "gender dysphoric", because our culture doesn't have better widely-understood vocabulary for my beautiful pure sacred self-identity thing, but if we're talking about suffering and emotional distress, my "student dysphoria" was _vastly_ worse than any "gender dysphoria" I've ever felt. - -But crucially, my tirades against the Student Bucket described reasons not just that _I didn't like it_, but reasons that the bucket was _actually wrong on the empirical merits_: people can and do learn important things by studying and practicing out of their own curiosity and ambition; the system was _actually in the wrong_ for assuming that nothing you do matters unless you do it on the command of a designated "teacher" while enrolled in a designated "course". - -And _because_ my war footing was founded on the empirical merits, I knew that I had to _update_ to the extent that the empirical merits showed that I was in the wrong. In 2010, I took a differential equations class "for fun" at the local community college, expecting to do well and thereby prove that my previous couple years of math self-study had been the equal of any school student's. - -In fact, I did very poorly and scraped by with a _C_. (Subjectively, I felt like I "understood the concepts", and kept getting surprised when that understanding somehow didn't convert into passing quiz scores.) That hurt. That hurt a lot. - -_It was supposed to hurt_. One could imagine a Jane Austen character in this situation doubling down on his antagonism to everything school-related, in order to protect himself from being hurt—to protest that the teacher hated him, that the quizzes were unfair, that the answer key must have had a printing error—in short, that he had been right in every detail all along, and that any suggestion otherwise was credentialist propaganda. - -I knew better than to behave like that—and to the extent that I was tempted, I retained my ability to notice and snap out of it. My failure _didn't_ mean I had been wrong about everything, that I should humbly resign myself to the Student Bucket forever and never dare to question it again—but it _did_ mean that I had been wrong about _something_. I could [update myself incrementally](https://www.lesswrong.com/posts/627DZcvme7nLDrbZu/update-yourself-incrementally)—but I _did_ need to update. (Probably, the update was that "math" encompasses different subskills, and that my glorious self-study had unevenly trained some skills and not others: there was nothing contradictory about my [successfully generalizing one of the methods in the textbook to arbitrary numbers of variables](https://math.stackexchange.com/questions/15143/does-the-method-for-solving-exact-des-generalize-like-this), while _also_ [struggling with the class's assigned problem sets](https://math.stackexchange.com/questions/7984/automatizing-computational-skills).)
- -Someone who uncritically validated my not liking to be tossed into the Student Bucket, instead of assessing my _reasons_ for not liking to be tossed into the Bucket and whether those reasons had merit, would be hurting me, not helping me—because in order to navigate the real world, I need a map that reflects the territory, rather than my narcissistic fantasies. I'm a better person for straightforwardly facing the shame of getting a _C_ in community college differential equations, rather than trying to deny it or run away from it or claim that it didn't mean anything. Part of updating myself incrementally was that I would get _other_ chances to prove that my autodidacticism _could_ match the standard set by schools. (My professional and open-source programming career obviously does not owe itself to the two Java courses I took at community college. When I audited honors analysis at UC Berkeley "for fun" in 2017, I did fine on the midterm. When applying for a new dayjob in 2018, the interviewer, noting my lack of a degree, said he was going to give a version of the interview without a computer science theory question. I insisted on being given the "college" version of the interview, solved a dynamic programming problem, and got the job. And so on.) - -If you can see why uncritically affirming people's current self-image isn't the right solution to "student dysphoria", it _should_ be obvious why the same is true of gender dysphoria. There's a very general underlying principle, that it matters whether someone's current self-image is actually true. - -In an article titled ["Actually, I Was Just Crazy the Whole Time"](https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole), FtMtF detransitioner Michelle Alleva contrasts her beliefs at the time of deciding to transition with her current beliefs. While transitioning, she accounted for many pieces of evidence about herself ("dislike attention as a female", "obsessive thinking about gender", "didn't fit in with the girls", _&c_.) in terms of the theory "It's because I'm trans." But now, Alleva writes, she thinks she has a variety of better explanations that, all together, cover everything on the original list: "It's because I'm autistic", "It's because I have unresolved trauma", "It's because women are often treated poorly" ... including "That wasn't entirely true" (!!). - -This is a _rationality_ skill. Alleva had a theory about herself, and then she _revised her theory upon further consideration of the evidence_. Beliefs about one's self aren't special and can—must—be updated using the _same_ methods that you would use to reason about anything else—[just as a recursively self-improving AI would reason the same about transistors "inside" the AI and transistors in "the environment."](https://www.lesswrong.com/posts/TynBiYt6zg42StRbb/my-kind-of-reflection) - -(Note, I'm specifically praising the _form_ of the inference, not necessarily the conclusion to detransition. If someone else in different circumstances weighed up the evidence about _them_-self, and concluded that they _are_ trans in some _specific_ objective sense on the empirical merits, that would _also_ be exhibiting the skill. For extremely sex-role-nonconforming same-natal-sex-attracted transsexuals, you can at least see why the "born in the wrong body" story makes some sense as a handwavy [first approximation](/2022/Jul/the-two-type-taxonomy-is-a-useful-approximation-for-a-more-detailed-causal-model/).
It's just that for males like me, and separately for females like Michelle Alleva, the story doesn't add up.) - -This also isn't a particularly _advanced_ rationality skill. This is very basic—something novices should grasp during their early steps along the Way. - -Back in 'aught-nine, in the early days of _Less Wrong_, when I still hadn't grown out of [my teenage religion of psychological sex differences denialism](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism), there was an exchange in the comment section between me and Yudkowsky that still sticks with me. Yudkowsky had claimed that he had ["never known a man with a true female side, and [...] never known a woman with a true male side, either as authors or in real life."](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/K8YXbJEhyDwSusoY2) Offended at our leader's sexism, I passive-aggressively [asked him to elaborate](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way?commentId=AEZaakdcqySmKMJYj), and as part of [his response](https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/W4TAp4LuW3Ev6QWSF), he mentioned that he "sometimes wish[ed] that certain women would appreciate that being a man is at least as complicated and hard to grasp and a lifetime's work to integrate, as the corresponding fact of feminity [_sic_]." - -[I replied](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/7ZwECTPFTLBpytj7b) (bolding added): - -> I sometimes wish that certain men would appreciate that not all men are like them—**or at least, that not all men _want_ to be like them—that the fact of masculinity is [not _necessarily_ something to integrate](https://www.lesswrong.com/posts/vjmw8tW6wZAtNJMKo/which-parts-are-me).** - -_I knew_. Even then, _I knew_ I had to qualify my not liking to be tossed into a Male Bucket. I could object to Yudkowsky speaking as if men were a collective with shared normative ideals ("a lifetime's work to integrate"), but I couldn't claim to somehow not be male, or _even_ that people couldn't make probabilistic predictions about me given the fact that I'm male ("the fact of masculinity"), _because that would be crazy_. The culture of early _Less Wrong_ wouldn't have let me get away with that. - -It would seem that in the current year, that culture is dead—or at least, if it does have any remaining practitioners, they do not include Eliezer Yudkowsky. - -At this point, some people would argue that I'm being too uncharitable in harping on the "not liking to be tossed into a [...] Bucket" paragraph. The same post does _also_ explicitly say that "[i]t's not that no truth-bearing propositions about these issues can possibly exist." I _agree_ that there are some interpretations of "not lik[ing] to be tossed into a Male Bucket or Female Bucket" that make sense, even though biological sex denialism does not make sense. Given that the author is Eliezer Yudkowsky, should I not give him the benefit of the doubt and assume that he "really meant" to communicate the reading that does make sense, rather than the one that doesn't make sense? - -I reply: _given that the author is Eliezer Yudkowsky_, no, obviously not.
I have been ["trained in a theory of social deception that says that people can arrange reasons, excuses, for anything"](https://www.glowfic.com/replies/1820866#reply-1820866), such that it's informative ["to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited."](http://www.hpmor.com/chapter/47) Yudkowsky is just _too talented of a writer_ for me to excuse his words as an accidental artifact of unclear writing. Where the text is ambiguous about whether biological sex is a real thing that people should be able to talk about, I think it's _deliberately_ ambiguous. When smart people act dumb, it's often wise to conjecture that their behavior represents [_optimized_ stupidity](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie)—apparent "stupidity" that achieves a goal through some other channel than their words straightforwardly reflecting the truth. Someone who was _actually_ stupid wouldn't be able to generate text with a specific balance of insight and selective stupidity fine-tuned to reach a gender-politically convenient conclusion without explicitly invoking any controversial gender-political reasoning. I think the point of the post is to pander to the biological sex denialists in his robot cult, without technically saying anything unambiguously false that someone could point out as a "lie." - -Consider the implications of Yudkowsky giving us a clue as to the political forces at play in the form of [a disclaimer comment](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421833274228): - -> It unfortunately occurs to me that I must, in cases like these, disclaim that—to the extent there existed sensible opposing arguments against what I have just said—people might be reluctant to speak them in public, in the present social atmosphere. That is, in the logical counterfactual universe where I knew of very strong arguments against freedom of pronouns, I would have probably stayed silent on the issue, as would many other high-profile community members, and only Zack M. Davis would have said anything where you could hear it. -> -> This is a filter affecting your evidence; it has not to my own knowledge filtered out a giant valid counterargument that invalidates this whole post. I would have kept silent in that case, for to speak then would have been dishonest. -> -> Personally, I'm used to operating without the cognitive support of a civilization in controversial domains, and have some confidence in my own ability to independently invent everything important that would be on the other side of the filter and check it myself before speaking. So you know, from having read this, that I checked all the speakable and unspeakable arguments I had thought of, and concluded that this speakable argument would be good on net to publish, as would not be the case if I knew of a stronger but unspeakable counterargument in favor of Gendered Pronouns For Everyone and Asking To Leave The System Is Lying. -> -> But the existence of a wide social filter like that should be kept in mind; to whatever quantitative extent you don't trust your ability plus my ability to think of valid counterarguments that might exist, as a Bayesian you should proportionally update in the direction of the unknown arguments you speculate might have been filtered out.
- -So, the explanation of [the problem of political censorship filtering evidence](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) here is great, but the part where Yudkowsky claims "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter" is just _laughable_. My point that _she_ and _he_ have existing meanings that you can't just ignore by fiat given that the existing meanings are _exactly_ what motivate people to ask for new pronouns in the first place is _really obvious_. - -Really, it would be _less_ embarrassing for Yudkowsky if he were outright lying about having tried to think of counterarguments. The original post isn't _that_ bad if you assume that Yudkowsky was writing off the cuff, that he clearly just _didn't put any effort whatsoever_ into thinking about why someone might disagree. If he _did_ put in the effort—enough that he felt comfortable bragging about his ability to see the other side of the argument—and _still_ ended up proclaiming his "simplest and best protocol" without even so much as _mentioning_ any of its incredibly obvious costs ... that's just _pathetic_. If Yudkowsky's ability to explore the space of arguments is _that_ bad, why would you trust his opinion about _anything_? - -[TODO: discrediting to the community] - -The disclaimer comment mentions "speakable and unspeakable arguments"—but what, one wonders, is the boundary of the "speakable"? In response to a commenter mentioning the cost of having to remember pronouns as a potential counterargument, Yudkowsky [offers us another clue](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421833274228&reply_comment_id=10159421871809228): - -> People might be able to speak that. A clearer example of a forbidden counterargument would be something like e.g. imagine if there was a pair of experimental studies somehow proving that (a) everybody claiming to experience gender dysphoria was lying, and that (b) they then got more favorable treatment from the rest of society. We wouldn't be able to talk about that. No such study exists to the best of my own knowledge, and in this case we might well hear about it from the other side to whom this is the exact opposite of unspeakable; but that would be an example. - -(As an aside, the wording of "we might well hear about it from _the other side_" (emphasis mine) is _very_ interesting, suggesting that the so-called "rationalist" community is, effectively, a partisan institution, despite its claims to be about advancing the generically human art of systematically correct reasoning.) - -I think (a) and (b) _as stated_ are clearly false, so "we" (who?) fortunately aren't losing much by allegedly not being able to speak them. But what about some _similar_ hypotheses that might be similarly unspeakable for similar reasons?
- -Instead of (a), consider the claim that (a′) self-reports about gender dysphoria are substantially distorted by [socially-desirable responding tendencies](https://en.wikipedia.org/wiki/Social-desirability_bias)—as a notable and common example, heterosexual males with [sexual fantasies about being female](http://www.annelawrence.com/autogynephilia_&_MtF_typology.html) [often falsely deny or minimize the erotic dimension of their desire to change sex](/papers/blanchard-clemmensen-steiner-social_desirability_response_set_and_systematic_distortion.pdf). (The idea that self-reports can be motivatedly inaccurate without the subject consciously "lying" should not be novel to someone who co-blogged with [Robin Hanson](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain) for years!) - -And instead of (b), consider the claim that (b′) transitioning is socially rewarded within particular _subcultures_ (although not Society as a whole), such that many of the same people wouldn't think of themselves as trans or even gender-dysphoric if they lived in a different subculture. - -I claim that (a′) and (b′) are _overwhelmingly likely to be true_. Can "we" talk about _that_? Are (a′) and (b′) "speakable", or not? We're unlikely to get clarification from Yudkowsky, but based on the Whole Dumb Story I've been telling you about how I wasted the last six years of my life on this, I'm going to _guess_ that the answer is broadly No: no, "we" can't talk about that. (_I_ can say it, and people can debate me in a private Discord server where the general public isn't looking, but it's not something someone of Yudkowsky's stature can afford to acknowledge.) - -But if I'm right that (a′) and (b′) should be live hypotheses and that Yudkowsky would consider them "unspeakable", that means "we" can't talk about what's _actually going on_ with gender dysphoria and transsexuality, which puts the whole discussion in a different light. In another comment, Yudkowsky lists some gender-transition interventions he named in the [November 2018 "hill of meaning in defense of validity" Twitter thread](https://twitter.com/ESYudkowsky/status/1067183500216811521)—using a different bathroom, changing one's name, asking for new pronouns, and getting sex reassignment surgery—and notes that none of these are calling oneself a "woman". [He continues](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421986539228&reply_comment_id=10159424960909228): - -> [Calling someone a "woman"] _is_ closer to the right sort of thing _ontologically_ to be true or false. More relevant to the current thread, now that we have a truth-bearing sentence, we can admit of the possibility of using our human superpower of language to _debate_ whether this sentence is indeed true or false, and have people express their nuanced opinions by uttering this sentence, or perhaps a more complicated sentence using a bunch of caveats, or maybe using the original sentence uncaveated to express their belief that this is a bad place for caveats. Policies about who uses what bathroom also have consequences and we can debate the goodness or badness (not truth or falsity) of those policies, and utter sentences to declare our nuanced or non-nuanced position before or after that debate. -> -> Trying to pack all of that into the pronouns you'd have to use in step 1 is the wrong place to pack it.
- -Sure, _if we were in the position of designing a constructed language from scratch_ under current social conditions in which a person's "gender" is understood as a contested social construct, rather than their sex being an objective and undisputed fact, then yeah: in that situation _which we are not in_, you definitely wouldn't want to pack sex or gender into pronouns. But it's a disingenuous derailing tactic to grandstand about how people need to alter the semantics of their _already existing_ native language so that we can discuss the real issues under an allegedly superior pronoun convention when, _by your own admission_, you have _no intention whatsoever of discussing the real issues!_ - -(Lest the "by your own admission" clause seem too accusatory, I should note that given constant behavior, admitting it is _much_ better than not-admitting it; so, huge thanks to Yudkowsky for the transparency on this point!) - -Again, as discussed in "Challenges to Yudkowsky's Pronoun Reform Proposal", a comparison to [the _tú_/_usted_ distinction](https://en.wikipedia.org/wiki/Spanish_personal_pronouns#T%C3%BA/vos_and_usted) is instructive. It's one thing to advocate for collapsing the distinction and just settling on one second-person singular pronoun for the Spanish language. That's principled. - -It's quite another thing altogether to _simultaneously_ try to prevent a speaker from using _tú_ to indicate disrespect towards a social superior (on the stated rationale that the _tú_/_usted_ distinction is dumb and shouldn't exist), while _also_ refusing to entertain or address the speaker's arguments explaining _why_ they think their interlocutor is unworthy of the deference that would be implied by _usted_ (because such arguments are "unspeakable" for political reasons). That's just psychologically abusive. - -If Yudkowsky _actually_ possessed (and felt motivated to use) the "ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking", it would be _obvious_ to him that "Gendered Pronouns For Everyone and Asking To Leave The System Is Lying" isn't the hill anyone would care about dying on if it weren't a Schelling point. A lot of TERF-adjacent folk would be _overjoyed_ to concede the (boring, insubstantial) matter of pronouns as a trivial courtesy if it meant getting to _actually_ address their real concerns of "Biological Sex Actually Exists", and ["Biological Sex Cannot Be Changed With Existing or Foreseeable Technology"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) and "Biological Sex Is Sometimes More Relevant Than Subjective Gender Identity." The reason so many of them are inclined to stand their ground and not even offer the trivial courtesy is because they suspect, correctly, that the matter of pronouns is being used as a rhetorical wedge to try to prevent people from talking or thinking about sex. - -Having analyzed the _ways_ in which Yudkowsky is playing dumb here, what's still not entirely clear is _why_. Presumably he cares about maintaining his credibility as an insightful and fair-minded thinker. Why tarnish that by putting on this haughty performance? - -Of course, presumably he _doesn't_ think he's tarnishing it—but why not? 
[He graciously explains in the Facebook comments](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421833274228&reply_comment_id=10159421901809228): - -> it is sometimes personally prudent and not community-harmful to post your agreement with Stalin about things you actually agree with Stalin about, in ways that exhibit generally rationalist principles, especially because people do _know_ they're living in a half-Stalinist environment [...] I think people are better off at the end of that. - -Ah, _prudence_! He continues: - -> I don't see what the alternative is besides getting shot, or utter silence about everything Stalin has expressed an opinion on including "2 + 2 = 4" because if that logically counterfactually were wrong you would not be able to express an opposing opinion. - -The problem with trying to "exhibit generally rationalist principles" in a line of argument that you're constructing in order to be prudent and not community-harmful is that you're thereby necessarily _not_ exhibiting the central rationalist principle that what matters is the process that _determines_ your conclusion, not the reasoning you present to _justify_ your conclusion after the fact. - -The best explanation of this I know of was authored by Yudkowsky himself in 2007, in a post titled ["A Rational Argument"](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument). It's worth quoting at length. The Yudkowsky of 2007 invites us to consider the plight of a political campaign manager: - -> As a campaign manager reading a book on rationality, one question lies foremost on your mind: "How can I construct an impeccable rational argument that Mortimer Q. Snodgrass is the best candidate for Mayor of Hadleyburg?" -> -> Sorry. It can't be done. -> -> "What?" you cry. "But what if I use only valid support to construct my structure of reason? What if every fact I cite is true to the best of my knowledge, and relevant evidence under Bayes's Rule?" -> -> Sorry. It still can't be done. You defeated yourself the instant you specified your argument's conclusion in advance. - -The campaign manager is in possession of a survey of mayoral candidates on which Snodgrass compares favorably to other candidates, except for one question. The post continues (bolding mine): - -> So you are tempted to publish the questionnaire as part of your own campaign literature ... with the 11th question omitted, of course. -> -> **Which crosses the line between _rationality_ and _rationalization_.** It is no longer possible for the voters to condition on the facts alone; they must condition on the additional fact of their presentation, and infer the existence of hidden evidence. -> -> Indeed, **you crossed the line at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it.** "What!" you cry. "A campaign should publish facts unfavorable to their candidate?" But put yourself in the shoes of a voter, still trying to select a candidate—why would you censor useful information? You wouldn't, if you were genuinely curious. If you were flowing _forward_ from the evidence to an unknown choice of candidate, rather than flowing _backward_ from a fixed candidate to determine the arguments. - -The post then briefly discusses the idea of a "logical" argument, one whose conclusions follow from its premises. "All rectangles are quadrilaterals; all squares are quadrilaterals; therefore, all squares are rectangles" is given as an example of an _illogical_ argument, even though both premises are true (all rectangles and squares are in fact quadrilaterals) _and_ the conclusion is true (all squares are in fact rectangles). The problem is that the conclusion doesn't _follow_ from the premises; the _reason_ all squares are rectangles isn't _because_ they're both quadrilaterals. If we accepted arguments of the general _form_ "all A are C; all B are C; therefore all A are B", we would end up believing nonsense. - -Yudkowsky's conception of a "rational" argument—at least, Yudkowsky's conception in 2007, which the Yudkowsky of the current year seems to disagree with—has a similar flavor: the stated reasons should be the actual reasons. The post concludes: - -> If you really want to present an honest, rational argument _for your candidate_, in a political campaign, there is only one way to do it: -> -> * _Before anyone hires you_, gather up all the evidence you can about the different candidates. -> * Make a checklist which you, yourself, will use to decide which candidate seems best. -> * Process the checklist. -> * Go to the winning candidate. -> * Offer to become their campaign manager. -> * When they ask for campaign literature, print out your checklist. -> -> Only in this way can you offer a _rational_ chain of argument, one whose bottom line was written flowing _forward_ from the lines above it. Whatever _actually_ decides your bottom line is the only thing you can _honestly_ write on the lines above. - -I remember this being pretty shocking to read back in 'aught-seven. What an alien mindset! But it's _correct_. You can't rationally argue "for" a chosen conclusion, because only the process you use to _decide what to argue for_ can be your real reason. - -This is a shockingly high standard for anyone to aspire to live up to—but what made Yudkowsky's Sequences so life-changingly valuable was that they articulated the _existence_ of such a standard. For that, I will always be grateful. - -... which is why it's so _bizarre_ that the Yudkowsky of the current year acts like he's never heard of it! If your _actual_ bottom line is that it is sometimes personally prudent and not community-harmful to post your agreement with Stalin, then sure, you can _totally_ find something you agree with to write on the lines above! Probably something that "exhibits generally rationalist principles", even! It's just that any rationalist who sees the game you're playing is going to correctly identify you as a _partisan hack_ on this topic and take that into account when deciding whether they can trust you on other topics. - -"I don't see what the alternative is besides getting shot," Yudkowsky muses (where presumably, 'getting shot' is a metaphor for a large negative utility, like being unpopular with progressives). Yes, an astute observation! And _any other partisan hack could say exactly the same_, for the same reason. Why does the campaign manager withhold the results of the 11th question? Because he doesn't see what the alternative is besides getting shot. - -Yudkowsky [sometimes](https://www.lesswrong.com/posts/K2c3dkKErsqFd28Dh/prices-or-bindings) [quotes](https://twitter.com/ESYudkowsky/status/1456002060084600832) _Calvin and Hobbes_: "I don't know which is worse, that everyone has his price, or that the price is always so low."
If the idea of being fired from the Snodgrass campaign or being unpopular with progressives is _so_ terrifying to you that it seems analogous to getting shot, then, if those are really your true values, sure—say whatever you need to say to keep your job and your popularity, as is personally prudent. You've set your price. But if the price you put on the intellectual integrity of your so-called "rationalist" community is similar to that of the Snodgrass for Mayor campaign, you shouldn't be surprised if intelligent, discerning people accord similar levels of credibility to the two groups' output. - -I see the phrase "bad faith" thrown around more than I think people know what it means. "Bad faith" doesn't mean "with ill intent", and it's more specific than "dishonest": it's [adopting the surface appearance of being moved by one set of motivations, while actually acting from another](https://en.wikipedia.org/wiki/Bad_faith). - -For example, an [insurance company employee](https://en.wikipedia.org/wiki/Claims_adjuster) who goes through the motions of investigating your claim while privately intending to deny it might never consciously tell an explicit "lie", but is definitely acting in bad faith: they're asking you questions, demanding evidence, _&c._ in order to _make it look like_ you'll get paid if you prove the loss occurred—whereas in reality, you're just not going to be paid. Your responses to the claim inspector aren't completely causally _inert_: if you can make an extremely strong case that the loss occurred as you say, then the claim inspector might need to put some effort into coming up with some ingenious excuse to deny your claim, in ways that exhibit general claim-inspection principles. But at the end of the day, the inspector is going to say what they need to say in order to protect the company's loss ratio, as is sometimes personally prudent. - -With this understanding of bad faith, we can read Yudkowsky's "it is sometimes personally prudent [...]" comment as admitting that his behavior on politically-charged topics is in bad faith—where "bad faith" isn't a meaningless insult, but [literally refers](http://benjaminrosshoffman.com/can-crimes-be-discussed-literally/) to the pretending-to-have-one-set-of-motivations-while-acting-according-to-another behavior, such that accusations of bad faith can be true or false. Yudkowsky will take care not to consciously tell an explicit "lie", while going through the motions to _make it look like_ he's genuinely engaging with questions where I need the right answers in order to make extremely impactful social and medical decisions—whereas in reality, he's only going to address a selected subset of the relevant evidence and arguments that won't get him in trouble with progressives. - -To his credit, he _will_ admit that he's only willing to address a selected subset of arguments—but while doing so, he claims an absurd "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking" while _simultaneously_ blatantly mischaracterizing his opponents' beliefs! ("Gendered Pronouns For Everyone and Asking To Leave The System Is Lying" doesn't pass anyone's [ideological Turing test](https://www.econlib.org/archives/2011/06/the_ideological.html).)
- -Counterarguments aren't completely causally _inert_: if you can make an extremely strong case that Biological Sex Is Sometimes More Relevant Than Self-Declared Gender Identity, Yudkowsky will put some effort into coming up with some ingenious excuse for why he _technically_ never said otherwise, in ways that exhibit generally rationalist principles. But at the end of the day, Yudkowsky is going to say what he needs to say in order to protect his reputation, as is sometimes personally prudent. - -Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that Yudkowsky is making the wrong decision. Again, "bad faith" is meant as a literal description that makes predictions about behavior, not a contentless attack—maybe there are some circumstances in which engaging in some amount of bad faith is the right thing to do, given the constraints one faces! For example, when talking to people on Twitter with a very different ideological background from me, I sometimes anticipate that if my interlocutor knew what I was actually thinking, they wouldn't want to talk to me, so I occasionally engage in a bit of what could be called ["concern trolling"](https://geekfeminism.fandom.com/wiki/Concern_troll): I take care to word my replies in a way that makes it look like I'm more ideologically aligned with my interlocutor than I actually am. (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) I think of this as the _minimal_ amount of strategic bad faith needed to keep the conversation going, to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy. In cases such as these, I'm willing to defend my behavior as acceptable—there _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that amount and scope of deception in the service of correcting the distortion where I don't think my interlocutor _should_ be paying attention to my personal alignment. - -That is, my bad faith concern-trolling gambit of deceiving people about my ideological alignment in the hopes of improving the discussion seems like something that makes our collective beliefs about the topic-being-argued-about _more_ accurate. (And the topic-being-argued-about is presumably of greater collective interest than which "side" I personally happen to be on.) - -In contrast, the "it is sometimes personally prudent [...] to post your agreement with Stalin" gambit is the exact reverse: it's _introducing_ a distortion into the discussion in the hopes of correcting people's beliefs about the speaker's ideological alignment. (Yudkowsky is not a right-wing Bad Guy, but people would tar him as a right-wing Bad Guy if he ever said anything negative about trans people.) This doesn't improve our collective beliefs about the topic-being-argued-about; it's a _pure_ ass-covering move. - -Yudkowsky names the alleged fact that "people do _know_ they're living in a half-Stalinist environment" as a mitigating factor.
But the _reason_ censorship is such an effective tool in the hands of dictators like Stalin is that it ensures that many people _don't_ know—and that those who know (or suspect) don't have [game-theoretic common knowledge](https://www.lesswrong.com/posts/9QxnfMYccz9QRgZ5z/the-costly-coordination-mechanism-of-common-knowledge#Dictators_and_freedom_of_speech) that others do too. - -Zvi Mowshowitz has [written about how the false assertion that "everybody knows" something](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) is typically used to justify deception: if "everybody knows" that we can't talk about biological sex (the rationalization goes), then no one is being deceived when our allegedly truthseeking discussion carefully steers clear of any reference to the reality of biological sex when it would otherwise be extremely relevant. - -But if it were _actually_ the case that everybody knew (and everybody knew that everybody knew), then what would be the point of the censorship? It's not coherent to claim that no one is being harmed by censorship because everyone knows about it, because the entire appeal and purpose of censorship is precisely that _not_ everybody knows and that someone with power wants to _keep_ it that way. - -For the savvy people in the know, it would certainly be _convenient_ if everyone secretly knew: then the savvy people wouldn't have to face the tough choice between acceding to Power's demands (at the cost of deceiving their readers) and informing their readers (at the cost of incurring Power's wrath). - -Policy debates should not appear one-sided. Faced with this kind of dilemma, I can't say that defying Power is necessarily the right choice: if there really _were_ no other options between deceiving your readers with a bad faith performance, and incurring Power's wrath, and Power's wrath would be too terrible to bear, then maybe deceiving your readers with a bad faith performance is the right thing to do. - -But if you _actually cared_ about not deceiving your readers, you would want to be _really sure_ that those _really were_ the only two options. You'd [spend five minutes by the clock looking for third alternatives](https://www.lesswrong.com/posts/erGipespbbzdG5zYb/the-third-alternative)—including, possibly, not issuing proclamations on your honor as leader of the so-called "rationalist" community on topics where you _explicitly intend to ignore counterarguments on grounds of their being politically unfavorable_. Yudkowsky rejects this alternative on the grounds that it allegedly implies "utter silence about everything Stalin has expressed an opinion on including '2 + 2 = 4' because if that logically counterfactually were wrong you would not be able to express an opposing opinion", but this seems like yet another instance of Yudkowsky motivatedly playing dumb: _if he wanted to_, I'm sure Eliezer Yudkowsky could think of _some relevant differences_ between "2 + 2 = 4" (a trivial fact of arithmetic) and "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he"'" (a complex policy proposal whose numerous flaws I have analyzed in detail). - -"[P]eople do _know_ they're living in a half-Stalinist environment," Yudkowsky says. "I think people are better off at the end of that," he says. But who are "people", specifically? One of the problems with utilitarianism is that it doesn't interact well with game theory.
If a policy makes most people better off, at the cost of throwing a few others under the bus, is it the right thing to do? Depending on the details, maybe! But you probably shouldn't expect the victims to meekly go under the wheels without a fight. That's why I'm telling you this 50,000-word sob story about how _I_ didn't know, and _I'm_ not better off. - -In [one of Yudkowsky's roleplaying fiction threads](https://www.glowfic.com/posts/4508), Thellim, a woman hailing from [a saner alternate version of Earth called dath ilan](https://www.lesswrong.com/tag/dath-ilan), [expresses horror and disgust at how shallow and superficial the characters in Jane Austen's _Pride and Prejudice_ are, in contrast to what a human being _should_ be](https://www.glowfic.com/replies/1592898#reply-1592898): - -> [...] the author has made zero attempt to even try to depict Earthlings as having reflection, self-observation, a fire of inner life; most characters in _Pride and Prejudice_ bear the same relationship to human minds as a stick figure bears to a photograph. People, among other things, have the property of trying to be people; the characters in Pride and Prejudice have no visible such aspiration. Real people have concepts of their own minds, and contemplate their prior ideas of themselves in relation to a continually observed flow of their actual thoughts, and try to improve both their self-models and their selves. It's impossible to imagine any of these people, even Elizabeth, as doing that thing Thellim did a few hours ago, where she noticed she was behaving like Verrez and snapped out of it. Just like any particular Verrez always learns to notice he is being Verrez and snap out of it, by the end of any of his alts' novels. - -When someone else doesn't see the problem with Jane Austen's characters, Thellim [redoubles her determination to explain the problem](https://www.glowfic.com/replies/1592987#reply-1592987): "_She is not giving up that easily. Not on an entire planet full of people._" - -Thellim's horror at the fictional world of Jane Austen is basically how I feel about "trans" culture in the current year. It _actively discourages self-modeling!_ People who have cross-sex fantasies are encouraged to reify them into a gender identity which everyone else is supposed to unquestioningly accept. Obvious critical questions about what's actually going on etiologically, what it means for an identity to be true, _&c._ are strongly discouraged as hateful, hurtful, distressing, _&c._ - -The problem is _not_ that I think there's anything wrong with having cross-sex fantasies, and wanting the fantasy to become real—just as Thellim's problem with _Pride and Prejudice_ is not that there's anything wrong with wanting to marry a suitable bachelor. These are perfectly respectable goals. - -The _problem_ is that people who are trying to be people, people who are trying to achieve their goals _in reality_, do so in a way that involves having concepts of their own minds, and trying to improve both their self-models and their selves—and that's _not possible_ in a culture that tries to ban, as heresy, the idea that it's possible for someone's self-model to be wrong. - -A trans woman I follow on Twitter complained that a receptionist at her workplace said she looked like some male celebrity. "I'm so mad," she fumed. "I look like this right now"—there was a photo attached to the Tweet—"how could anyone ever think that was an okay thing to say?"
-
It _is_ genuinely sad that the author of those Tweets didn't get perceived the way she would prefer! But the thing I want her to understand, a thing I think any sane adult should understand—
-
_It was a compliment!_ That receptionist was almost certainly thinking of [David Bowie](https://en.wikipedia.org/wiki/David_Bowie) or [Eddie Izzard](https://en.wikipedia.org/wiki/Eddie_Izzard), rather than being hateful and trying to hurt.
-
The author should have graciously accepted the compliment, and _done something to pass better next time_. The horror of trans culture is that it's impossible to imagine any of these people doing that—of noticing that they're behaving like a TERF's hostile stereotype of a narcissistic, gaslighting trans-identified male and snapping out of it.
-
I want a shared cultural understanding that the _correct_ way to ameliorate the genuine sadness of people not being perceived the way they prefer is through things like _better and cheaper facial feminization surgery_, not _[emotionally blackmailing](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) people out of their ability to report what they see_. I don't _want_ to relinquish [my ability to notice what women's faces look like](/papers/bruce_et_al-sex_discrimination_how_do_we_tell.pdf), even if that means noticing that mine isn't; if I'm sad that it isn't, I can endure the sadness if the alternative is _forcing everyone in my life to doublethink around their perceptions of me_.
-
In a world where surgery is expensive, but some people desperately want to change sex and other people want to be nice to them, there's an incentive gradient in the direction of re-binding our shared concept of "gender" onto things like [ornamental clothing](http://thetranswidow.com/2021/02/18/womens-clothing-is-always-drag-even-on-women/) that are easier to change than secondary sex characteristics.
-
But I would have expected people with the barest inkling of self-awareness and honesty to ... notice the incentives, and notice the problems being created by the incentives, and to talk about the problems in public so that we can coordinate on the best solution, [whatever that turns out to be](/2021/Sep/i-dont-do-policy/)?
-
And if that's too much to expect of the general public—
-
And if it's too much to expect garden-variety "rationalists" to figure out on their own without prompting from their superiors—
-
Then I would have at least expected Eliezer Yudkowsky to take actions _in favor of_ rather than _against_ his faithful students having these very basic capabilities for reflection, self-observation, and ... _speech_? I would have expected Eliezer Yudkowsky to not _actively exert optimization pressure in the direction of transforming me into a Jane Austen character_.
-
This is the part where Yudkowsky or his flunkies accuse me of being uncharitable, of failing at perspective-taking. Obviously, Yudkowsky doesn't _think of himself_ as trying to transform his faithful students into Jane Austen characters. One might ask whether it does not therefore follow that I have failed to understand his position. [As Yudkowsky put it](https://twitter.com/ESYudkowsky/status/1435618825198731270):
-
> The Other's theory of themselves usually does not make them look terrible. And you will not have much luck just yelling at them about how they must really be doing `terrible_thing` instead.
-
But the substance of my accusations is not about Yudkowsky's _conscious subjective narrative_.
I don't have a lot of uncertainty about Yudkowsky's _theory of himself_, because he told us that, very clearly: "it is sometimes personally prudent and not community-harmful to post your agreement with Stalin about things you actually agree with Stalin about, in ways that exhibit generally rationalist principles, especially because people do _know_ they're living in a half-Stalinist environment." I don't doubt that that's [how the algorithm feels from the inside](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside).
-
But my complaint is about the work the algorithm is _doing_ in Stalin's service, not about how it _feels_; I'm talking about a pattern of _publicly visible behavior_ stretching over years. (Thus, "take actions" in favor of/against, rather than "be"; "exert optimization pressure in the direction of", rather than "try".) I agree that everyone has a story in which they don't look terrible, and that people mostly believe their own stories, but _it does not therefore follow_ that no one ever does anything terrible.
-
I agree that you won't have much luck yelling at the Other about how they must really be doing `terrible_thing`. (People get very invested in their own stories.) But if you have the _receipts_ of the Other repeatedly doing `terrible_thing` in public over a period of years, maybe yelling about it to _everyone else_ might help _them_ stop getting suckered by the Other's fraudulent story.
-
Let's recap.
-
[TODO: recap—
* in 2009, "Changing Emotions"
* in 2016, "20% of the ones with penises"
* ...
]
-
I _never_ expected to end up arguing about something so _trivial_ as the minutiae of pronoun conventions (which no one would care about if historical contingencies of the evolution of the English language hadn't made them a Schelling point and typographical attack surface for things people do care about). The conversation only ended up here after a series of derailings. At the start, I was _trying_ to say something substantive about the psychology of straight men who wish they were women.
-
_After it's been pointed out_, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female counterpart" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_.
-
At this point, the nature of the game is very clear. Yudkowsky wants to make sure he's on peaceful terms with the progressive _Zeitgeist_, subject to the constraint of not saying anything he knows to be false. Meanwhile, I want to actually make sense of what's actually going on in the world as regards sex and gender, because _I need the correct answer to decide whether or not to cut my dick off_.
-
On "his turn", he comes up with some pompous proclamation that's very obviously optimized to make the "pro-trans" faction look smart and good and make the "anti-trans" faction look dumb and bad, "in ways that exhibit generally rationalist principles."
-
On "my turn", I put in an _absurd_ amount of effort explaining in exhaustive, _exhaustive_ detail why Yudkowsky's pompous proclamation, while [not technically making any unambiguously "false" atomic statements](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly), was _substantively misleading_ as contrasted to what any serious person would say if they were actually trying to make sense of the world without worrying what progressive activists would think of them.
- -In the context of AI alignment theory, Yudkowsky has written about a "nearest unblocked strategy" phenomenon: if you directly prevent an agent from accomplishing a goal via some plan that you find undesirable, the agent will search for ways to route around that restriction, and probably find some plan that you find similarly undesirable for similar reasons. - -Suppose you developed an AI to [maximize human happiness subject to the constraint of obeying explicit orders](https://arbital.greaterwrong.com/p/nearest_unblocked#exampleproducinghappiness). It might first try administering heroin to humans. When you order it not to, it might switch to administering cocaine. When you order it to not use any of a whole list of banned happiness-producing drugs, it might switch to researching new drugs, or just _pay_ humans to take heroin, _&c._ - -It's the same thing with Yudkowsky's political-risk minimization subject to the constraint of not saying anything he knows to be false. First he comes out with ["I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) (March 2016). When you point out that [that's not true](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), then the next time he revisits the subject, he switches to ["you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning"](https://archive.is/Iy8Lq) (November 2018). When you point out that [_that's_ not true either](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong), he switches to "It is Shenanigans to try to bake your stance on how clustered things are [...] _into the pronoun system of a language and interpretation convention that you insist everybody use_" (February 2021). When you point out [that's not what's going on](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/), he switches to ... I don't know, but he's a smart guy; in the unlikely event that he sees fit to respond to this post, I'm sure he'll be able to think of _something_—but at this point, _I have no reason to care_. Talking to Yudkowsky on topics where getting the right answer would involve acknowledging facts that would make you unpopular in Berkeley is a _waste of everyone's time_; trying to inform you isn't [his bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line). - -Accusing one's interlocutor of bad faith is frowned upon for a reason. We would prefer to live in a world where we have intellectually fruitful object-level discussions under the assumption of good faith, rather than risk our fora degenerating into an acrimonious brawl of accusations and name-calling, which is unpleasant and (more importantly) doesn't make any intellectual progress. I, too, would prefer to have a real object-level discussion under the assumption of good faith. - -Accordingly, I tried the object-level good-faith argument thing _first_. I tried it for _years_. But at some point, I think I should be _allowed to notice_ the nearest-unblocked-strategy game which is _very obviously happening_ if you look at the history of what was said. 
I think there's _some_ number of years and _some_ number of thousands of words of litigating the object-level _and_ the meta level after which there's nothing left for me to do but jump up to the meta-meta level and explain, to anyone capable of hearing it, why in this case I think I've accumulated enough evidence for the assumption of good faith to have been _empirically falsified_.
-
(Obviously, if we're crossing the Rubicon of abandoning the norm of assuming good faith, it needs to be abandoned symmetrically. I _think_ I'm doing a _pretty good_ job of adhering to standards of intellectual conduct and being transparent about my motivations, but I'm definitely not perfect, and, unlike Yudkowsky, I'm not so absurdly, miscalibratedly arrogant as to claim "confidence in my own ability to independently invent everything important" (!) about my topics of interest. If Yudkowsky or anyone else thinks they _have a case_ based on my behavior that _I'm_ being culpably intellectually dishonest, they of course have my blessing and encouragement to post it for the audience to evaluate.)
-
What makes all of this especially galling is the fact that _all of my heretical opinions are literally just Yudkowsky's opinions from the 'aughts!_ My whole thing about how changing sex isn't possible with existing technology because the category encompasses so many high-dimensional details? Not original to me! I [filled in a few technical details](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard), but again, this was _in the Sequences_ as ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). My thing about how you can't define concepts any way you want because there are mathematical laws governing which category boundaries compress your anticipated experiences? Not original to me! I [filled in](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [a few technical details](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), but [_we had a whole Sequence about this._](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)
-
Seriously, you think I'm _smart enough_ to come up with all of this independently? I'm not! I ripped it all off from Yudkowsky back in the 'aughts _when he still gave a shit about telling the truth_. (Actively telling the truth, and not just technically not lying.) The things I'm hyperfocused on that he thinks are politically impossible to say, are things he _already said_, that anyone could just look up!
-
I guess the point is that the egregore doesn't have the logical or reading comprehension for that?—or rather the egregore has no reason to care about the past; if you get tagged by the mob as an Enemy, your past statements will get dug up as evidence of foul present intent, but if you're doing a good enough job of playing the part today, no one cares what you said in 2009?
-
Does ... does he expect the rest of us not to _notice_? Or does he think that "everybody knows"?
-
But I don't, think that everybody knows. And I'm not, giving up that easily. Not on an entire subculture full of people.
-
Yudkowsky [defends his behavior](https://twitter.com/ESYudkowsky/status/1356812143849394176):
-
> I think that some people model civilization as being in the middle of a great battle in which this tweet, even if true, is giving comfort to the Wrong Side, where I would not have been as willing to tweet a truth helping the Right Side. From my perspective, this battle just isn't that close to the top of my priority list. I rated nudging the cognition of the people-I-usually-respect, closer to sanity, as more important; who knows, those people might matter for AGI someday. And the Wrong Side part isn't as clear to me either.
-
[TODO: first of all, "A Rational Argument" is very explicit about "not have been as willing to Tweet a truth helping the side" meaning you've crossed the line; second of all, it's if anything more plausible that trans women will matter to AGI, as I pointed out in my email]
-
But the battle that matters—the battle with a Right Side and a Wrong Side—isn't "pro-trans" _vs._ "anti-trans". (The central tendency of the contemporary trans rights movement is firmly on the Wrong Side, but that's not the same thing as all trans people as individuals.) That's why Jessica joined our posse to try to argue with Yudkowsky in early 2019. (She wouldn't have, if my objection had been, "trans is fake; trans people Bad".) That's why Somni—one of the trans women who [infamously protested the 2019 CfAR reunion](https://www.ksro.com/2019/11/18/new-details-in-arrests-of-masked-camp-meeker-protesters/) for (among other things) CfAR allegedly discriminating against trans women—[understands what I've been saying](https://somnilogical.tumblr.com/post/189782657699/legally-blind).
-
The battle that matters—and I've been _very_ explicit about this, for years—is over this proposition eloquently stated by Scott Alexander (redacting the irrelevant object-level example):
-
> I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.
-
This is a battle between Feelings and Truth, between Politics and Truth.
-
In order to take the side of Truth, you need to be able to tell Joshua Norton that he's not actually Emperor of the United States (even if it hurts him). You need to be able to tell a prideful autodidact that the fact that he's failing quizzes in community college differential equations class is evidence that his study methods aren't doing what he thought they were (even if it hurts him). And you need to be able to say, in public, that trans women are male and trans men are female _with respect to_ a female/male "sex" concept that encompasses the many traits that aren't affected by contemporary surgical and hormonal interventions (even if it hurts someone who does not like to be tossed into a Male Bucket or a Female Bucket as it would be assigned by their birth certificate, and—yes—even if it probabilistically contributes to that person's suicide).
-
If you don't want to say those things because hurting people is wrong, then you have chosen Feelings.
-
Scott Alexander chose Feelings, but I can't really hold that against him, because Scott is [very explicit about only acting in the capacity of some guy with a blog](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/).
You can tell from his writings that he never wanted to be a religious leader; it just happened to him by accident because he writes faster than everyone else. I like Scott. Scott is great. I feel sad that such a large fraction of my interactions with him over the years have taken such an adversarial tone.
-
Eliezer Yudkowsky ... did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he knows to be unambiguously false.
-
-
[TODO— finish Yudkowsky trying to be a religious leader
Eliezer Yudkowsky is _absolutely_ trying to be a religious leader.
-
If Eliezer Yudkowsky can't _unambiguously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_.
-
]
-
[TODO section existential stakes, cooperation]
-
> [_Perhaps_, replied the cold logic](https://www.yudkowsky.net/other/fiction/the-sword-of-good). _If the world were at stake._
>
> _Perhaps_, echoed the other part of himself, _but that is not what was actually happening._
-
[TODO: social justice and defying threats
-
at least Sabbatai Zevi had an excuse: his choices were to convert to Islam or be impaled https://en.wikipedia.org/wiki/Sabbatai_Zevi#Conversion_to_Islam
]
-
-
I like to imagine that they have a saying out of dath ilan: once is happenstance; twice is coincidence; _three times is hostile optimization_.
-
I could forgive him for taking a shit on d4 of my chessboard (["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228)). I could even forgive him for subsequently taking a shit on e4 of my chessboard (["you're not standing in defense of truth if you insist on a word [...]"](https://twitter.com/ESYudkowsky/status/1067198993485058048)) as long as he wiped most of the shit off afterwards (["you are being the bad guy if you try to shut down that conversation by saying that 'I can define the word "woman" any way I want'"](https://www.facebook.com/yudkowsky/posts/10158853851009228)), even though, really, I would have expected someone so smart to take a hint after the incident on d4.
-
But if he's _then_ going to take a shit on c3 of my chessboard (["In terms of important things? Those would be all the things I've read [...] describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate"](https://www.facebook.com/yudkowsky/posts/10159421750419228)),
-
-
-
The turd on c3 is a pretty big likelihood ratio!
-
-
-
-
[TODO: the dolphin war, our thoughts about dolphins are literally downstream from Scott's political incentives in 2014; this is a sign that we're a cult
-
https://twitter.com/ESYudkowsky/status/1404700330927923206
> That is: there's a story here where not just particular people hounding Zack as a responsive target, but a whole larger group, are engaged in a dark conspiracy that is all about doing damage on issues legible to Zack and important to Zack. This is merely implausible on priors.
-
I mean, I wouldn't _call_ it a "dark conspiracy" exactly, but if the people with intellectual authority are computing what to say on the principle of "it is sometimes personally prudent and not community-harmful to post [their] agreement with Stalin", and Stalin cares a lot about doing damage on issues legible and important to me, then, pragmatically, I think that has _similar effects_ on the state of our collective knowledge as a dark conspiracy, even if the mechanism of coordination is each individual being separately terrified of Stalin, rather than them meeting with dark robes to plot under a full moon.
-
[when you consider the contrast between how Yudkowsky talks about sex differences, and how he panders to trans people—that really does look like he's participating in a conspiracy to do damage on issues legible to me; if there's no conspiracy, how else am I supposed to explain the difference?]
-
]
-
[TODO: sneering at post-rats; David Xu interprets criticism of Eliezer as me going "full post-rat"?! 6 September 2021
-
> Also: speaking as someone who's read and enjoyed your LW content, I do hope this isn't a sign that you're going full post-rat. It was bad enough when QC did it (though to his credit QC still has pretty decent Twitter takes, unlike most post-rats).
-
https://twitter.com/davidxu90/status/1435106339550740482
]
-
-
David Xu writes (with Yudkowsky ["endors[ing] everything [Xu] just said"](https://twitter.com/ESYudkowsky/status/1436025983522381827)):
-
> I'm curious what might count for you as a crux about this; candidate cruxes I could imagine include: whether some categories facilitate inferences that _do_, on the whole, cause more harm than benefit, and if so, whether it is "rational" to rule that such inferences should be avoided when possible, and if so, whether the best way to disallow a large set of potential inferences is [to] proscribe the use of the categories that facilitate them—and if _not_, whether proscribing the use of a category in _public communication_ constitutes "proscribing" it more generally, in a way that interferes with one's ability to perform "rational" thinking in the privacy of one's own mind.
>
> That's four possible (serial) cruxes I listed, one corresponding to each "whether".
-
I reply: on the first and second cruxes, concerning whether some categories facilitate inferences that cause more harm than benefit on the whole and whether they should be avoided when possible, I ask: harm _to whom?_ Not all agents have the same utility function! If some people are harmed by other people making certain probabilistic inferences, then it would seem that there's a _conflict_ between the people harmed (who prefer that such inferences be avoided if possible), and people who want to make and share probabilistic inferences about reality (who think that that which can be destroyed by the truth, should be).
-
On the third crux, whether the best way to disallow a large set of potential inferences is to proscribe the use of the categories that facilitate them: well, it's hard to be sure whether it's the _best_ way: no doubt a more powerful intelligence could search over a larger space of possible strategies than me. But yeah, if your goal is to _prevent people from noticing facts about reality_, then preventing them from using words that refer to those facts seems like a pretty effective way to do it!
-
On the fourth crux, whether proscribing the use of a category in public communication constitutes "proscribing" in a way that interferes with one's ability to think in the privacy of one's own mind: I think this is mostly true for humans. We're social animals. To the extent that we can do higher-grade cognition at all, we do it using our language faculties that are designed for communicating with others. How are you supposed to think about things that you don't have words for?
-
Xu continues:
-
> I could have included a fifth and final crux about whether, even _if_ The Thing In Question interfered with rational thinking, that might be worth it; but this I suspect you would not concede, and (being a rationalist) it's not something I'm willing to concede myself, so it's not a crux in a meaningful sense between us (or any two self-proclaimed "rationalists").
>
> My sense is that you have (thus far, in the parts of the public discussion I've had the opportunity to witness) been behaving as though the _one and only crux in play_—that is, the True Source of Disagreement—has been the fifth crux, the thing I refused to include with the others of its kind. Your accusations against the caliphate _only make sense_ if you believe the dividing line between your behavior and theirs is caused by a disagreement as to whether "rational" thinking is "worth it"; as opposed to, say, what kind of prescriptions "rational" thinking entails, and which (if any) of those prescriptions are violated by using a notion of gender (in public, where you do not know in advance who will receive your communications) that does not cause massive psychological damage to some subset of people.
>
> Perhaps it is your argument that all four of the initial cruxes I listed are false; but even if you believe that, it should be within your set of ponderable hypotheses that people might disagree with you about that, and that they might perceive the disagreement to be _about_ that, rather than (say) about whether subscribing to the Blue Tribe view of gender makes them a Bad Rationalist, but That's Okay because it's Politically Convenient.
>
> This is the sense in which I suspect you are coming across as failing to properly Other-model.
-
After everything I've been through over the past six years, I'm inclined to think it's not a "disagreement" at all.
-
It's a _conflict_. I think what's actually at issue is that, at least in this domain, I want people to tell the truth, and the Caliphate wants people to not tell the truth. This isn't a disagreement about rationality, because telling the truth _isn't_ rational _if you don't want people to know things_.
-
At this point, I imagine defenders of the Caliphate are shaking their heads in disappointment at how I'm doubling down on refusing to Other-model. But—_am_ I? Isn't this just a re-statement of Xu's first proposed crux, except reframed as a "values difference" rather than a "disagreement"?
-
Is the problem that my use of the phrase "tell the truth" (which has positive valence in our culture) functions to sneak in normative connotations favoring "my side"?
-
Fine. Objection sustained. I'm happy to use Xu's language: I think what's actually at issue is that, at least in this domain, I want to facilitate people making inferences (full stop), and the Caliphate wants to _not_ facilitate people making inferences that, on the whole, cause more harm than benefit.
This isn't a disagreement about rationality, because facilitating inferences _isn't_ rational _if you don't want people to make inferences_ (for example, because they cause more harm than benefit).
-
Better? Perhaps, to some 2022-era rats and EAs, this formulation makes my position look obviously in the wrong: I'm saying that I'm fine with my inferences _causing more harm than benefit_ (!). Isn't that monstrous of me? Why would someone do that?
-
One of the better explanations of this that I know of was (again, as usual) authored by Yudkowsky in 2007, in a post titled ["Doublethink (Choosing to be Biased)"](https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased).
-
The Yudkowsky of 2007 starts by quoting a passage from George Orwell's _1984_, in which O'Brien (a loyal member of the ruling Party in the totalitarian state depicted in the novel) burns a photograph of Jones, Aaronson, and Rutherford (former Party leaders whose existence has been censored from the historical record). Immediately after burning the photograph, O'Brien denies that it ever existed.
-
The Yudkowsky of 2007 continues—it's again worth quoting at length—
-
> What if self-deception helps us be happy? What if just running out and overcoming bias will make us—gasp!—_unhappy?_ Surely, _true_ wisdom would be _second-order_ rationality, choosing when to be rational. That way you can decide which cognitive biases should govern you, to maximize your happiness.
>
> Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.
>
> [...]
>
> For second-order rationality to be genuinely _rational_, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality. If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting. I don't mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.
>
> You can't know the consequences of being biased, until you have already debiased yourself. And then it is too late for self-deception.
>
> The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is not second-order rationality. It is willful stupidity.
>
> [...]
>
> One of chief pieces of advice I give to aspiring rationalists is "Don't try to be clever." And, "Listen to those quiet, nagging doubts." If you don't know, you don't know _what_ you don't know, you don't know how _much_ you don't know, and you don't know how much you _needed_ to know.
>
> There is no second-order rationality. There is only a blind leap into what may or may not be a flaming lava pit. Once you _know_, it will be too late for blindness.
-
Looking back on this from 2022, the only criticism I have is that Yudkowsky was being too optimistic when he wrote that he "doubt[ed] such a lunatic dislocation in the mind could really happen." In some ways, people's actual behavior is _worse_ than what Orwell depicted. The Party of Orwell's _1984_ covers its tracks: O'Brien takes care to burn the photograph _before_ denying memory of it, because it would be _too_ absurd for him to act like the photo had never existed while it was still right there in front of him.
-
In contrast, Yudkowsky's Caliphate of the current year _doesn't even bother covering its tracks_. Turns out, it doesn't need to! People just don't remember things!
-
The [flexibility of natural language is a _huge_ help here](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). If the caliph were to _directly_ contradict himself in simple, unambiguous language—to go from "Oceania is not at war with Eastasia" to "Oceania is at war with Eastasia" without any acknowledgement that anything had changed—_then_ too many people might notice that those two sentences are the same except that one has the word _not_ in it. What's a caliph to do, if he wants to declare war on Eastasia without acknowledging or taking responsibility for the decision to do so?
-
The solution is simple: just—use more words! Then if someone tries to argue that you've _effectively_ contradicted yourself, accuse them of being uncharitable and failing to model the Other. You can't lose! Anything can be consistent with anything if you apply a sufficiently charitable reading; whether Oceania is at war with Eastasia depends on how you choose to draw the category boundaries of "at war."
-
Thus, O'Brien should envy Yudkowsky: burning the photograph turns out to be unnecessary! ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) is _still up_ and not retracted, but that didn't stop the Yudkowsky of 2016 from pivoting to ["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) when that became a politically favorable thing to say. I claim that these posts _effectively_ contradict each other. The former explains _not only_ that men who fantasize about being women are out of luck given foreseeable technology, but _also_ that their desires may not even be coherent (!), whereas the latter claims that men who wish they were women may, in fact, _already_ be women in some unspecified psychological sense.
-
_Technically_, these don't _strictly_ contradict each other: I can't point to a pair of sentences, one from each post, that are the same except that one includes the word _not_. (And even if there were such sentences, I wouldn't be able to prove that the other words were being used in the same sense in both sentences.) One _could_ try to argue that "Changing Emotions" is addressing cis men with a weird sex-change fantasy, whereas the "ones with penises are actually women" claim was about trans women, which are a different thing.
-
_Realistically_ ... no. These two posts _can't_ both be right. In itself, this isn't a problem: people change their minds sometimes, which is great! But when people _actually_ change their minds (as opposed to merely changing what they say in public for political reasons), you expect them to be able to _acknowledge_ the change, and hopefully explain what new evidence or reasoning brought them around. If they can't even _acknowledge the change_, that's pretty Orwellian, like O'Brien trying to claim that the photograph is of different men who just coincidentally happen to look like Jones, Aaronson, and Rutherford.
-
And if a little bit of Orwellianism on specific, narrow, highly-charged topics might be forgiven—because everyone else in your Society is doing it, and you would be punished for not playing along, an [inadequate equilibrium](https://equilibriabook.com/) that no one actor has the power to defy—might we not expect the father of the "rationalists" to stand his ground on the core theses of his ideology, like whether telling the truth is good?
-
I guess not!
["Doublethink (Choosing to be Biased)"](https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased) is _still up_ and not retracted, but that didn't stop Yudkowsky from [endorsing everything Xu said](https://twitter.com/ESYudkowsky/status/1436025983522381827) about "whether some categories facilitate inferences that _do_, on the whole, cause more harm than benefit, and if so, whether it is 'rational' to rule that such inferences should be avoided when possible" being different cruxes than "whether 'rational' thinking is 'worth it'".
-
I don't doubt Yudkowsky could come up with some clever casuistry why, _technically_, the text he wrote in 2007 and the text he endorsed in 2021 don't contradict each other. But _realistically_ ... again, no.
-
[TODO: elaborate on how 2007!Yudkowsky and 2021!Xu are saying the opposite things if you just take a plain-language reading and consider, not whether individual sentences can be interpreted as "true", but what kind of _optimization_ the text is doing to the behavior of receptive readers]
-
On the offhand chance that Eliezer Yudkowsky happens to be reading this—if someone _he_ trusts (MIRI employees?) genuinely thinks it would be good for the lightcone to bring this paragraph to his attention—he should know that if he _wanted_ to win back _some_ of the trust and respect he's lost from me and everyone I can influence—not _all_ of it, but _some_ of it[^some-of-it]—I think it would be really easy. All he would have to do is come clean about the things he's _already_ misled people about.
-
[^some-of-it]: Coming clean _after_ someone writes a 70,000-word memoir explaining how dishonest you've been engenders less trust than coming clean spontaneously of your own accord.
-
I don't, actually, expect people to spontaneously blurt out everything they believe to be true, that Stalin would find offensive. "No comment" would be fine. Even selective argumentation that's _clearly labeled as such_ would be fine. (There's no shame in being an honest specialist who says, "I've mostly thought about these issues through the lens of ideology _X_, and therefore can't claim to be comprehensive; if you want other perspectives, you'll have to read other authors and think it through for yourself.")
-
What's _not_ fine is selective argumentation while claiming "confidence in [your] own ability to independently invent everything important that would be on the other side of the filter and check it [yourself] before speaking" when you _very obviously have done no such thing_. That's _not_ "no comment"! Having _already_ chosen to comment, he can't reasonably expect any self-respecting rationalist to take his "epistemic hero" bluster seriously if he's not going to reply to the obvious objections that have been made with standing and warrant. To wit—
-
 * Yudkowsky is _on the record_ [claiming that](https://www.facebook.com/yudkowsky/posts/10154078468809228) "for people roughly similar to the Bay Area / European mix", he is "over 50% probability at this point that at least 20% of the ones with penises are actually women". What ... does that mean? What is the _truth condition_ of the word 'woman' in that sentence? This can't be the claim that 20% of males would benefit from a gender transition, and in that sense _become_ "transsexual women"; the claim stated in the post is that members of this group are _already_ "actually women", "female-minds-in-male-bodies".
How does Yudkowsky reconcile this claim with the preponderance of male-typical rather than female-typical behavior in this group (_e.g._, in gynephilic sexual orientation, or [in vocational interests](/2020/Nov/survey-data-on-cis-and-trans-women-among-haskell-programmers/))? On the other hand, if Yudkowsky changed his mind and no longer believes that 20% of Bay Area males of European descent have female brains, can he state that for the public record? _Reply!_
-
 * Yudkowsky is _on the record_ [claiming that](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421986539228&reply_comment_id=10159423713134228) he "do[es] not know what it feels like from the inside to feel like a pronoun is attached to something in your head much more firmly than 'doesn't look like an Oliver' is attached to something in your head." As I explained in "Challenges to Yudkowsky's Pronoun Reform Proposal", [quoting examples from Yudkowsky's published writing in which he treated sex and pronouns as synonymous just as one would expect a native speaker of American English born in 1979 to do](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/#look-like-an-oliver), this self-report is not plausible. The claim may not have been a "lie" _in the sense_ of Yudkowsky consciously harboring deliberative intent to deceive at the time he typed that sentence, but it _is_ a "lie" in the sense that the claim is _false_ and Yudkowsky _knows_ it's false (although its falsehood may not have been salient in the moment of typing the sentence). If Yudkowsky expects people to believe that he never lies, perhaps he could correct this accidental lie after it's been pointed out? _Reply!_
-
 * In a comment on his February 2021 Facebook post on pronoun reform, Yudkowsky [claims that](https://www.facebook.com/yudkowsky/posts/pfbid0331sBqRLBrDBM2Se5sf94JurGRTCjhbmrYnKcR4zHSSgghFALLKCdsG6aFbVF9dy9l?comment_id=10159421833274228&reply_comment_id=10159421901809228) "in a half-Kolmogorov-Option environment where [...] you can get away with attaching explicit disclaimers like this one, it is sometimes personally prudent and not community-harmful to post your agreement with Stalin about things you actually agree with Stalin about, in ways that exhibit generally rationalist principles, especially because people do _know_ they're living in a half-Stalinist environment". Some interesting potential counterevidence to this "not community-harmful" claim comes in the form of [a highly-upvoted (110 karma at press time) comment by _Less Wrong_ administrator Oliver Habryka](https://www.lesswrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal?commentId=he8dztSuBBuxNRMSY) on the _Less Wrong_ mirror of my rebuttal. Habryka writes:
-
> [...] basically everything in this post strikes me as "obviously true" and I had a very similar reaction to what the OP says now, when I first encountered the Eliezer Facebook post that this post is responding to.
>
> And I do think that response mattered for my relationship to the rationality community.
I did really feel like at the time that Eliezer was trying to make my map of the world worse, and it shifted my epistemic risk assessment of being part of the community from "I feel pretty confident in trusting my community leadership to maintain epistemic coherence in the presence of adversarial epistemic forces" to "well, I sure have to at least do a lot of straussian reading if I want to understand what people actually believe, and should expect that depending on the circumstances community leaders might make up sophisticated stories for why pretty obviously true things are false in order to not have to deal with complicated political issues".
>
> I do think that was the right update to make, and was overdetermined for many different reasons, though it still deeply saddens me.
-
Again, that's the administrator of Yudkowsky's _own website_ saying that he's deeply saddened that he now expects Yudkowsky to _make up sophisticated stories for why pretty obviously true things are false_ (!!). Is that ... _not_ a form of harm to the community? If that's not community-harmful in Yudkowsky's view, then what would be an example of something that _would_ be? _Reply, motherfucker!_
-
... but I'm not, holding my breath. If Yudkowsky _wants_ to reply—if he _wants_ to try to win back some of the trust and respect he's lost from me—he's totally _welcome_ to. (_I_ don't censor my comment sections of people whom it "looks like it would be unhedonic to spend time interacting with".)
-
-
[TODO: I've given up talking to the guy (nearest unblocked strategy sniping in Eliezerfic doesn't count), my last email, giving up on hero-worship I don't want to waste any more of his time. I owe him that much.]
-
[TODO: https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe ]
-
[TODO: the Death With Dignity era
-
"Death With Dignity" isn't really an update; he used to refuse to give a probability, and now he says the probability is ~0
-
/2017/Jan/from-what-ive-tasted-of-desire/
-
]
-
[TODO: regrets and wasted time]
diff --git a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
new file mode 100644
index 0000000..ca2f076
--- /dev/null
+++ b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
@@ -0,0 +1,536 @@
+Title: Agreeing With Stalin in Ways that Exhibit Generally Rationalist Principles
+Author: Zack M. Davis
+Date: 2023-01-01 11:00
+Category: commentary
+Tags: autogynephilia, bullet-biting, cathartic, Eliezer Yudkowsky, Scott Alexander, epistemic horror, my robot cult, personal, sex differences, two-type taxonomy, whale metaphors
+Status: draft
+
+> It was not the sight of Mitchum that made him sit still in horror. It was the realization that there was no one he could call to expose this thing and stop it—no superior anywhere on the line, from Colorado to Omaha to New York. They were in on it, all of them, they were doing the same, they had given Mitchum the lead and the method. It was Dave Mitchum who now belonged on this railroad and he, Bill Brent, who did not.
+>
+> _Atlas Shrugged_ by Ayn Rand
+
+[TODO: Sasha disaster, breakup with Vassar group]
+
+[TODO: "Unnatural Categories Are Optimized for Deception"
+
+Abram was right
+
+the fact that it didn't means that not tracking it can be an effective AI design!
Just because evolution takes shortcuts that human engineers wouldn't doesn't mean shortcuts are "wrong" (instead, there are laws governing which kinds of shortcuts work).
+
+Embedded agency means that the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'my own' code." In that light, it makes sense to regard "have accurate beliefs" as merely a convergent instrumental subgoal, rather than what rationality is about
+
+somehow accuracy seems more fundamental than power or resources ... could that be formalized?
+]
+
+And really, that _should_ have been the end of the story. At the trifling cost of two years of my life, we finally got a clarification from Yudkowsky that you can't define the word _woman_ any way you like. I didn't think I was entitled to anything more than that. I was satisfied. I still published "Unnatural Categories Are Optimized for Deception" in January 2021, but if I hadn't been further provoked, I wouldn't have had occasion to continue waging the robot-cult religious civil war.
+
+[TODO: NYT affair and Brennan link
+https://astralcodexten.substack.com/p/statement-on-new-york-times-article
+https://reddragdiva.tumblr.com/post/643403673004851200/reddragdiva-topher-brennan-ive-decided-to-say
+https://www.facebook.com/yudkowsky/posts/10159408250519228
+
+Scott Aaronson on the Times's hit piece of Scott Alexander—
+https://scottaaronson.blog/?p=5310
+> The trouble with the NYT piece is not that it makes any false statements, but just that it constantly insinuates nefarious beliefs and motives, via strategic word choices and omission of relevant facts that change the emotional coloration of the facts that it does present.
+
+]
+
+... except that Yudkowsky reopened the conversation in February 2021, with [a new Facebook post](https://www.facebook.com/yudkowsky/posts/10159421750419228) explaining the origins of his intuitions about pronoun conventions and concluding that, "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he", with a default for those-who-haven't-asked that goes by gamete size' and to say that this just _is_ the normative definition. Because it is _logically rude_, not just socially rude, to try to bake any other more complicated and controversial definition _into the very language protocol we are using to communicate_."
+
+(_Why?_ Why reopen the conversation, from the perspective of his chessboard? Wouldn't it be easier to just stop digging?)
+
+I explained what's wrong with Yudkowsky's new arguments at the length of 12,000 words in March 2022's ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/), but I find myself still having more left to analyze. The February 2021 post on pronouns is a _fascinating_ document, in its own way—a penetrating case study on the effects of politics on a formerly great mind.
+
+Yudkowsky begins by setting the context of "[h]aving received a bit of private pushback" on his willingness to declare that asking someone to use a different pronoun is not lying.
+
+But ... the _reason_ he got a bit ("a bit") of private pushback was _because_ the original "hill of meaning" thread was so blatantly optimized to intimidate and delegitimize people who want to use language to reason about biological sex.
The pushback wasn't about using trans people's preferred pronouns (I do that, too), or about not wanting pronouns to imply sex (sounds fine, if we were in the position of defining a conlang from scratch); the _problem_ is using an argument that's ostensibly about pronouns to sneak in an implicature ("Who competes in sports segregated around an Aristotelian binary is a policy question [ ] that I personally find very humorous") that it's dumb and wrong to want to talk about the sense in which trans women are male and trans men are female, as a _fact about reality_ that continues to be true even if it hurts someone's feelings, and even if policy decisions made on the basis of that fact are not themselves a fact (as if anyone had doubted this).
+
+In that context, it's revealing that in this post attempting to explain why the original thread seemed like a reasonable thing to say, Yudkowsky ... doubles down on going out of his way to avoid acknowledging the reality of biological sex. He learned nothing! We're told that the default pronoun for those who haven't asked goes by "gamete size."
+
+But ... I've never _measured_ how big someone's gametes are, have you? We can only _infer_ whether strangers' bodies are configured to produce small or large gametes by observing [a variety of correlated characteristics](https://en.wikipedia.org/wiki/Secondary_sex_characteristic). Furthermore, for trans people who don't pass but are visibly trying to, one presumes that we're supposed to use the pronouns corresponding to their gender presentation, not their natal sex.
+
+Thus, Yudkowsky's "default for those-who-haven't-asked that goes by gamete size" clause _can't be taken literally_. The only way I can make sense of it is to interpret it as a way to point at the prevailing reality that people are good at noticing what sex other people are, but that we want to be kind to people who are trying to appear to be the other sex, without having to admit to it.
+
+One could argue that this is hostile nitpicking on my part: that the use of "gamete size" as a metonym for sex here is either an attempt to provide an unambiguous definition (because if you said _female_ or _male sex_, someone could ask what you meant by that), or that it's at worst a clunky choice of words, not an intellectually substantive decision that can be usefully critiqued.
+
+But the claim that Yudkowsky is only trying to provide an unambiguous definition isn't consistent with the text's claim that "[i]t would still be logically rude to demand that other people use only your language system and interpretation convention in order to communicate, in advance of them having agreed with you about the clustering thing". And the post also seems to suggest that the motive isn't to avoid ambiguity. Yudkowsky writes:
+
+> In terms of important things? Those would be all the things I've read—from friends, from strangers on the Internet, above all from human beings who are people—describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate, or perhaps at all.
+>
+> And I'm not happy that the very language I use, would try to force me to take a position on that; not a complicated nuanced position, but a binarized position, _simply in order to talk grammatically about people at all_.
+
+What does the "tossed into a bucket" metaphor refer to, though?
I can think of many different things that might be summarized that way, and my sympathy for the one who does not like to be tossed into a bucket depends a lot on exactly what real-world situation is being mapped to the bucket.
+
+If we're talking about overt _gender role enforcement attempts_—things like, "You're a girl, therefore you need to learn to keep house for your future husband", or "You're a man, therefore you need to toughen up"—then indeed, I strongly support people who don't want to be tossed into that kind of bucket.
+
+(There are [historical reasons for the buckets to exist](/2020/Jan/book-review-the-origins-of-unfairness/), but I'm eager to bet on modern Society being rich enough and smart enough to either forgo the buckets, or at least let people opt out of the default buckets, without causing too much trouble.)
+
+But importantly, my support for people not wanting to be tossed into gender role buckets is predicated on their reasons for not wanting that _having genuine merit_—things like "The fact that I'm a juvenile female human doesn't mean I'll have a husband; I'm actually planning to become a nun", or "The sex difference in Big Five Neuroticism is only _d_ ≈ 0.4; your expectation that I be able to toughen up is not reasonable given the information you have about me in particular, even if most adult human males are tougher than me". I _don't_ think people have a _general_ right to prevent others from using sex categories to make inferences or decisions about them, _because that would be crazy_. If a doctor were to recommend I get a prostate cancer screening on account of my being male and therefore at risk for prostate cancer, it would be _bonkers_ for me to reply that I don't like being tossed into a Male Bucket like that.
+
+While piously appealing to the feelings of people describing reasons they do not want to be tossed into a Male Bucket or a Female Bucket, Yudkowsky does not seem to be distinguishing between reasons that have merit, and reasons that do not have merit. The post continues (bolding mine):
+
+> In a wide variety of cases, sure, ["he" and "she"] can clearly communicate the unambiguous sex and gender of something that has an unambiguous sex and gender, much as a different language might have pronouns that sometimes clearly communicated hair color to the extent that hair color often fell into unambiguous clusters.
+>
+> But if somebody's hair color is halfway between two central points? If their civilization has developed stereotypes about hair color they're not comfortable with, such that they feel that the pronoun corresponding to their outward hair color is something they're not comfortable with because they don't fit key aspects of the rest of the stereotype and they feel strongly about that? If they have dyed their hair because of that, or **plan to get hair surgery, or would get hair surgery if it were safer but for now are afraid to do so?** Then it's stupid to try to force people to take complicated positions about those social topics _before they are allowed to utter grammatical sentences_.
+
+So, I agree that a language convention in which pronouns map to hair color doesn't seem great, and that the people in this world should probably coordinate on switching to a better convention, if they can figure out how.
+
+But taking as given the existence of a convention in which pronouns refer to hair color, a demand to be referred to as having a hair color _that one does not in fact have_ seems pretty outrageous to me!
+ +It makes sense to object to the convention forcing a binary choice in the "halfway between two central points" case. That's an example of _genuine_ nuance brought on by a _genuine_ challenge to a system that _falsely_ assumes discrete hair colors. + +But ... "plan to get hair surgery"? "Would get hair surgery if it were safer but for now are afraid to do so"? In what sense do these cases present a challenge to the discrete system and therefore call for complication and nuance? There's nothing ambiguous about these cases: if you haven't, in fact, changed your hair color, then your hair is, in fact, its original color. The decision to get hair surgery does not _propagate backwards in time_. The decision to get hair surgery cannot be _imported from a counterfactual universe in which it is safer_. People who, today, do not have the hair color that they would prefer, are, today, going to have to deal with that fact _as a fact_. + +Is the idea that we want to use the same pronouns for the same person over time, so that if we know someone is going to get hair surgery—they have an appointment with the hair surgeon at this-and-such date—we can go ahead and switch their pronouns in advance? Okay, I can buy that. + +But extending that to the "would get hair surgery if it were safer" case is _absurd_. No one treats _conditional plans assuming speculative future advances in medical technology_ the same as actual plans. I don't think this case calls for any complicated nuanced position, and I don't see why Eliezer Yudkowsky would suggest that it would, unless the real motive for insisting on complication and nuance is as an obfuscation tactic— + +Unless, at some level, Eliezer Yudkowsky doesn't expect his followers to deal with facts? + +Maybe the problem is easier to see in the context of a non-gender example. [My previous hopeless ideological war—before this one—was against the conflation of _schooling_ and _education_](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/): I _hated_ being tossed into the Student Bucket, as it would be assigned by my school course transcript, or perhaps at all. + +I sometimes describe myself as "gender dysphoric", because our culture doesn't have better widely-understood vocabulary for my beautiful pure sacred self-identity thing, but if we're talking about suffering and emotional distress, my "student dysphoria" was _vastly_ worse than any "gender dysphoria" I've ever felt. + +But crucially, my tirades against the Student Bucket described reasons not just that _I didn't like it_, but reasons that the bucket was _actually wrong on the empirical merits_: people can and do learn important things by studying and practicing out of their own curiosity and ambition; the system was _actually in the wrong_ for assuming that nothing you do matters unless you do it on the command of a designated "teacher" while enrolled in a designated "course". + +And _because_ my war footing was founded on the empirical merits, I knew that I had to _update_ to the extent that the empirical merits showed that I was in the wrong. In 2010, I took a differential equations class "for fun" at the local community college, expecting to do well and thereby prove that my previous couple years of math self-study had been the equal of any schoolstudent's. + +In fact, I did very poorly and scraped by with a _C_. (Subjectively, I felt like I "understood the concepts", and kept getting surprised when that understanding somehow didn't convert into passing quiz scores.) That hurt. That hurt a lot. 
+ +_It was supposed to hurt_. One could imagine a Jane Austen character in this situation doubling down on his antagonism to everything school-related, in order to protect himself from being hurt—to protest that the teacher hated him, that the quizzes were unfair, that the answer key must have had a printing error—in short, that he had been right in every detail all along, and that any suggestion otherwise was credentialist propaganda. + +I knew better than to behave like that—and to the extent that I was tempted, I retained my ability to notice and snap out of it. My failure _didn't_ mean I had been wrong about everything, that I should humbly resign myself to the Student Bucket forever and never dare to question it again—but it _did_ mean that I had been wrong about _something_. I could [update myself incrementally](https://www.lesswrong.com/posts/627DZcvme7nLDrbZu/update-yourself-incrementally)—but I _did_ need to update. (Probably, that "math" encompasses different subskills, and that my glorious self-study had unevenly trained some skills and not others: there was nothing contradictory about my [successfully generalizing one of the methods in the textbook to arbitrary numbers of variables](https://math.stackexchange.com/questions/15143/does-the-method-for-solving-exact-des-generalize-like-this), while _also_ [struggling with the class's assigned problem sets](https://math.stackexchange.com/questions/7984/automatizing-computational-skills).) + +Someone who uncritically validated my not liking to be tossed into the Student Bucket, instead of assessing my _reasons_ for not liking to be tossed into the Bucket and whether those reasons had merit, would be hurting me, not helping me—because in order to navigate the real world, I need a map that reflects the territory, rather than my narcissistic fantasies. I'm a better person for straightforwardly facing the shame of getting a _C_ in community college differential equations, rather than trying to deny it or run away from it or claim that it didn't mean anything. Part of updating myself incrementally was that I would get _other_ chances to prove that my autodidacticism _could_ match the standard set by schools. (My professional and open-source programming career obviously does not owe itself to the two Java courses I took at community college. When I audited honors analysis at UC Berkeley "for fun" in 2017, I did fine on the midterm. When applying for a new dayjob in 2018, the interviewer, noting my lack of a degree, said he was going to give a version of the interview without a computer science theory question. I insisted on being given the "college" version of the interview, solved a dynamic programming problem, and got the job. And so on.) + +If you can see why uncritically affirming people's current self-image isn't the right solution to "student dysphoria", it _should_ be obvious why the same is true of gender dysphoria. There's a very general underlying principle, that it matters whether someone's current self-image is actually true. + +In an article titled ["Actually, I Was Just Crazy the Whole Time"](https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole), FtMtF detransitioner Michelle Alleva contrasts her beliefs at the time of deciding to transition, with her current beliefs. While transitioning, she accounted for many pieces of evidence about herself ("dislike attention as a female", "obsessive thinking about gender", "didn't fit in with the girls", _&c_.) in terms of the theory "It's because I'm trans." 
But now, Alleva writes, she thinks she has a variety of better explanations that, all together, cover everything on the original list: "It's because I'm autistic", "It's because I have unresolved trauma", "It's because women are often treated poorly" ... including "That wasn't entirely true" (!!). + +This is a _rationality_ skill. Alleva had a theory about herself, and then she _revised her theory upon further consideration of the evidence_. Beliefs about one's self aren't special and can—must—be updated using the _same_ methods that you would use to reason about anything else—[just as a recursively self-improving AI would reason the same about transistors "inside" the AI and transistors in "the environment."](https://www.lesswrong.com/posts/TynBiYt6zg42StRbb/my-kind-of-reflection) + +(Note, I'm specifically praising the _form_ of the inference, not necessarily the conclusion to detransition. If someone else in different circumstances weighed up the evidence about _them_-self, and concluded that they _are_ trans in some _specific_ objective sense on the empirical merits, that would _also_ be exhibiting the skill. For extremely sex-role-nonconforming same-natal-sex-attracted transsexuals, you can at least see why the "born in the wrong body" story makes some sense as a handwavy [first approximation](/2022/Jul/the-two-type-taxonomy-is-a-useful-approximation-for-a-more-detailed-causal-model/). It's just that for males like me, and separately for females like Michelle Alleva, the story doesn't add up.) + +This also isn't a particularly _advanced_ rationality skill. This is very basic—something novices should grasp during their early steps along the Way. + +Back in 'aught-nine, in the early days of _Less Wrong_, when I still hadn't grown out of [my teenage religion of psychological sex differences denialism](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism), there was an exchange in the comment section between me and Yudkowsky that still sticks with me. Yudkowsky had claimed that he had ["never known a man with a true female side, and [...] never known a woman with a true male side, either as authors or in real life."](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/K8YXbJEhyDwSusoY2) Offended at our leader's sexism, I passive-aggressively [asked him to elaborate](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way?commentId=AEZaakdcqySmKMJYj), and as part of [his response](https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/W4TAp4LuW3Ev6QWSF), he mentioned that he "sometimes wish[ed] that certain women would appreciate that being a man is at least as complicated and hard to grasp and a lifetime's work to integrate, as the corresponding fact of feminity [_sic_]." + +[I replied](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/7ZwECTPFTLBpytj7b) (bolding added): + +> I sometimes wish that certain men would appreciate that not all men are like them—**or at least, that not all men _want_ to be like them—that the fact of masculinity is [not _necessarily_ something to integrate](https://www.lesswrong.com/posts/vjmw8tW6wZAtNJMKo/which-parts-are-me).** + +_I knew_. Even then, _I knew_ I had to qualify my not liking to be tossed into a Male Bucket. 
I could object to Yudkowsky speaking as if men were a collective with shared normative ideals ("a lifetime's work to integrate"), but I couldn't claim to somehow not be male, or _even_ that people couldn't make probabilistic predictions about me given the fact that I'm male ("the fact of masculinity"), _because that would be crazy_. The culture of early _Less Wrong_ wouldn't have let me get away with that. + +It would seem that in the current year, that culture is dead—or at least, if it does have any remaining practitioners, they do not include Eliezer Yudkowsky. + +At this point, some people would argue that I'm being too uncharitable in harping on the "not liking to be tossed into a [...] Bucket" paragraph. The same post does _also_ explicitly say that "[i]t's not that no truth-bearing propositions about these issues can possibly exist." I _agree_ that there are some interpretations of "not lik[ing] to be tossed into a Male Bucket or Female Bucket" that make sense, even though biological sex denialism does not make sense. Given that the author is Eliezer Yudkowsky, should I not give him the benefit of the doubt and assume that he "really meant" to communicate the reading that does make sense, rather than the one that doesn't make sense? + +I reply: _given that the author is Eliezer Yudkowsky_, no, obviously not. I have been ["trained in a theory of social deception that says that people can arrange reasons, excuses, for anything"](https://www.glowfic.com/replies/1820866#reply-1820866), such that it's informative ["to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited."](http://www.hpmor.com/chapter/47) Yudkowsky is just _too talented of a writer_ for me to excuse his words as an accidental artifact of unclear writing. Where the text is ambiguous about whether biological sex is a real thing that people should be able to talk about, I think it's _deliberately_ ambiguous. When smart people act dumb, it's often wise to conjecture that their behavior represents [_optimized_ stupidity](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie)—apparent "stupidity" that achieves a goal through some other channel than their words straightforwardly reflecting the truth. Someone who was _actually_ stupid wouldn't be able to generate text with a specific balance of insight and selective stupidity fine-tuned to reach a gender-politically convenient conclusion without explicitly invoking any controversial gender-political reasoning. I think the point of the post is to pander to the biological sex denialists in his robot cult, without technically saying anything unambiguously false that someone could point out as a "lie." + +Consider the implications of Yudkowsky giving us a clue as to the political forces at play in the form of [a disclaimer comment](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421833274228): + +> It unfortunately occurs to me that I must, in cases like these, disclaim that—to the extent there existed sensible opposing arguments against what I have just said—people might be reluctant to speak them in public, in the present social atmosphere. That is, in the logical counterfactual universe where I knew of very strong arguments against freedom of pronouns, I would have probably stayed silent on the issue, as would many other high-profile community members, and only Zack M. Davis would have said anything where you could hear it. 
+> +> This is a filter affecting your evidence; it has not to my own knowledge filtered out a giant valid counterargument that invalidates this whole post. I would have kept silent in that case, for to speak then would have been dishonest. +> +> Personally, I'm used to operating without the cognitive support of a civilization in controversial domains, and have some confidence in my own ability to independently invent everything important that would be on the other side of the filter and check it myself before speaking. So you know, from having read this, that I checked all the speakable and unspeakable arguments I had thought of, and concluded that this speakable argument would be good on net to publish, as would not be the case if I knew of a stronger but unspeakable counterargument in favor of Gendered Pronouns For Everyone and Asking To Leave The System Is Lying. +> +> But the existence of a wide social filter like that should be kept in mind; to whatever quantitative extent you don't trust your ability plus my ability to think of valid counterarguments that might exist, as a Bayesian you should proportionally update in the direction of the unknown arguments you speculate might have been filtered out. + +So, the explanation of [the problem of political censorship filtering evidence](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) here is great, but the part where Yudkowsky claims "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter" is just _laughable_. My point that _she_ and _he_ have existing meanings that you can't just ignore by fiat given that the existing meanings are _exactly_ what motivate people to ask for new pronouns in the first place is _really obvious_. + +Really, it would be _less_ embarassing for Yudkowsky if he were outright lying about having tried to think of counterarguments. The original post isn't _that_ bad if you assume that Yudkowsky was writing off the cuff, that he clearly just _didn't put any effort whatsoever_ into thinking about why someone might disagree. If he _did_ put in the effort—enough that he felt comfortable bragging about his ability to see the other side of the argument—and _still_ ended up proclaiming his "simplest and best protocol" without even so much as _mentioning_ any of its incredibly obvious costs ... that's just _pathetic_. If Yudkowsky's ability to explore the space of arguments is _that_ bad, why would you trust his opinion about _anything_? + +[TODO: discrediting to the community] + +The disclaimer comment mentions "speakable and unspeakable arguments"—but what, one wonders, is the boundary of the "speakable"? In response to a commenter mentioning the cost of having to remember pronouns as a potential counterargument, Yudkowsky [offers us another clue](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421833274228&reply_comment_id=10159421871809228): + +> People might be able to speak that. A clearer example of a forbidden counterargument would be something like e.g. imagine if there was a pair of experimental studies somehow proving that (a) everybody claiming to experience gender dysphoria was lying, and that (b) they then got more favorable treatment from the rest of society. We wouldn't be able to talk about that. 
No such study exists to the best of my own knowledge, and in this case we might well hear about it from the other side to whom this is the exact opposite of unspeakable; but that would be an example. + +(As an aside, the wording of "we might well hear about it from _the other side_" (emphasis mine) is _very_ interesting, suggesting that the so-called "rationalist" community is, effectively, a partisan institution, despite its claims to be about advancing the generically human art of systematically correct reasoning.) + +I think (a) and (b) _as stated_ are clearly false, so "we" (who?) fortunately aren't losing much by allegedly not being able to speak them. But what about some _similar_ hypotheses, that might be similarly unspeakable for similar reasons? + +Instead of (a), consider the claim that (a′) self-reports about gender dysphoria are substantially distorted by [socially-desirable responding tendencies](https://en.wikipedia.org/wiki/Social-desirability_bias)—as a notable and common example, heterosexual males with [sexual fantasies about being female](http://www.annelawrence.com/autogynephilia_&_MtF_typology.html) [often falsely deny or minimize the erotic dimension of their desire to change sex](/papers/blanchard-clemmensen-steiner-social_desirability_response_set_and_systematic_distortion.pdf). (The idea that self-reports can be motivatedly inaccurate without the subject consciously "lying" should not be novel to someone who co-blogged with [Robin Hanson](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain) for years!) + +And instead of (b), consider the claim that (b′) transitioning is socially rewarded within particular _subcultures_ (although not Society as a whole), such that many of the same people wouldn't think of themselves as trans or even gender-dysphoric if they lived in a different subculture. + +I claim that (a′) and (b′) are _overwhelmingly likely to be true_. Can "we" talk about _that_? Are (a′) and (b′) "speakable", or not? We're unlikely to get clarification from Yudkowsky, but based on the Whole Dumb Story I've been telling you about how I wasted the last six years of my life on this, I'm going to _guess_ that the answer is broadly No: no, "we" can't talk about that. (_I_ can say it, and people can debate me in a private Discord server where the general public isn't looking, but it's not something someone of Yudkowsky's stature can afford to acknowledge.) + +But if I'm right that (a′) and (b′) should be live hypotheses and that Yudkowsky would consider them "unspeakable", that means "we" can't talk about what's _actually going on_ with gender dysphoria and transsexuality, which puts the whole discussion in a different light. In another comment, Yudkowsky lists some gender-transition interventions he named in the [November 2018 "hill of meaning in defense of validity" Twitter thread](https://twitter.com/ESYudkowsky/status/1067183500216811521)—using a different bathroom, changing one's name, asking for new pronouns, and getting sex reassignment surgery—and notes that none of these are calling oneself a "woman". [He continues](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421986539228&reply_comment_id=10159424960909228): + +> [Calling someone a "woman"] _is_ closer to the right sort of thing _ontologically_ to be true or false. 
More relevant to the current thread, now that we have a truth-bearing sentence, we can admit of the possibility of using our human superpower of language to _debate_ whether this sentence is indeed true or false, and have people express their nuanced opinions by uttering this sentence, or perhaps a more complicated sentence using a bunch of caveats, or maybe using the original sentence uncaveated to express their belief that this is a bad place for caveats. Policies about who uses what bathroom also have consequences and we can debate the goodness or badness (not truth or falsity) of those policies, and utter sentences to declare our nuanced or non-nuanced position before or after that debate. +> +> Trying to pack all of that into the pronouns you'd have to use in step 1 is the wrong place to pack it. + +Sure, _if we were in the position of designing a constructed language from scratch_ under current social conditions in which a person's "gender" is understood as a contested social construct, rather than their sex being an objective and undisputed fact, then yeah: in that situation _which we are not in_, you definitely wouldn't want to pack sex or gender into pronouns. But it's a disingenuous derailing tactic to grandstand about how people need to alter the semantics of their _already existing_ native language so that we can discuss the real issues under an allegedly superior pronoun convention when, _by your own admission_, you have _no intention whatsoever of discussing the real issues!_ + +(Lest the "by your own admission" clause seem too accusatory, I should note that given constant behavior, admitting it is _much_ better than not-admitting it; so, huge thanks to Yudkowsky for the transparency on this point!) + +Again, as discussed in "Challenges to Yudkowsky's Pronoun Reform Proposal", a comparison to [the _tú_/_usted_ distinction](https://en.wikipedia.org/wiki/Spanish_personal_pronouns#T%C3%BA/vos_and_usted) is instructive. It's one thing to advocate for collapsing the distinction and just settling on one second-person singular pronoun for the Spanish language. That's principled. + +It's quite another thing altogether to _simultaneously_ try to prevent a speaker from using _tú_ to indicate disrespect towards a social superior (on the stated rationale that the _tú_/_usted_ distinction is dumb and shouldn't exist), while _also_ refusing to entertain or address the speaker's arguments explaining _why_ they think their interlocutor is unworthy of the deference that would be implied by _usted_ (because such arguments are "unspeakable" for political reasons). That's just psychologically abusive. + +If Yudkowsky _actually_ possessed (and felt motivated to use) the "ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking", it would be _obvious_ to him that "Gendered Pronouns For Everyone and Asking To Leave The System Is Lying" isn't the hill anyone would care about dying on if it weren't a Schelling point. A lot of TERF-adjacent folk would be _overjoyed_ to concede the (boring, insubstantial) matter of pronouns as a trivial courtesy if it meant getting to _actually_ address their real concerns of "Biological Sex Actually Exists", and ["Biological Sex Cannot Be Changed With Existing or Foreseeable Technology"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) and "Biological Sex Is Sometimes More Relevant Than Subjective Gender Identity." 
The reason so many of them are inclined to stand their ground and not even offer the trivial courtesy is because they suspect, correctly, that the matter of pronouns is being used as a rhetorical wedge to try to prevent people from talking or thinking about sex. + +Having analyzed the _ways_ in which Yudkowsky is playing dumb here, what's still not entirely clear is _why_. Presumably he cares about maintaining his credibility as an insightful and fair-minded thinker. Why tarnish that by putting on this haughty performance? + +Of course, presumably he _doesn't_ think he's tarnishing it—but why not? [He graciously explains in the Facebook comments](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421833274228&reply_comment_id=10159421901809228): + +> it is sometimes personally prudent and not community-harmful to post your agreement with Stalin about things you actually agree with Stalin about, in ways that exhibit generally rationalist principles, especially because people do _know_ they're living in a half-Stalinist environment [...] I think people are better off at the end of that. + +Ah, _prudence_! He continues: + +> I don't see what the alternative is besides getting shot, or utter silence about everything Stalin has expressed an opinion on including "2 + 2 = 4" because if that logically counterfactually were wrong you would not be able to express an opposing opinion. + +The problem with trying to "exhibit generally rationalist principles" in an line of argument that you're constructing in order to be prudent and not community-harmful, is that you're thereby necessarily _not_ exhibiting the central rationalist principle that what matters is the process that _determines_ your conclusion, not the reasoning you present to _reach_ your conclusion, after the fact. + +The best explanation of this I know of was authored by Yudkowsky himself in 2007, in a post titled ["A Rational Argument"](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument). It's worth quoting at length. The Yudkowsky of 2007 invites us to consider the plight of a political campaign manager: + +> As a campaign manager reading a book on rationality, one question lies foremost on your mind: "How can I construct an impeccable rational argument that Mortimer Q. Snodgrass is the best candidate for Mayor of Hadleyburg?" +> +> Sorry. It can't be done. +> +> "What?" you cry. "But what if I use only valid support to construct my structure of reason? What if every fact I cite is true to the best of my knowledge, and relevant evidence under Bayes's Rule?" +> +> Sorry. It still can't be done. You defeated yourself the instant you specified your argument's conclusion in advance. + +The campaign manager is in possession of a survey of mayoral candidates on which Snodgrass compares favorably to other candidates, except for one question. The post continues (bolding mine): + +> So you are tempted to publish the questionnaire as part of your own campaign literature ... with the 11th question omitted, of course. +> +> **Which crosses the line between _rationality_ and _rationalization_.** It is no longer possible for the voters to condition on the facts alone; they must condition on the additional fact of their presentation, and infer the existence of hidden evidence. +> +> Indeed, **you crossed the line at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it.** "What!" you cry. 
"A campaign should publish facts unfavorable to their candidate?" But put yourself in the shoes of a voter, still trying to select a candidate—why would you censor useful information? You wouldn't, if you were genuinely curious. If you were flowing _forward_ from the evidence to an unknown choice of candidate, rather than flowing _backward_ from a fixed candidate to determine the arguments. + +The post then briefly discusses the idea of a "logical" argument, one whose conclusions follow from its premises. "All rectangles are quadrilaterals; all squares are quadrilaterals; therefore, all squares are rectangles" is given as an example of _illogical_ argument, even though both premises are true (all rectangles and squares are in fact quadrilaterals) _and_ the conclusion is true (all squares are in fact rectangles). The problem is that the conclusion doesn't _follow_ from the premises; the _reason_ all squares are rectangles isn't _because_ they're both quadrilaterals. If we accepted arguments of the general _form_ "all A are C; all B are C; therefore all A are B", we would end up believing nonsense. + +Yudkowsky's conception of a "rational" argument—at least, Yudkowsky's conception in 2007, which the Yudkowsky of the current year seems to disagree with—has a similar flavor: the stated reasons should be the actual reasons. The post concludes: + +> If you really want to present an honest, rational argument _for your candidate_, in a political campaign, there is only one way to do it: +> +> * _Before anyone hires you_, gather up all the evidence you can about the different candidates. +> * Make a checklist which you, yourself, will use to decide which candidate seems best. +> * Process the checklist. +> * Go to the winning candidate. +> * Offer to become their campaign manager. +> * When they ask for campaign literature, print out your checklist. +> +> Only in this way can you offer a _rational_ chain of argument, one whose bottom line was written flowing _forward_ from the lines above it. Whatever _actually_ decides your bottom line is the only thing you can _honestly_ write on the lines above. + +I remember this being pretty shocking to read back in 'aught-seven. What an alien mindset! But it's _correct_. You can't rationally argue "for" a chosen conclusion, because only the process you use to _decide what to argue for_ can be your real reason. + +This is a shockingly high standard for anyone to aspire to live up to—but what made Yudkowsky's Sequences so life-changingly valuable, was that they articulated the _existence_ of such a standard. For that, I will always be grateful. + +... which is why it's so _bizarre_ that the Yudkowsky of the current year acts like he's never heard of it! If your _actual_ bottom line is that it is sometimes personally prudent and not community-harmful to post your agreement with Stalin, then sure, you can _totally_ find something you agree with to write on the lines above! Probably something that "exhibits generally rationalist principles", even! It's just that any rationalist who sees the game you're playing is going to correctly identify you as a _partisan hack_ on this topic and take that into account when deciding whether they can trust you on other topics. + +"I don't see what the alternative is besides getting shot," Yudkowsky muses (where presumably, 'getting shot' is a metaphor for a large negative utility, like being unpopular with progressives). Yes, an astute observation! And _any other partisan hack could say exactly the same_, for the same reason. 
Why does the campaign manager withhold the results of the 11th question? Because he doesn't see what the alternative is besides getting shot. + +Yudkowsky [sometimes](https://www.lesswrong.com/posts/K2c3dkKErsqFd28Dh/prices-or-bindings) [quotes](https://twitter.com/ESYudkowsky/status/1456002060084600832) _Calvin and Hobbes_: "I don't know which is worse, that everyone has his price, or that the price is always so low." If the idea of being fired from the Snodgrass campaign or being unpopular with progressives is _so_ terrifying to you that it seems analogous to getting shot, then, if those are really your true values, sure—say whatever you need to say to keep your job and your popularity, as is personally prudent. You've set your price. But if the price you put on the intellectual integrity of your so-called "rationalist" community is similar to that of the Snodgrass for Mayor campaign, you shouldn't be surprised if intelligent, discerning people accord similar levels of credibility to the two groups' output. + +I see the phrase "bad faith" thrown around more than I think people know what it means. "Bad faith" doesn't mean "with ill intent", and it's more specific than "dishonest": it's [adopting the surface appearance of being moved by one set of motivations, while actually acting from another](https://en.wikipedia.org/wiki/Bad_faith). + +For example, an [insurance company employee](https://en.wikipedia.org/wiki/Claims_adjuster) who goes through the motions of investigating your claim while privately intending to deny it might never consciously tell an explicit "lie", but is definitely acting in bad faith: they're asking you questions, demanding evidence, _&c._ in order to _make it look like_ you'll get paid if you prove the loss occurred—whereas in reality, you're just not going to be paid. Your responses to the claim inspector aren't completely causally _inert_: if you can make an extremely strong case that the loss occurred as you say, then the claim inspector might need to put some effort into coming up with some ingenious excuse to deny your claim, in ways that exhibit general claim-inspection principles. But at the end of the day, the inspector is going to say what they need to say in order to protect the company's loss ratio, as is sometimes personally prudent. + +With this understanding of bad faith, we can read Yudkowsky's "it is sometimes personally prudent [...]" comment as admitting that his behavior on politically-charged topics is in bad faith—where "bad faith" isn't a meaningless insult, but [literally refers](http://benjaminrosshoffman.com/can-crimes-be-discussed-literally/) to the pretending-to-have-one-set-of-motivations-while-acting-according-to-another behavior, such that accusations of bad faith can be true or false. Yudkowsky will take care not to consciously tell an explicit "lie", while going through the motions to _make it look like_ he's genuinely engaging with questions where I need the right answers in order to make extremely impactful social and medical decisions—whereas in reality, he's only going to address a selected subset of the relevant evidence and arguments that won't get him in trouble with progressives. 
+ +To his credit, he _will_ admit that he's only willing to address a selected subset of arguments—but while doing so, he claims an absurd "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking" while _simultaneously_ blatantly mischaracterizing his opponents' beliefs! ("Gendered Pronouns For Everyone and Asking To Leave The System Is Lying" doesn't pass anyone's [ideological Turing test](https://www.econlib.org/archives/2011/06/the_ideological.html).) + +Counterarguments aren't completely causally _inert_: if you can make an extremely strong case that Biological Sex Is Sometimes More Relevant Than Self-Declared Gender Identity, Yudkowsky will put some effort into coming up with some ingenious excuse for why he _technically_ never said otherwise, in ways that exhibit generally rationalist principles. But at the end of the day, Yudkowsky is going to say what he needs to say in order to protect his reputation, as is sometimes personally prudent. + +Even if one were to agree with this description of Yudkowsky's behavior, it doesn't immediately follow that Yudkowsky is making the wrong decision. Again, "bad faith" is meant as a literal description that makes predictions about behavior, not a contentless attack—maybe there are some circumstances in which engaging some amount of bad faith is the right thing to do, given the constraints one faces! For example, when talking to people on Twitter with a very different ideological background from me, I sometimes anticipate that if my interlocutor knew what I was actually thinking, they wouldn't want to talk to me, so I occasionally engage in a bit of what could be called ["concern trolling"](https://geekfeminism.fandom.com/wiki/Concern_troll): I take care to word my replies in a way that makes it look like I'm more ideologically aligned with my interlocutor than I actually am. (For example, I [never say "assigned female/male at birth" in my own voice on my own platform](/2019/Sep/terminology-proposal-developmental-sex/), but I'll do it in an effort to speak my interlocutor's language.) I think of this as the _minimal_ amount of strategic bad faith needed to keep the conversation going, to get my interlocutor to evaluate my argument on its own merits, rather than rejecting it for coming from an ideological enemy. In cases such as these, I'm willing to defend my behavior as acceptable—there _is_ a sense in which I'm being deceptive by optimizing my language choice to make my interlocutor make bad guesses about my ideological alignment, but I'm comfortable with that amount and scope of deception in the service of correcting the distortion where I don't think my interlocutor _should_ be paying attention to my personal alignment. + +That is, my bad faith concern-trolling gambit of deceiving people about my ideological alignment in the hopes of improving the discussion seems like something that makes our collective beliefs about the topic-being-argued-about _more_ accurate. (And the topic-being-argued-about is presumably of greater collective interest than which "side" I personally happen to be on.) + +In contrast, the "it is sometimes personally prudent [...] to post your agreement with Stalin" gambit is the exact reverse: it's _introducing_ a distortion into the discussion in the hopes of correcting people's beliefs about the speaker's ideological alignment. 
(Yudkowsky is not a right-wing Bad Guy, but people would tar him as a right-wing Bad Guy if he ever said anything negative about trans people.) This doesn't improve our collective beliefs about the topic-being-argued-about; it's a _pure_ ass-covering move. + +Yudkowsky names the alleged fact that "people do _know_ they're living in a half-Stalinist environment" as a mitigating factor. But the _reason_ censorship is such an effective tool in the hands of dictators like Stalin is because it ensures that many people _don't_ know—and that those who know (or suspect) don't have [game-theoretic common knowledge](https://www.lesswrong.com/posts/9QxnfMYccz9QRgZ5z/the-costly-coordination-mechanism-of-common-knowledge#Dictators_and_freedom_of_speech) that others do too. + +Zvi Mowshowitz has [written about how the false assertion that "everybody knows" something](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) is typically used to justify deception: if "everybody knows" that we can't talk about biological sex (the rationalization goes), then no one is being deceived when our allegedly truthseeking discussion carefully steers clear of any reference to the reality of biological sex when it would otherwise be extremely relevant. + +But if it were _actually_ the case that everybody knew (and everybody knew that everybody knew), then what would be the point of the censorship? It's not coherent to claim that no one is being harmed by censorship because everyone knows about it, because the entire appeal and purpose of censorship is precisely that _not_ everybody knows and that someone with power wants to _keep_ it that way. + +For the savvy people in the know, it would certainly be _convenient_ if everyone secretly knew: then the savvy people wouldn't have to face the tough choice between acceding to Power's demands (at the cost of deceiving their readers) and informing their readers (at the cost of incurring Power's wrath). + +Policy debates should not appear one-sided. Faced with this kind of dilemma, I can't say that defying Power is necessarily the right choice: if there really _were_ no other options between deceiving your readers with a bad faith performance, and incurring Power's wrath, and Power's wrath would be too terrible to bear, then maybe deceiving your readers with a bad faith performance is the right thing to do. + +But if you _actually cared_ about not deceiving your readers, you would want to be _really sure_ that those _really were_ the only two options. You'd [spend five minutes by the clock looking for third alternatives](https://www.lesswrong.com/posts/erGipespbbzdG5zYb/the-third-alternative)—including, possibly, not issuing proclamations on your honor as leader of the so-called "rationalist" community on topics where you _explicitly intend to ignore counterarguments on grounds of their being politically unfavorable_. 
Yudkowsky rejects this alternative on the grounds that it allegedly implies "utter silence about everything Stalin has expressed an opinion on including '2 + 2 = 4' because if that logically counterfactually were wrong you would not be able to express an opposing opinion", but this seems like yet another instance of Yudkowsky motivatedly playing dumb: _if he wanted to_, I'm sure Eliezer Yudkowsky could think of _some relevant differences_ between "2 + 2 = 4" (a trivial fact of arithmetic) and "the simplest and best protocol is, "'He' refers to the set of people who have asked us to use 'he'" (a complex policy proposal whose numerous flaws I have analyzed in detail). + +"[P]eople do _know_ they're living in a half-Stalinist environment," Yudkowsky says. "I think people are better off at the end of that," he says. But who are "people", specifically? One of the problems with utilitarianism is that it doesn't interact well with game theory. If a policy makes most people better off, at the cost of throwing a few others under the bus, is it the right thing to do? Depending on the details, maybe! But you probably shouldn't expect the victims to meekly go under the wheels without a fight. That's why I'm telling you this 50,000-word sob story about how _I_ didn't know, and _I'm_ not better off. + +In [one of Yudkowsky's roleplaying fiction threads](https://www.glowfic.com/posts/4508), Thellim, a woman hailing from [a saner alternate version of Earth called dath ilan](https://www.lesswrong.com/tag/dath-ilan), [expresses horror and disgust at how shallow and superficial the characters in Jane Austen's _Pride and Prejudice_ are, in contrast to what a human being _should_ be](https://www.glowfic.com/replies/1592898#reply-1592898): + +> [...] the author has made zero attempt to even try to depict Earthlings as having reflection, self-observation, a fire of inner life; most characters in _Pride and Prejudice_ bear the same relationship to human minds as a stick figure bears to a photograph. People, among other things, have the property of trying to be people; the characters in Pride and Prejudice have no visible such aspiration. Real people have concepts of their own minds, and contemplate their prior ideas of themselves in relation to a continually observed flow of their actual thoughts, and try to improve both their self-models and their selves. It's impossible to imagine any of these people, even Elizabeth, as doing that thing Thellim did a few hours ago, where she noticed she was behaving like Verrez and snapped out of it. Just like any particular Verrez always learns to notice he is being Verrez and snap out of it, by the end of any of his alts' novels. + +When someone else doesn't see the problem with Jane Austen's characters, Thellim [redoubles her determination to explain the problem](https://www.glowfic.com/replies/1592987#reply-1592987): "_She is not giving up that easily. Not on an entire planet full of people._" + +Thellim's horror at the fictional world of Jane Austen is basically how I feel about "trans" culture in the current year. It _actively discourages self-modeling!_ People who have cross-sex fantasies are encouraged to reify them into a gender identity which everyone else is supposed to unquestioningly accept. 
Obvious critical questions about what's actually going on etiologically, what it means for an identity to be true, _&c._ are strongly discouraged as hateful, hurtful, distressing, _&c._ + +The problem is _not_ that I think there's anything wrong with having cross-sex fantasies, and wanting the fantasy to become real—just as Thellim's problem with _Pride and Prejudice_ is not that there's anything wrong with wanting to marry a suitable bachelor. These are perfectly respectable goals. + +The _problem_ is that people who are trying to be people, people who are trying to achieve their goals _in reality_, do so in a way that involves having concepts of their own minds, and trying to improve both their self-models and their selves—and that's _not possible_ in a culture that tries to ban, as heresy, the idea that it's possible for someone's self-model to be wrong. + +A trans woman I follow on Twitter complained that a receptionist at her workplace said she looked like some male celebrity. "I'm so mad," she fumed. "I look like this right now"—there was a photo attached to the Tweet—"how could anyone ever think that was an okay thing to say?" + +It _is_ genuinely sad that the author of those Tweets didn't get perceived the way she would prefer! But the thing I want her to understand, a thing I think any sane adult should understand— + +_It was a compliment!_ That receptionist was almost certainly thinking of [David Bowie](https://en.wikipedia.org/wiki/David_Bowie) or [Eddie Izzard](https://en.wikipedia.org/wiki/Eddie_Izzard), rather than being hateful and trying to hurt. + +The author should have graciously accepted the compliment, and _done something to pass better next time_. The horror of trans culture is that it's impossible to imagine any of these people doing that—of noticing that they're behaving like a TERF's hostile stereotype of a narcissistic, gaslighting trans-identified male and snapping out of it. + +I want a shared cultural understanding that the _correct_ way to ameliorate the genuine sadness of people not being perceived the way they prefer is through things like _better and cheaper facial feminization surgery_, not _[emotionally blackmailing](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) people out of their ability to report what they see_. I don't _want_ to relinquish [my ability to notice what women's faces look like](/papers/bruce_et_al-sex_discrimination_how_do_we_tell.pdf), even if that means noticing that mine isn't; if I'm sad that it isn't, I can endure the sadness if the alternative is _forcing everyone in my life to doublethink around their perceptions of me_. + +In a world where surgery is expensive, but some people desperately want to change sex and other people want to be nice to them, there's an incentive gradient in the direction of re-binding our shared concept of "gender" onto things like [ornamental clothing](http://thetranswidow.com/2021/02/18/womens-clothing-is-always-drag-even-on-women/) that are easier to change than secondary sex characteristics. + +But I would have expected people with the barest inkling of self-awareness and honesty to ... notice the incentives, and notice the problems being created by the incentives, and to talk about the problems in public so that we can coordinate on the best solution, [whatever that turns out to be](/2021/Sep/i-dont-do-policy/)? 
+ +And if that's too much to expect of the general public— + +And if it's too much to expect garden-variety "rationalists" to figure out on their own without prompting from their superiors— + +Then I would have at least expected Eliezer Yudkowsky to take actions _in favor of_ rather than _against_ his faithful students having these very basic capabilities for reflection, self-observation, and ... _speech_? I would have expected Eliezer Yudkowsky to not _actively exert optimization pressure in the direction of transforming me into a Jane Austen character_. + +This is the part where Yudkowsky or his flunkies accuse me of being uncharitable, of failing at perspective-taking. Obviously, Yudkowsky doesn't _think of himself_ as trying to transform his faithful students into Jane Austen characters. One might ask if it does not therefore follow that I have failed to understand his position? [As Yudkowsky put it](https://twitter.com/ESYudkowsky/status/1435618825198731270): + +> The Other's theory of themselves usually does not make them look terrible. And you will not have much luck just yelling at them about how they must really be doing `terrible_thing` instead. + +But the substance of my accusations is not about Yudkowsky's _conscious subjective narrative_. I don't have a lot of uncertainty about Yudkowsky's _theory of himself_, because he told us that, very clearly: "it is sometimes personally prudent and not community-harmful to post your agreement with Stalin about things you actually agree with Stalin about, in ways that exhibit generally rationalist principles, especially because people do _know_ they're living in a half-Stalinist environment." I don't doubt that that's [how the algorithm feels from the inside](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside). + +But my complaint is about the work the algorithm is _doing_ in Stalin's service, not about how it _feels_; I'm talking about a pattern of _publicly visible behavior_ stretching over years. (Thus, "take actions" in favor of/against, rather than "be"; "exert optimization pressure in the direction of", rather than "try".) I agree that everyone has a story in which they don't look terrible, and that people mostly believe their own stories, but _it does not therefore follow_ that no one ever does anything terrible. + +I agree that you won't have much luck yelling at the Other about how they must really be doing `terrible_thing`. (People get very invested in their own stories.) But if you have the _receipts_ of the Other repeatedly doing `terrible_thing` in public over a period of years, maybe yelling about it to _everyone else_ might help _them_ stop getting suckered by the Other's fraudulent story. + +Let's recap. + +[TODO: recap— +* in 2009, "Changing Emotions" +* in 2016, "20% of the ones with penises" +* ... +] + +I _never_ expected to end up arguing about something so _trivial_ as the minutiae of pronoun conventions (which no one would care about if historical contingencies of the evolution of the English language hadn't made them a Schelling point and typographical attack surface for things people do care about). The conversation only ended up here after a series of derailings. At the start, I was _trying_ to say something substantive about the psychology of straight men who wish they were women. 
+ +_After it's been pointed out_, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female counterpart" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_. + +At this point, the nature of the game is very clear. Yudkowsky wants to make sure he's on peaceful terms with the progressive _Zeitgeist_, subject to the constraint of not saying anything he knows to be false. Meanwhile, I want to actually make sense of what's actually going on in the world as regards sex and gender, because _I need the correct answer to decide whether or not to cut my dick off_. + +On "his turn", he comes up with some pompous proclamation that's very obviously optimized to make the "pro-trans" faction look smart and good and make the "anti-trans" faction look dumb and bad, "in ways that exhibit generally rationalist principles." + +On "my turn", I put in an _absurd_ amount of effort explaining in exhaustive, _exhaustive_ detail why Yudkowsky's pompous proclamation, while [not technically making any unambiguously "false" atomic statements](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly), was _substantively misleading_ as contrasted to what any serious person would say if they were actually trying to make sense of the world without worrying what progressive activists would think of them. + +In the context of AI alignment theory, Yudkowsky has written about a "nearest unblocked strategy" phenomenon: if you directly prevent an agent from accomplishing a goal via some plan that you find undesirable, the agent will search for ways to route around that restriction, and probably find some plan that you find similarly undesirable for similar reasons. + +Suppose you developed an AI to [maximize human happiness subject to the constraint of obeying explicit orders](https://arbital.greaterwrong.com/p/nearest_unblocked#exampleproducinghappiness). It might first try administering heroin to humans. When you order it not to, it might switch to administering cocaine. When you order it to not use any of a whole list of banned happiness-producing drugs, it might switch to researching new drugs, or just _pay_ humans to take heroin, _&c._ + +It's the same thing with Yudkowsky's political-risk minimization subject to the constraint of not saying anything he knows to be false. First he comes out with ["I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) (March 2016). When you point out that [that's not true](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), then the next time he revisits the subject, he switches to ["you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning"](https://archive.is/Iy8Lq) (November 2018). When you point out that [_that's_ not true either](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong), he switches to "It is Shenanigans to try to bake your stance on how clustered things are [...] _into the pronoun system of a language and interpretation convention that you insist everybody use_" (February 2021). When you point out [that's not what's going on](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/), he switches to ... 
I don't know, but he's a smart guy; in the unlikely event that he sees fit to respond to this post, I'm sure he'll be able to think of _something_—but at this point, _I have no reason to care_. Talking to Yudkowsky on topics where getting the right answer would involve acknowledging facts that would make you unpopular in Berkeley is a _waste of everyone's time_; trying to inform you isn't [his bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line). + +Accusing one's interlocutor of bad faith is frowned upon for a reason. We would prefer to live in a world where we have intellectually fruitful object-level discussions under the assumption of good faith, rather than risk our fora degenerating into an acrimonious brawl of accusations and name-calling, which is unpleasant and (more importantly) doesn't make any intellectual progress. I, too, would prefer to have a real object-level discussion under the assumption of good faith. + +Accordingly, I tried the object-level good-faith argument thing _first_. I tried it for _years_. But at some point, I think I should be _allowed to notice_ the nearest-unblocked-strategy game which is _very obviously happening_ if you look at the history of what was said. I think there's _some_ number of years and _some_ number of thousands of words of litigating the object-level _and_ the meta level after which there's nothing left for me to do but jump up to the meta-meta level and explain, to anyone capable of hearing it, why in this case I think I've accumulated enough evidence for the assumption of good faith to have been _empirically falsified_. + +(Obviously, if we're crossing the Rubicon of abandoning the norm of assuming good faith, it needs to be abandoned symmetrically. I _think_ I'm doing a _pretty good_ job of adhering to standards of intellectual conduct and being transparent about my motivations, but I'm definitely not perfect, and, unlike Yudkowsky, I'm not so absurdly miscalibratedly arrogant as to claim "confidence in my own ability to independently invent everything important" (!) about my topics of interest. If Yudkowsky or anyone else thinks they _have a case_ based on my behavior that _I'm_ being culpably intellectually dishonest, they of course have my blessing and encouragement to post it for the audience to evaluate.) + +What makes all of this especially galling is the fact that _all of my heretical opinions are literally just Yudkowsky's opinions from the 'aughts!_ My whole thing about how changing sex isn't possible with existing technology because the category encompasses so many high-dimensional details? Not original to me! I [filled in a few technical details](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard), but again, this was _in the Sequences_ as ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). My thing about how you can't define concepts any way you want because there are mathematical laws governing which category boundaries compress your anticipated experiences? Not original to me! I [filled in](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [a few technical details](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), but [_we had a whole Sequence about this._](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) + +Seriously, you think I'm _smart enough_ to come up with all of this independently? I'm not! 
I ripped it all off from Yudkowsky back in the 'aughts _when he still gave a shit about telling the truth_. (Actively telling the truth, and not just technically not lying.) The things I'm hyperfocused on that he thinks are politically impossible to say, are things he _already said_, that anyone could just look up! + +I guess the point is that the egregore doesn't have the logical or reading comprehension for that?—or rather the egregore has no reason to care about the past; if you get tagged by the mob as an Enemy, your past statements will get dug up as evidence of foul present intent, but if you're doing good enough of playing the part today, no one cares what you said in 2009? + +Does ... does he expect the rest of us not to _notice_? Or does he think that "everybody knows"? + +But I don't, think that everybody knows. And I'm not, giving up that easily. Not on an entire subculture full of people. + +Yudkowsky [defends his behavior](https://twitter.com/ESYudkowsky/status/1356812143849394176): + +> I think that some people model civilization as being in the middle of a great battle in which this tweet, even if true, is giving comfort to the Wrong Side, where I would not have been as willing to tweet a truth helping the Right Side. From my perspective, this battle just isn't that close to the top of my priority list. I rated nudging the cognition of the people-I-usually-respect, closer to sanity, as more important; who knows, those people might matter for AGI someday. And the Wrong Side part isn't as clear to me either. + +[TODO: first of all, "A Rational Arugment" is very explicit about "not have been as willing to Tweet a truth helping the side" meaning you've crossed the line; second of all, it's if anything more plausible that trans women will matter to AGI, as I pointed out in my email] + +But the battle that matters—the battle with a Right Side and a Wrong Side—isn't "pro-trans" _vs._ "anti-trans". (The central tendency of the contemporary trans rights movement is firmly on the Wrong Side, but that's not the same thing as all trans people as individuals.) That's why Jessica joined our posse to try to argue with Yudkowsky in early 2019. (She wouldn't have, if my objection had been, "trans is fake; trans people Bad".) That's why Somni—one of the trans women who [infamously protested the 2019 CfAR reunion](https://www.ksro.com/2019/11/18/new-details-in-arrests-of-masked-camp-meeker-protesters/) for (among other things) CfAR allegedly discriminating against trans women—[understands what I've been saying](https://somnilogical.tumblr.com/post/189782657699/legally-blind). + +The battle that matters—and I've been _very_ explicit about this, for years—is over this proposition eloquently stated by Scott Alexander (redacting the irrelevant object-level example): + +> I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should. + +This is a battle between Feelings and Truth, between Politics and Truth. + +In order to take the side of Truth, you need to be able to tell Joshua Norton that he's not actually Emperor of the United States (even if it hurts him). 
You need to be able to tell a prideful autodidact that the fact that he's failing quizzes in his community college differential equations class is evidence that his study methods aren't doing what he thought they were (even if it hurts him). And you need to be able to say, in public, that trans women are male and trans men are female _with respect to_ a female/male "sex" concept that encompasses the many traits that aren't affected by contemporary surgical and hormonal interventions (even if it hurts someone who does not like to be tossed into a Male Bucket or a Female Bucket as it would be assigned by their birth certificate, and—yes—even if it probabilistically contributes to that person's suicide). + +If you don't want to say those things because hurting people is wrong, then you have chosen Feelings. + +Scott Alexander chose Feelings, but I can't really hold that against him, because Scott is [very explicit about only acting in the capacity of some guy with a blog](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/). You can tell from his writings that he never wanted to be a religious leader; it just happened to him by accident because he writes faster than everyone else. I like Scott. Scott is great. I feel sad that such a large fraction of my interactions with him over the years have taken such an adversarial tone. + +Eliezer Yudkowsky ... did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he knows to be unambiguously false. + + +[TODO— finish Yudkowsky trying to be a religious leader +Eliezer Yudkowsky is _absolutely_ trying to be a religious leader. + +If Eliezer Yudkowsky can't _unambiguously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_. + +] + +[TODO section existential stakes, cooperation] + +> [_Perhaps_, replied the cold logic](https://www.yudkowsky.net/other/fiction/the-sword-of-good). _If the world were at stake._ +> +> _Perhaps_, echoed the other part of himself, _but that is not what was actually happening._ + +[TODO: social justice and defying threats + +at least Sabbatai Zevi had an excuse: his choices were to convert to Islam or be impaled https://en.wikipedia.org/wiki/Sabbatai_Zevi#Conversion_to_Islam +] + + +I like to imagine that they have a saying out of dath ilan: once is happenstance; twice is coincidence; _three times is hostile optimization_. + +I could forgive him for taking a shit on d4 of my chessboard (["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228)). I could even forgive him for subsequently taking a shit on e4 of my chessboard (["you're not standing in defense of truth if you insist on a word [...]"](https://twitter.com/ESYudkowsky/status/1067198993485058048)) as long as he wiped most of the shit off afterwards (["you are being the bad guy if you try to shut down that conversation by saying that 'I can define the word "woman" any way I want'"](https://www.facebook.com/yudkowsky/posts/10158853851009228)), even though, really, I would have expected someone so smart to take a hint after the incident on d4. + +But if he's _then_ going to take a shit on c3 of my chessboard (["In terms of important things? Those would be all the things I've read [...]
describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate"](https://www.facebook.com/yudkowsky/posts/10159421750419228)), + + + + +The turd on c3 is a pretty big likelihood ratio! + + + + + +[TODO: the dolphin war, our thoughts about dolphins are literally downstream from Scott's political incentives in 2014; this is a sign that we're a cult + +https://twitter.com/ESYudkowsky/status/1404700330927923206 +> That is: there's a story here where not just particular people hounding Zack as a responsive target, but a whole larger group, are engaged in a dark conspiracy that is all about doing damage on issues legible to Zack and important to Zack. This is merely implausible on priors. + +I mean, I wouldn't _call_ it a "dark conspiracy" exactly, but if the people with intellectual authority are computing what to say on the principle of "it is sometimes personally prudent and not community-harmful to post [their] agreement with Stalin", and Stalin cares a lot about doing damage on issues legible and important to me, then, pragmatically, I think that has _similar effects_ on the state of our collective knowledge as a dark conspiracy, even if the mechanism of coordination is each individual being separately terrified of Stalin, rather than them meeting with dark robes to plot under a full moon. + +[when you consider the contrast between how Yudkowsky talks about sex differences, and how he panders to trans people—that really does look like he's participating in a conspiracy to do damage on issues legible to me; if there's no conspiracy, how else am I supposed to explain the difference?] + +] + +[TODO: sneering at post-rats; David Xu interprets criticism of Eliezer as me going "full post-rat"?! 6 September 2021 + +> Also: speaking as someone who's read and enjoyed your LW content, I do hope this isn't a sign that you're going full post-rat. It was bad enough when QC did it (though to his credit QC still has pretty decent Twitter takes, unlike most post-rats). + +https://twitter.com/davidxu90/status/1435106339550740482 +] + + +David Xu writes (with Yudkowsky ["endors[ing] everything [Xu] just said"](https://twitter.com/ESYudkowsky/status/1436025983522381827)): + +> I'm curious what might count for you as a crux about this; candidate cruxes I could imagine include: whether some categories facilitate inferences that _do_, on the whole, cause more harm than benefit, and if so, whether it is "rational" to rule that such inferences should be avoided when possible, and if so, whether the best way to disallow a large set of potential inferences is [to] proscribe the use of the categories that facilitate them—and if _not_, whether proscribing the use of a category in _public communication_ constitutes "proscribing" it more generally, in a way that interferes with one's ability to perform "rational" thinking in the privacy of one's own mind. +> +> That's four possible (serial) cruxes I listed, one corresponding to each "whether". + +I reply: on the first and second cruxes, concerning whether some categories facilitate inferences that cause more harm than benefit on the whole and whether they should be avoided when possible, I ask: harm _to whom?_ Not all agents have the same utility function! 
If some people are harmed by other people making certain probabilistic inferences, then it would seem that there's a _conflict_ between the people harmed (who prefer that such inferences be avoided if possible), and people who want to make and share probabilistic inferences about reality (who think that that which can be destroyed by the truth should be). + +On the third crux, whether the best way to disallow a large set of potential inferences is to proscribe the use of the categories that facilitate them: well, it's hard to be sure whether it's the _best_ way; no doubt a more powerful intelligence could search over a larger space of possible strategies than I could. But yeah, if your goal is to _prevent people from noticing facts about reality_, then preventing them from using words that refer to those facts seems like a pretty effective way to do it! + +On the fourth crux, whether proscribing the use of a category in public communication constitutes "proscribing" in a way that interferes with one's ability to think in the privacy of one's own mind: I think this is mostly true for humans. We're social animals. To the extent that we can do higher-grade cognition at all, we do it using our language faculties that are designed for communicating with others. How are you supposed to think about things that you don't have words for? + +Xu continues: + +> I could have included a fifth and final crux about whether, even _if_ The Thing In Question interfered with rational thinking, that might be worth it; but this I suspect you would not concede, and (being a rationalist) it's not something I'm willing to concede myself, so it's not a crux in a meaningful sense between us (or any two self-proclaimed "rationalists"). +> +> My sense is that you have (thus far, in the parts of the public discussion I've had the opportunity to witness) been behaving as though the _one and only crux in play_—that is, the True Source of Disagreement—has been the fifth crux, the thing I refused to include with the others of its kind. Your accusations against the caliphate _only make sense_ if you believe the dividing line between your behavior and theirs is caused by a disagreement as to whether "rational" thinking is "worth it"; as opposed to, say, what kind of prescriptions "rational" thinking entails, and which (if any) of those prescriptions are violated by using a notion of gender (in public, where you do not know in advance who will receive your communications) that does not cause massive psychological damage to some subset of people. +> +> Perhaps it is your argument that all four of the initial cruxes I listed are false; but even if you believe that, it should be within your set of ponderable hypotheses that people might disagree with you about that, and that they might perceive the disagreement to be _about_ that, rather than (say) about whether subscribing to the Blue Tribe view of gender makes them a Bad Rationalist, but That's Okay because it's Politically Convenient. +> +> This is the sense in which I suspect you are coming across as failing to properly Other-model. + +After everything I've been through over the past six years, I'm inclined to think it's not a "disagreement" at all. + +It's a _conflict_. I think what's actually at issue is that, at least in this domain, I want people to tell the truth, and the Caliphate wants people to not tell the truth. This isn't a disagreement about rationality, because telling the truth _isn't_ rational _if you don't want people to know things_.
+ +At this point, I imagine defenders of the Caliphate are shaking their heads in disappointment at how I'm doubling down on refusing to Other-model. But—_am_ I? Isn't this just a re-statement of Xu's first proposed crux, except reframed as a "values difference" rather than a "disagreement"? + +Is the problem that my use of the phrase "tell the truth" (which has positive valence in our culture) functions to sneak in normative connotations favoring "my side"? + +Fine. Objection sustained. I'm happy to use Xu's language: I think what's actually at issue is that, at least in this domain, I want to facilitate people making inferences (full stop), and the Caliphate wants to _not_ facilitate people making inferences that, on the whole, cause more harm than benefit. This isn't a disagreement about rationality, because facilitating inferences _isn't_ rational _if you don't want people to make inferences_ (for example, because they cause more harm than benefit). + +Better? Perhaps, to some 2022-era rats and EAs, this formulation makes my position look obviously in the wrong: I'm saying that I'm fine with my inferences _causing more harm than benefit_ (!). Isn't that monstrous of me? Why would someone do that? + +One of the better explanations of this that I know of was (again, as usual) authored by Yudkowsky in 2007, in a post titled ["Doublethink (Choosing to be Biased)"](https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased). + +The Yudkowsky of 2007 starts by quoting a passage from George Orwell's _1984_, in which O'Brien (a loyal member of the ruling Party in the totalitarian state depicted in the novel) burns a photograph of Jones, Aaronson, and Rutherford (former Party leaders whose existence has been censored from the historical record). Immediately after burning the photograph, O'Brien denies that it ever existed. + +The Yudkowsky of 2007 continues—it's again worth quoting at length— + +> What if self-deception helps us be happy? What if just running out and overcoming bias will make us—gasp!—_unhappy?_ Surely, _true_ wisdom would be _second-order_ rationality, choosing when to be rational. That way you can decide which cognitive biases should govern you, to maximize your happiness. +> +> Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen. +> +> [...] +> +> For second-order rationality to be genuinely _rational_, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality. If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting. I don't mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads. +> +> You can't know the consequences of being biased, until you have already debiased yourself. And then it is too late for self-deception. +> +> The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is not second-order rationality. It is willful stupidity. +> +> [...] +> +> One of chief pieces of advice I give to aspiring rationalists is "Don't try to be clever." And, "Listen to those quiet, nagging doubts." If you don't know, you don't know _what_ you don't know, you don't know how _much_ you don't know, and you don't know how much you _needed_ to know. +> +> There is no second-order rationality.
There is only a blind leap into what may or may not be a flaming lava pit. Once you _know_, it will be too late for blindness. + +Looking back on this from 2022, the only criticism I have is that the Yudkowsky of 2007 was too optimistic in "doubt[ing] such a lunatic dislocation in the mind could really happen." In some ways, people's actual behavior is _worse_ than what Orwell depicted. The Party of Orwell's _1984_ covers its tracks: O'Brien takes care to burn the photograph _before_ denying memory of it, because it would be _too_ absurd for him to act like the photo had never existed while it was still right there in front of him. + +In contrast, Yudkowsky's Caliphate of the current year _doesn't even bother covering its tracks_. Turns out, it doesn't need to! People just don't remember things! + +The [flexibility of natural language is a _huge_ help here](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). If the caliph were to _directly_ contradict himself in simple, unambiguous language—to go from "Oceania is not at war with Eastasia" to "Oceania is at war with Eastasia" without any acknowledgement that anything had changed—_then_ too many people might notice that those two sentences are the same except that one has the word _not_ in it. What's a caliph to do, if he wants to declare war on Eastasia without acknowledging or taking responsibility for the decision to do so? + +The solution is simple: just—use more words! Then if someone tries to argue that you've _effectively_ contradicted yourself, accuse them of being uncharitable and failing to model the Other. You can't lose! Anything can be consistent with anything if you apply a sufficiently charitable reading; whether Oceania is at war with Eastasia depends on how you choose to draw the category boundaries of "at war." + +Thus, O'Brien should envy Yudkowsky: burning the photograph turns out to be unnecessary! ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) is _still up_ and not retracted, but that didn't stop the Yudkowsky of 2016 from pivoting to ["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228) when that became a politically favorable thing to say. I claim that these posts _effectively_ contradict each other. The former explains _not only_ that men who fantasize about being women are out of luck given foreseeable technology, but _also_ that their desires may not even be coherent (!), whereas the latter claims that men who wish they were women may, in fact, _already_ be women in some unspecified psychological sense. + +_Technically_, these don't _strictly_ contradict each other: I can't point to a pair of sentences, one from each, that are the same except that one includes the word _not_. (And even if there were such sentences, I wouldn't be able to prove that the other words were being used in the same sense in both sentences.) One _could_ try to argue that "Changing Emotions" is addressing cis men with a weird sex-change fantasy, whereas the "ones with penises are actually women" claim was about trans women, which are a different thing. + +_Realistically_ ... no. These two posts _can't_ both be right. In itself, this isn't a problem: people change their minds sometimes, which is great!
But when people _actually_ change their minds (as opposed to merely changing what they say in public for political reasons), you expect them to be able to _acknowledge_ the change, and hopefully explain what new evidence or reasoning brought them around. If they can't even _acknowledge the change_, that's pretty Orwellian, like O'Brien trying to claim that the photograph is of different men who just coincidentally happen to look like Jones, Aaronson, and Rutherford. + +And if a little bit of Orwellianism on specific, narrow, highly-charged topics might be forgiven—because everyone else in your Society is doing it, and you would be punished for not playing along, an [inadequate equilibrium](https://equilibriabook.com/) that no one actor has the power to defy—might we not expect the father of the "rationalists" to stand his ground on the core theses of his ideology, like whether telling the truth is good? + +I guess not! ["Doublethink (Choosing to be Biased)"](https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased) is _still up_ and not retracted, but that didn't stop Yudkowsky from [endorsing everything Xu said](https://twitter.com/ESYudkowsky/status/1436025983522381827) about "whether some categories facilitate inferences that _do_, on the whole, cause more harm than benefit, and if so, whether it is 'rational' to rule that such inferences should be avoided when possible" being different cruxes than "whether 'rational' thinking is 'worth it'". + +I don't doubt Yudkowsky could come up with some clever casuistry for why, _technically_, the text he wrote in 2007 and the text he endorsed in 2021 don't contradict each other. But _realistically_ ... again, no. + +[TODO: elaborate on how 2007!Yudkowsky and 2021!Xu are saying the opposite things if you just take a plain-language reading and consider, not whether individual sentences can be interpreted as "true", but what kind of _optimization_ the text is doing to the behavior of receptive readers] + +On the offhand chance that Eliezer Yudkowsky happens to be reading this—if someone _he_ trusts (MIRI employees?) genuinely thinks it would be good for the lightcone to bring this paragraph to his attention—he should know that if he _wanted_ to win back _some_ of the trust and respect he's lost from me and everyone I can influence—not _all_ of it, but _some_ of it[^some-of-it]—I think it would be really easy. All he would have to do is come clean about the things he's _already_ misled people about. + +[^some-of-it]: Coming clean _after_ someone writes a 70,000-word memoir explaining how dishonest you've been engenders less trust than coming clean spontaneously of your own accord. + +I don't, actually, expect people to spontaneously blurt out everything they believe to be true, that Stalin would find offensive. "No comment" would be fine. Even selective argumentation that's _clearly labeled as such_ would be fine. (There's no shame in being an honest specialist who says, "I've mostly thought about these issues through the lens of ideology _X_, and therefore can't claim to be comprehensive; if you want other perspectives, you'll have to read other authors and think it through for yourself.") + +What's _not_ fine is selective argumentation while claiming "confidence in [your] own ability to independently invent everything important that would be on the other side of the filter and check it [yourself] before speaking" when you _very obviously have done no such thing_. That's _not_ "no comment"!
Having _already_ chosen to comment, he can't reasonably expect any self-respecting rationalist to take his "epistemic hero" bluster seriously if he's not going to reply to the obvious objections that have been made with standing and warrant. To wit— + + * Yudkowsky is _on the record_ [claiming that](https://www.facebook.com/yudkowsky/posts/10154078468809228) "for people roughly similar to the Bay Area / European mix", he is "over 50% probability at this point that at least 20% of the ones with penises are actually women". What ... does that mean? What is the _truth condition_ of the word 'woman' in that sentence? This can't be the claim that 20% of males would benefit from a gender transition, and in that sense _become_ "transsexual women"; the claim stated in the post is that members of this group are _already_ "actually women", "female-minds-in-male-bodies". How does Yudkowsky reconcile this claim with the preponderance of male-typical rather than female-typical behavior in this group (_e.g._, in gynephilic sexual orientation, or [in vocational interests](/2020/Nov/survey-data-on-cis-and-trans-women-among-haskell-programmers/))? On the other hand, if Yudkowsky changed his mind and no longer believes that 20% of Bay Area males of European descent have female brains, can he state that for the public record? _Reply!_ + + * Yudkowsky is _on the record_ [claiming that](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421986539228&reply_comment_id=10159423713134228) he "do[es] not know what it feels like from the inside to feel like a pronoun is attached to something in your head much more firmly than 'doesn't look like an Oliver' is attached to something in your head." As I explained in "Challenges to Yudkowsky's Pronoun Reform Proposal", [quoting examples from Yudkowsky's published writing in which he treated sex and pronouns as synonymous just as one would expect a native American English speaker born in 1979 to do](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/#look-like-an-oliver), this self-report is not plausible. The claim may not have been a "lie" _in the sense_ of Yudkowsky consciously harboring deliberative intent to deceive at the time he typed that sentence, but it _is_ a "lie" in the sense that the claim is _false_ and Yudkowsky _knows_ it's false (although its falsehood may not have been salient in the moment of typing the sentence). If Yudkowsky expects people to believe that he never lies, perhaps he could correct this accidental lie after it's been pointed out? _Reply!_ + + * In a comment on his February 2021 Facebook post on pronoun reform, Yudkowsky [claims that](https://www.facebook.com/yudkowsky/posts/pfbid0331sBqRLBrDBM2Se5sf94JurGRTCjhbmrYnKcR4zHSSgghFALLKCdsG6aFbVF9dy9l?comment_id=10159421833274228&reply_comment_id=10159421901809228) "in a half-Kolmogorov-Option environment where [...] you can get away with attaching explicit disclaimers like this one, it is sometimes personally prudent and not community-harmful to post your agreement with Stalin about things you actually agree with Stalin about, in ways that exhibit generally rationalist principles, especially because people do _know_ they're living in a half-Stalinist environment".
Some interesting potential counterevidence to this "not community-harmful" claim comes in the form of [a highly-upvoted (110 karma at press time) comment by _Less Wrong_ administrator Oliver Habryka](https://www.lesswrong.com/posts/juZ8ugdNqMrbX7x2J/challenges-to-yudkowsky-s-pronoun-reform-proposal?commentId=he8dztSuBBuxNRMSY) on the _Less Wrong_ mirror of my rebuttal. Habryka writes: + +> [...] basically everything in this post strikes me as "obviously true" and I had a very similar reaction to what the OP says now, when I first encountered the Eliezer Facebook post that this post is responding to. +> +> And I do think that response mattered for my relationship to the rationality community. I did really feel like at the time that Eliezer was trying to make my map of the world worse, and it shifted my epistemic risk assessment of being part of the community from "I feel pretty confident in trusting my community leadership to maintain epistemic coherence in the presence of adversarial epistemic forces" to "well, I sure have to at least do a lot of straussian reading if I want to understand what people actually believe, and should expect that depending on the circumstances community leaders might make up sophisticated stories for why pretty obviously true things are false in order to not have to deal with complicated political issues". +> +> I do think that was the right update to make, and was overdetermined for many different reasons, though it still deeply saddens me. + +Again, that's the administrator of Yudkowsky's _own website_ saying that he's deeply saddened that he now expects Yudkowsky to _make up sophisticated stories for why pretty obviously true things are false_ (!!). Is that ... _not_ a form of harm to the community? If that's not community-harmful in Yudkowsky's view, then what would be an example of something that _would_ be? _Reply, motherfucker!_ + +... but I'm not, holding my breath. If Yudkowsky _wants_ to reply—if he _wants_ to try to win back some of the trust and respect he's lost from me—he's totally _welcome_ to. (_I_ don't censor my comment sections of people whom it "looks like it would be unhedonic to spend time interacting with".) + + +[TODO: I've given up talking to the guy (nearest unblocked strategy sniping in Eliezerfic doesn't count), my last email, giving up on hero-worship; I don't want to waste any more of his time. I owe him that much.] + +[TODO: https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe ] + +[TODO: the Death With Dignity era + +"Death With Dignity" isn't really an update; he used to refuse to give a probability, and now he says the probability is ~0 + +/2017/Jan/from-what-ive-tasted-of-desire/ + +] + +[TODO: regrets and wasted time] diff --git a/notes/post_ideas.txt b/notes/post_ideas.txt index 589a820..eeb0935 100644 --- a/notes/post_ideas.txt +++ b/notes/post_ideas.txt @@ -5,6 +5,7 @@ _ Reply to Scott Alexander on Autogenderphilia _ Hrunkner Unnerby and the Shallowness of Progress _ Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer _ A Hill of Validity in Defense of Meaning +_ Agreeing With Stalin in Ways that Exhibit Generally Rationalist Principles _ Why I Don't Trust Eliezer Yudkowsky's Intellectual Honesty