-But, well, I thought I had made a pretty convincing case that a lot of people are making a correctable and important rationality mistake, such that the cost of a correction (about the philosophy of language specifically, not any possible implications for gender politics) would actually be justified here. If someone had put _this much_ effort into pointing out an error _I_ had made four months or five years ago and making careful arguments for why it was important to get the right answer, I think I _would_ put some serious thought into it.
-
-]
-
-
-
-
-[TODO: We lost?! How could we lose??!!?!? And, post-war concessions ...
-
-curation hopes ... 22 Jun: I'm expressing a little bit of bitterness that a mole rats post got curated https://www.lesswrong.com/posts/fDKZZtTMTcGqvHnXd/naked-mole-rats-a-case-study-in-biological-weirdness
-
-"Univariate fallacy" also a concession
-https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/
-"Yes Requires the Possibility of No" 19 May https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019
-scuffle on LessWrong FAQ 31 May https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for#iqEEme6M2JmZEXYAk
-
-]
-
-Since arguing at the object level had failed (["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), ["Reply on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/)), and arguing at the strictly meta level had failed (["... Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries)), the obvious thing to do next was to jump up to the meta-meta level and tell the story about why the "rationalists" were Dead To Me now, that [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) was not being met. (Just like Ben had suggested in December and in April.)
-
-I had trouble making progress on it. I felt—constrained. I didn't know how to tell the story without (as I perceived it) escalating personal conflicts or leaking info from private conversations. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise "the rationalists" collectively, and—more philosophy-of-language blogging!
-
-
-[TODO 2019 activities—
-"Schelling Categories" Aug 2019, "Maybe Lying Doesn't Exist" Oct 2019, "Algorithms of Deception!" Oct 2019, "Heads I Win" Sep 2019, "Firming Up ..." Dec 2019
-"epistemic defense" meeting
-
-bitter comments about rationalists—
-https://www.greaterwrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review-posts-need-at-least-2-nominations/comment/d4RrEizzH85BdCPhE
-
-]
-
-[TODO section on factional conflict:
-Michael on Anna as cult leader
-Jessica told me about her time at MIRI (link to Zoe-piggyback and Occupational Infohazards)
-24 Aug: I had told Anna about Michael's "enemy combatants" metaphor, and how I originally misunderstood
-me being regarded as Michael's pawn
-assortment of agendas
-mutualist pattern where Michael by himself isn't very useful for scholarship (he just says a lot of crazy-sounding things and refuses to explain them), but people like Sarah and me can write intelligible things that secretly benefited from much less legible conversations with Michael.
-]
-
-[TODO: Yudkowsky throwing NRx under the bus; tragedy of recursive silencing
-15 Sep Glen Weyl apology
-]
-
-
-
-In November, I received an interesting reply on my philosophy-of-categorization thesis from MIRI researcher Abram Demski. Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences? Agents categorize in order to make decisions. The best classifier for an application depends on the costs and benefits. As a classic example, it's very important for evolved prey animals to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator.
-
-I had thought of the "false-positives are better than false-negatives when detecting predators" example as being about the limitations of evolution as an AI designer: messy evolved animal brains don't bother to track probability and utility separately the way a cleanly-designed AI could. As I had explained in "... Boundaries?", it made sense for _what_ variables you paid attention to, to be motivated by consequences. But _given_ the subspace that's relevant to your interests, you want to run an epistemically legitimate clustering algorithm on the data you see there, which depends on the data, not your values. Value-dependent gerrymandered category boundaries only seem like a good idea if you're not careful about philosophy, because they amount to _wireheading_. Ideal probabilistic beliefs shouldn't depend on consequences.
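
To make the cost-benefit point concrete, here's a toy sketch (the utilities are invented for illustration; nothing like this appeared in the actual exchange) of how a cleanly-designed agent can reproduce the trigger-happy jumping behavior while keeping its _beliefs_ calibrated: the asymmetric costs do all the work at decision time.

```python
# Toy model (invented utilities): tracking probability and utility
# separately still yields "jump at every rustle" behavior, because the
# cost of being eaten dwarfs the cost of a false alarm.

COST_OF_FLEEING = -1.0        # energy wasted on a false alarm
COST_OF_BEING_EATEN = -1000.0

def should_flee(p_predator):
    """Flee iff the expected utility of fleeing beats that of staying."""
    eu_flee = COST_OF_FLEEING
    eu_stay = p_predator * COST_OF_BEING_EATEN
    return eu_flee > eu_stay

# Even a 1% chance of a predator justifies jumping away ...
assert should_flee(0.01)
# ... while the probability estimate itself stays honest; only the
# action is biased toward false positives.
assert not should_flee(0.0005)
```

In this sketch, the decision threshold (here, _p_ = 0.001) lives in the utility function, not in the belief.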
-
-Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is _because_ probabilistic reasoning is broadly useful: epistemology can be _derived_ from instrumental concerns. He agreed that severe wireheading issues _potentially_ arise if you allow consequentialist concerns to affect your epistemics—
-
-But the alternative view had its own problems. If your AI consists of a consequentialist module that optimizes for utility in the world, and an epistemic module that optimizes for the accuracy of its beliefs, that's _two_ agents, not one: how could that be reflectively coherent? You could, perhaps, bite the bullet here, accepting that consequentialism doesn't tile and that wireheading is inevitable. On this view, Abram explained, "Agency is an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises." Whether this view was ultimately tenable or not, this did show that trying to forbid appeals-to-consequences entirely led to strange places. I didn't immediately have an answer for Abram, but I was grateful for the engagement. (Abram was clearly addressing the real philosophical issues, and not just trying to mess with me the way almost everyone else in Berkeley was.)
-
-Also in November, I wrote to Ben about how I was still stuck on writing the grief-memoir. My _plan_ had been that it should have been possible to tell the story of the Category War while glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly call them intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water) and not be a mentally-dominated cultist.
-
-The reason it _should_ be safe to write is that Explaining Things is Good. It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to carefully _tell the true story_ about why, as a matter of cause-and-effect, I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, _&c_."
-
-So why couldn't I write? Was it that I didn't know how to make "This is not a social attack" credible? Maybe because ... it wasn't true?? I was afraid that telling a story about our leader being intellectually dishonest was "the nuclear option" in a way that I couldn't credibly cancel with "But I'm just telling a true story about a thing that was important to me that actually happened" disclaimers. If you're slowly-but-surely gaining territory in a conventional war, _suddenly_ escalating to nukes seems pointlessly destructive. This metaphor is horribly non-normative ([arguing is not a punishment!](https://srconstantin.github.io/2018/12/15/argue-politics-with-your-best-friends.html); carefully telling a true story _about_ an argument is not a nuke!), but I didn't know how to make it stably go away.
-
-A more motivationally-stable compromise would be to try to split off whatever _generalizable insights_ would have been part of the story into their own posts that don't make it personal. ["Heads I Win, Tails?—Never Heard of Her"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) had been a huge success as far as I was concerned, and I could do more of that kind of thing, analyzing the social stuff I was worried about, without making it personal, even if, secretly, it actually was personal.
-
-Ben replied that it didn't seem like it was clear to me that I was a victim of systemic abuse, and that I was trying to figure out whether I was being fair to my abuser. He thought if I could internalize that, I would be able to forgive myself a lot of messiness, which would reduce the perceived complexity of the problem.
-
-I said I would bite that bullet: yes! Yes, I was trying to figure out whether I was being fair to my abusers, and it was an important question to get right! "Other people's lack of standards harmed me, therefore I don't need to hold myself to standards in my response because I have [extenuating circumstances](https://www.lesswrong.com/posts/XYrcTJFJoYKX2DxNL/extenuating-circumstances)" would be a _lame excuse_.
-
-(This seemed correlated with the recurring stalemated disagreement within our coordination group, where Michael/Ben/Jessica would say, "Fraud, if that word _ever_ meant anything", and while I agreed that they were pointing to an important way in which things were messed up, I was still sympathetic to the Caliphate-defender's reply that the Vassarite usage of "fraud" was motte-and-baileying between vastly different senses of _fraud_; I wanted to do _more work_ to formulate a _more precise theory_ of the psychology of deception to describe exactly how things are messed up in a way that wouldn't be susceptible to the motte-and-bailey charge.)
-
-[TODO: a culture that has gone off the rails; my warning points to Vaniver]
-
-[TODO: plan to reach out to Rick]
-
-[TODO:
-Scott replies on 21 December https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist?commentId=LJp2PYh3XvmoCgS6E
-
-> since these are not about factual states of the world (eg what the definition of "lie" REALLY is, in God's dictionary) we have nothing to make those decisions on except consequences
-
-I snapped https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist?commentId=xEan6oCQFDzWKApt7
-
-Christmas party
-playing on a different chessboard
-people reading funny GPT-2 quotes
-
-A MIRI researcher sympathetically told me that it would be sad if I had to leave the Bay Area, which I thought was nice. There was nothing about the immediate conversational context to suggest that I might have to leave the Bay, but I guess by this point, my existence had become a context.
-
-motivation deflates after Christmas victory
-5 Jan memoir as nuke
-]
-
-
-There's another extremely important part of the story that _would_ fit around here chronologically, but I again find myself constrained by privacy norms: everyone's common sense of decency (this time, even including my own) screams that it's not my story to tell.
-
-Here I again need to make a digression about privacy norms. Adherence to norms is fundamentally fraught for the same reason AI alignment is. That is, in [rich domains](https://arbital.com/p/rich_domain/), explicit constraints on behavior face a lot of adversarial pressure from optimizers bumping up against the constraint. The intent of privacy norms that restrict what you're allowed to say is to conceal information. But _information_ in Shannon's sense is about what states of the world can be inferred given the states of communication signals; it's much more expansive than what we would colloquially think of as the "content" of a message.
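
As a toy calculation of how expansive that is (my own illustration, not anything from the conversations in question): if a speaker glomarizes only about sensitive topics, the bare fact of the refusal transmits information, even though the refusal's colloquial "content" is empty.

```python
from math import log2

def mutual_information(joint):
    """I(X; Y) in bits, computed from a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two equally likely world-states; the speaker narrates freely in one,
# and says only "no comment" in the other.
joint = {("ordinary", "narrates"): 0.5,
         ("sensitive", "no comment"): 0.5}

# The refusal has no "content", but an observer learns a full bit:
assert mutual_information(joint) == 1.0
```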
-
-
-
-[TODO: "Autogenderphilia Is Common"]
-
-[TODO: help from Jessica for "Unnatural Categories"]
-
-[TODO: 2 June, I send an email to Cade Metz, who had DMed me on Twitter
-https://slatestarcodex.com/2020/09/11/update-on-my-situation/
-]
-
-[TODO: "out of patience" email]
-[TODO: Sep 2020 categories clarification from EY—victory?!
-https://www.facebook.com/yudkowsky/posts/10158853851009228
-]
-
-[TODO: briefly mention breakup with Vassar group]
-
-[TODO: "Unnatural Categories Are Optimized for Deception"
-
-Abram was right
-
-the fact that it didn't means that not tracking it can be an effective AI design! Just because evolution takes shortcuts that human engineers wouldn't doesn't mean shortcuts are "wrong" (instead, there are laws governing which kinds of shortcuts work).
-
-Embedded agency means that the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'my own' code." In that light, it makes sense to regard "have accurate beliefs" as merely a convergent instrumental subgoal, rather than what rationality is about
-
-somehow accuracy seems more fundamental than power or resources ... could that be formalized?
-]
-
-
-And really, that _should_ have been the end of the story. At the trifling cost of two years of my life, we finally got a clarification from Yudkowsky that you can't define the word _woman_ any way you like. I didn't think I was entitled to anything more than that. I was satisfied. I still published "Unnatural Categories Are Optimized for Deception" in January 2021, but if I hadn't been further provoked, I wouldn't have had occasion to continue waging the robot-cult religious civil war.
-
-[TODO: NYT affair and Brennan link
-https://astralcodexten.substack.com/p/statement-on-new-york-times-article
-https://reddragdiva.tumblr.com/post/643403673004851200/reddragdiva-topher-brennan-ive-decided-to-say
-https://www.facebook.com/yudkowsky/posts/10159408250519228
-
-]
-
-... except that Yudkowsky reopened the conversation in February 2021, with [a new Facebook post](https://www.facebook.com/yudkowsky/posts/10159421750419228) explaining the origins of his intuitions about pronoun conventions and concluding that, "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he", with a default for those-who-haven't-asked that goes by gamete size' and to say that this just _is_ the normative definition. Because it is _logically rude_, not just socially rude, to try to bake any other more complicated and controversial definition _into the very language protocol we are using to communicate_."
-
-(_Why?_ Why reopen the conversation, from the perspective of his chessboard? Wouldn't it be easier to just stop digging?)
-
-I explained what's wrong with Yudkowsky's new arguments at the length of 12,000 words in March 2022's ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/), but I find myself still having more left to analyze. The February 2021 post on pronouns is a _fascinating_ document, in its own way—a penetrating case study on the effects of politics on a formerly great mind.
-
-Yudkowsky begins by setting the context of "[h]aving received a bit of private pushback" on his willingness to declare that asking someone to use a different pronoun is not lying.
-
-But ... the _reason_ he got a bit ("a bit") of private pushback was _because_ the original "hill of meaning" thread was so blatantly optimized to intimidate and delegitimize people who want to use language to reason about biological sex. The pushback wasn't about using trans people's preferred pronouns (I do that, too), or about not wanting pronouns to imply sex (sounds fine, if we were in the position of defining a conlang from scratch); the _problem_ is using an argument that's ostensibly about pronouns to sneak in an implicature ("Who competes in sports segregated around an Aristotelian binary is a policy question [ ] that I personally find very humorous") that it's dumb and wrong to want to talk about the sense in which trans women are male and trans men are female, as a _fact about reality_ that continues to be true even if it hurts someone's feelings, and even if policy decisions made on the basis of that fact are not themselves a fact (as if anyone had doubted this).
-
-In that context, it's revealing that in this post attempting to explain why the original thread seemed like a reasonable thing to say, Yudkowsky ... doubles down on going out of his way to avoid acknowledging the reality of biological sex. He learned nothing! We're told that the default pronoun for those who haven't asked goes by "gamete size."
-
-But ... I've never _measured_ how big someone's gametes are, have you? We can only _infer_ whether strangers' bodies are configured to produce small or large gametes by observing [a variety of correlated characteristics](https://en.wikipedia.org/wiki/Secondary_sex_characteristic). Furthermore, for trans people who don't pass but are visibly trying to, one presumes that we're supposed to use the pronouns corresponding to their gender presentation, not their natal sex.
-
-Thus, Yudkowsky's "default for those-who-haven't-asked that goes by gamete size" clause _can't be taken literally_. The only way I can make sense of it is to interpret it as a way to point at the prevailing reality that people are good at noticing what sex other people are, but that we want to be kind to people who are trying to appear to be the other sex, without having to admit to it.
-
-One could argue that this is hostile nitpicking on my part: that the use of "gamete size" as a metonym for sex here is either an attempt to provide an unambiguous definition (because if you said _female_ or _male sex_, someone could ask what you meant by that), or that it's at worst a clunky choice of words, not an intellectually substantive decision that can be usefully critiqued.
-
-But the claim that Yudkowsky is only trying to provide an unambiguous definition isn't consistent with the text's claim that "[i]t would still be logically rude to demand that other people use only your language system and interpretation convention in order to communicate, in advance of them having agreed with you about the clustering thing". And the post also seems to suggest that the motive isn't to avoid ambiguity. Yudkowsky writes:
-
-> In terms of important things? Those would be all the things I've read—from friends, from strangers on the Internet, above all from human beings who are people—describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate, or perhaps at all.
->
-> And I'm not happy that the very language I use, would try to force me to take a position on that; not a complicated nuanced position, but a binarized position, _simply in order to talk grammatically about people at all_.
-
-What does the "tossed into a bucket" metaphor refer to, though? I can think of many different things that might be summarized that way, and my sympathy for the one who does not like to be tossed into a bucket depends a lot on exactly what real-world situation is being mapped to the bucket.
-
-If we're talking about overt _gender role enforcement attempts_—things like, "You're a girl, therefore you need to learn to keep house for your future husband", or "You're a man, therefore you need to toughen up"—then indeed, I strongly support people who don't want to be tossed into that kind of bucket.
-
-(There are [historical reasons for the buckets to exist](/2020/Jan/book-review-the-origins-of-unfairness/), but I'm eager to bet on modern Society being rich enough and smart enough to either forgo the buckets, or at least let people opt-out of the default buckets, without causing too much trouble.)
-
-But importantly, my support for people not wanting to be tossed into gender role buckets is predicated on their reasons for not wanting that _having genuine merit_—things like "The fact that I'm a juvenile female human doesn't mean I'll have a husband; I'm actually planning to become a nun", or "The sex difference in Big Five Neuroticism is only _d_ ≈ 0.5; your expectation that I be able to toughen up is not reasonable given the information you have about me in particular, even if most adult human males are tougher than me". I _don't_ think people have a _general_ right to prevent others from using sex categories to make inferences or decisions about them, _because that would be crazy_. If a doctor were to recommend I get a prostate cancer screening on account of my being male and therefore at risk for prostate cancer, it would be _bonkers_ for me to reply that I don't like being tossed into a Male Bucket like that.
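
To put a number on that _d_ ≈ 0.5 (my arithmetic, offered as a sanity check rather than as part of the original argument): under equal-variance normal distributions, the probability that a randomly drawn member of the higher-scoring group outscores a randomly drawn member of the lower-scoring group is Φ(d/√2), only about 64% here. That's a real population-level difference, and a weak basis for confident expectations about any particular individual.

```python
from math import erf

def p_superiority(d):
    """P(draw from higher-mean normal > draw from lower-mean normal),
    given standardized mean difference d and equal variances.
    This is Phi(d / sqrt(2)), which simplifies to 0.5 * (1 + erf(d / 2))."""
    return 0.5 * (1 + erf(d / 2))

assert p_superiority(0) == 0.5                # no difference: pure chance
assert round(p_superiority(0.5), 2) == 0.64   # d = 0.5: a modest edge
```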
-
-While piously appealing to the feelings of people describing reasons they do not want to be tossed into a Male Bucket or a Female Bucket, Yudkowsky does not seem to be distinguishing between reasons that have merit, and reasons that do not have merit. The post continues (bolding mine):
-
-> In a wide variety of cases, sure, ["he" and "she"] can clearly communicate the unambiguous sex and gender of something that has an unambiguous sex and gender, much as a different language might have pronouns that sometimes clearly communicated hair color to the extent that hair color often fell into unambiguous clusters.
->
-> But if somebody's hair color is halfway between two central points? If their civilization has developed stereotypes about hair color they're not comfortable with, such that they feel that the pronoun corresponding to their outward hair color is something they're not comfortable with because they don't fit key aspects of the rest of the stereotype and they feel strongly about that? If they have dyed their hair because of that, or **plan to get hair surgery, or would get hair surgery if it were safer but for now are afraid to do so?** Then it's stupid to try to force people to take complicated positions about those social topics _before they are allowed to utter grammatical sentences_.
-
-So, I agree that a language convention in which pronouns map to hair color doesn't seem great, and that the people in this world should probably coordinate on switching to a better convention, if they can figure out how.
-
-But taking as given the existence of a convention in which pronouns refer to hair color, a demand to be referred to as having a hair color _that one does not in fact have_ seems pretty outrageous to me!
-
-It makes sense to object to the convention forcing a binary choice in the "halfway between two central points" case. That's an example of _genuine_ nuance brought on by a _genuine_ challenge to a system that _falsely_ assumes discrete hair colors.
-
-But ... "plan to get hair surgery"? "Would get hair surgery if it were safer but for now are afraid to do so"? In what sense do these cases present a challenge to the discrete system and therefore call for complication and nuance? There's nothing ambiguous about these cases: if you haven't, in fact, changed your hair color, then your hair is, in fact, its original color. The decision to get hair surgery does not _propagate backwards in time_. The decision to get hair surgery cannot be _imported from a counterfactual universe in which it is safer_. People who, today, do not have the hair color that they would prefer, are, today, going to have to deal with that fact _as a fact_.
-
-Is the idea that we want to use the same pronouns for the same person over time, so that if we know someone is going to get hair surgery—they have an appointment with the hair surgeon at this-and-such date—we can go ahead and switch their pronouns in advance? Okay, I can buy that.
-
-But extending that to the "would get hair surgery if it were safer" case is _absurd_. No one treats _conditional plans assuming speculative future advances in medical technology_ the same as actual plans. I don't think this case calls for any complicated nuanced position, and I don't see why Eliezer Yudkowsky would suggest that it would, unless the real motive for insisting on complication and nuance is as an obfuscation tactic—unless, at some level, Eliezer Yudkowsky doesn't expect his followers to deal with facts?
-
-Maybe the problem is easier to see in the context of a non-gender example. [My previous hopeless ideological war—before this one—was against the conflation of _schooling_ and _education_](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/): I _hated_ being tossed into the Student Bucket, as it would be assigned by my school course transcript, or perhaps at all.
-
-I sometimes describe myself as "gender dysphoric", because our culture doesn't have better widely-understood vocabulary for my beautiful pure sacred self-identity thing, but if we're talking about suffering and emotional distress, my "student dysphoria" was _vastly_ worse than any "gender dysphoria" I've ever felt.
-
-But crucially, my tirades against the Student Bucket described reasons not just that _I didn't like it_, but reasons that the bucket was _actually wrong on the empirical merits_: people can and do learn important things by studying and practicing out of their own curiosity and ambition; the system was _actually in the wrong_ for assuming that nothing you do matters unless you do it on the command of a designated "teacher" while enrolled in a designated "course".
-
-And _because_ my war footing was founded on the empirical merits, I knew that I had to _update_ to the extent that the empirical merits showed that I was in the wrong. In 2010, I took a differential equations class "for fun" at the local community college, expecting to do well and thereby prove that my previous couple years of math self-study had been the equal of any school student's.
-
-In fact, I did very poorly and scraped by with a _C_. (Subjectively, I felt like I "understood the concepts", and kept getting surprised when that understanding somehow didn't convert into passing quiz scores.) That hurt. That hurt a lot.
-
-_It was supposed to hurt_. One could imagine a Jane Austen character in this situation doubling down on his antagonism to everything school-related, in order to protect himself from being hurt—to protest that the teacher hated him, that the quizzes were unfair, that the answer key must have had a printing error—in short, that he had been right in every detail all along, and that any suggestion otherwise was credentialist propaganda.
-
-I knew better than to behave like that—and to the extent that I was tempted, I retained my ability to notice and snap out of it. My failure _didn't_ mean I had been wrong about everything, that I should humbly resign myself to the Student Bucket forever and never dare to question it again—but it _did_ mean that I had been wrong about _something_. I could [update myself incrementally](https://www.lesswrong.com/posts/627DZcvme7nLDrbZu/update-yourself-incrementally)—but I _did_ need to update. (The update was probably that "math" encompasses different subskills, and that my glorious self-study had unevenly trained some skills and not others: there was nothing contradictory about my [successfully generalizing one of the methods in the textbook to arbitrary numbers of variables](https://math.stackexchange.com/questions/15143/does-the-method-for-solving-exact-des-generalize-like-this), while _also_ [struggling with the class's assigned problem sets](https://math.stackexchange.com/questions/7984/automatizing-computational-skills).)
-
-Someone who uncritically validated my not liking to be tossed into the Student Bucket, instead of assessing my _reasons_ for not liking to be tossed into the Bucket and whether those reasons had merit, would be hurting me, not helping me—because in order to navigate the real world, I need a map that reflects the territory, rather than my narcissistic fantasies. I'm a better person for straightforwardly facing the shame of getting a _C_ in community college differential equations, rather than trying to deny it or run away from it or claim that it didn't mean anything. Part of updating myself incrementally was that I would get _other_ chances to prove that my autodidacticism _could_ match the standard set by schools. (My professional and open-source programming career obviously does not owe itself to the two Java courses I took at community college. When I audited honors analysis at UC Berkeley "for fun" in 2017, I did fine on the midterm. When applying for a new dayjob in 2018, the interviewer, noting my lack of a degree, said he was going to give a version of the interview without a computer science theory question. I insisted on being given the "college" version of the interview, solved a dynamic programming problem, and got the job. And so on.)
-
-If you can see why uncritically affirming people's current self-image isn't the right solution to "student dysphoria", it _should_ be obvious why the same is true of gender dysphoria. There's a very general underlying principle, that it matters whether someone's current self-image is actually true.
-
-In an article titled ["Actually, I Was Just Crazy the Whole Time"](https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole), FtMtF detransitioner Michelle Alleva contrasts her beliefs at the time of deciding to transition, with her current beliefs. While transitioning, she accounted for many pieces of evidence about herself ("dislike attention as a female", "obsessive thinking about gender", "didn't fit in with the girls", _&c_.) in terms of the theory "It's because I'm trans." But now, Alleva writes, she thinks she has a variety of better explanations that, all together, cover everything on the original list: "It's because I'm autistic", "It's because I have unresolved trauma", "It's because women are often treated poorly" ... including "That wasn't entirely true" (!!).
-
-This is a _rationality_ skill. Alleva had a theory about herself, and then she _revised her theory upon further consideration of the evidence_. Beliefs about one's self aren't special and can—must—be updated using the _same_ methods that you would use to reason about anything else—[just as a recursively self-improving AI would reason the same about transistors "inside" the AI and transistors in "the environment."](https://www.lesswrong.com/posts/TynBiYt6zg42StRbb/my-kind-of-reflection)
-
-(Note, I'm specifically praising the _form_ of the inference, not necessarily the conclusion to detransition. If someone else in different circumstances weighed up the evidence about _them_-self, and concluded that they _are_ trans in some _specific_ objective sense on the empirical merits, that would _also_ be exhibiting the skill. For extremely sex-role-nonconforming same-natal-sex-attracted transsexuals, you can at least see why the "born in the wrong body" story makes some sense as a handwavy [first approximation](/2022/Jul/the-two-type-taxonomy-is-a-useful-approximation-for-a-more-detailed-causal-model/). It's just that for males like me, and separately for females like Michelle Alleva, the story doesn't add up.)
-
-This also isn't a particularly _advanced_ rationality skill. This is very basic—something novices should grasp during their early steps along the Way.
-
-Back in 'aught-nine, in the early days of _Less Wrong_, when I still hadn't grown out of [my teenage religion of psychological sex differences denialism](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism), there was an exchange in the comment section between me and Yudkowsky that still sticks with me. Yudkowsky had claimed that he had ["never known a man with a true female side, and [...] never known a woman with a true male side, either as authors or in real life."](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/K8YXbJEhyDwSusoY2) Offended at our leader's sexism, I passive-aggressively [asked him to elaborate](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way?commentId=AEZaakdcqySmKMJYj), and as part of [his response](https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/W4TAp4LuW3Ev6QWSF), he mentioned that he "sometimes wish[ed] that certain women would appreciate that being a man is at least as complicated and hard to grasp and a lifetime's work to integrate, as the corresponding fact of feminity [_sic_]."
-
-[I replied](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/7ZwECTPFTLBpytj7b) (bolding added):
-
-> I sometimes wish that certain men would appreciate that not all men are like them—**or at least, that not all men _want_ to be like them—that the fact of masculinity is [not _necessarily_ something to integrate](https://www.lesswrong.com/posts/vjmw8tW6wZAtNJMKo/which-parts-are-me).**
-
-_I knew_. Even then, _I knew_ I had to qualify my not liking to be tossed into a Male Bucket. I could object to Yudkowsky speaking as if men were a collective with shared normative ideals ("a lifetime's work to integrate"), but I couldn't claim to somehow not be male, or _even_ that people couldn't make probabilistic predictions about me given the fact that I'm male ("the fact of masculinity"), _because that would be crazy_. The culture of early _Less Wrong_ wouldn't have let me get away with that.
-
-It would seem that in the current year, that culture is dead—or at least, if it does have any remaining practitioners, they do not include Eliezer Yudkowsky.
-
-At this point, some people would argue that I'm being too uncharitable in harping on the "not liking to be tossed into a [...] Bucket" paragraph. The same post does also explicitly say that "[i]t's not that no truth-bearing propositions about these issues can possibly exist." I agree that there are some interpretations of "not lik[ing] to be tossed into a Male Bucket or Female Bucket" that make sense, even though biological sex denialism does not make sense. Given that the author is Eliezer Yudkowsky, should I not assume that he "really meant" to communicate the reading that does make sense, rather than the one that doesn't make sense?
-
-I reply: _given that the author is Eliezer Yudkowsky_, no, obviously not. Yudkowsky is just _too talented of a writer_ for me to excuse his words as an artifact of unclear writing. Where the text is ambiguous about whether biological sex is a real thing that people should be able to talk about, I think it's _deliberately_ ambiguous. When smart people act dumb, [it's often wise to conjecture that their behavior represents _optimized_ stupidity](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie)—apparent "stupidity" that achieves a goal through some other channel than their words straightforwardly reflecting the truth. Someone who was _actually_ stupid wouldn't be able to generate text with a specific balance of insight and selective stupidity fine-tuned to reach a gender-politically convenient conclusion without explicitly invoking any controversial gender-political reasoning. The point of the post is to pander to the biological sex denialists in his robot cult, without technically saying anything unambiguously false that someone could point out as a "lie."
-
-Consider the implications of Yudkowsky giving us a clue as to the political forces at play in the form of [a disclaimer comment](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421833274228):
-
-> It unfortunately occurs to me that I must, in cases like these, disclaim that—to the extent there existed sensible opposing arguments against what I have just said—people might be reluctant to speak them in public, in the present social atmosphere. That is, in the logical counterfactual universe where I knew of very strong arguments against freedom of pronouns, I would have probably stayed silent on the issue, as would many other high-profile community members [...]
->
-> This is a filter affecting your evidence; it has not to my own knowledge filtered out a giant valid counterargument that invalidates this whole post. I would have kept silent in that case, for to speak then would have been dishonest.
->
-> Personally, I'm used to operating without the cognitive support of a civilization in controversial domains, and have some confidence in my own ability to independently invent everything important that would be on the other side of the filter and check it myself before speaking. So you know, from having read this, that I checked all the speakable and unspeakable arguments I had thought of, and concluded that this speakable argument would be good on net to publish, as would not be the case if I knew of a stronger but unspeakable counterargument in favor of Gendered Pronouns For Everyone and Asking To Leave The System Is Lying.
->
-> But the existence of a wide social filter like that should be kept in mind; to whatever quantitative extent you don't trust your ability plus my ability to think of valid counterarguments that might exist, as a Bayesian you should proportionally update in the direction of the unknown arguments you speculate might have been filtered out.
-
-So, the explanation of [the problem of political censorship filtering evidence](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) here is great, but the part where Yudkowsky claims "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter" is just _laughable_. My point that _she_ and _he_ have existing meanings that you can't just ignore by fiat given that the existing meanings are _exactly_ what motivate people to ask for new pronouns in the first place is _really obvious_.
-
-Really, it would be _less_ embarrassing for Yudkowsky if he were outright lying about having tried to think of counterarguments. The original post isn't _that_ bad if you assume that Yudkowsky was writing off the cuff, that he just _didn't put any effort whatsoever_ into thinking about why someone might disagree. If he _did_ put in the effort—enough that he felt comfortable bragging about his ability to see the other side of the argument—and _still_ ended up proclaiming his "simplest and best protocol" without even so much as _mentioning_ any of its incredibly obvious costs ... that's just _pathetic_. If Yudkowsky's ability to explore the space of arguments is _that_ bad, why would you trust his opinion about _anything_?
-
-The disclaimer comment mentions "speakable and unspeakable arguments"—but what, one wonders, is the boundary of the "speakable"? In response to a commenter mentioning the cost of having to remember pronouns as a potential counterargument, Yudkowsky [offers us another clue](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421833274228&reply_comment_id=10159421871809228):
-
-> People might be able to speak that. A clearer example of a forbidden counterargument would be something like e.g. imagine if there was a pair of experimental studies somehow proving that (a) everybody claiming to experience gender dysphoria was lying, and that (b) they then got more favorable treatment from the rest of society. We wouldn't be able to talk about that. No such study exists to the best of my own knowledge, and in this case we might well hear about it from the other side to whom this is the exact opposite of unspeakable; but that would be an example.
-
-(As an aside, the wording of "we might well hear about it from _the other side_" (emphasis mine) is _very_ interesting, suggesting that the so-called "rationalist" community is, effectively, a partisan institution, despite its claims to be about advancing the generically human art of systematically correct reasoning.)
-
-I think (a) and (b) _as stated_ are clearly false, so "we" (who?) fortunately aren't losing much by allegedly not being able to speak them. But what about some _similar_ hypotheses, that might be similarly unspeakable for similar reasons?
-
-Instead of (a), consider the claim that (a′) self-reports about gender dysphoria are substantially distorted by [socially-desirable responding tendencies](https://en.wikipedia.org/wiki/Social-desirability_bias)—as a notable and common example, heterosexual males with [sexual fantasies about being female](http://www.annelawrence.com/autogynephilia_&_MtF_typology.html) [often falsely deny or minimize the erotic dimension of their desire to change sex](/papers/blanchard-clemmensen-steiner-social_desirability_response_set_and_systematic_distortion.pdf). (The idea that self-reports can be motivatedly inaccurate without the subject consciously "lying" should not be novel to someone who co-blogged with [Robin Hanson](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain) for years!)
-
-And instead of (b), consider the claim that (b′) transitioning is socially rewarded within particular _subcultures_ (although not Society as a whole), such that many of the same people wouldn't think of themselves as trans or even gender-dysphoric if they lived in a different subculture.
-
-I claim that (a′) and (b′) are _overwhelmingly likely to be true_. Can "we" talk about _that_? Are (a′) and (b′) "speakable", or not? We're unlikely to get clarification from Yudkowsky, but based on the Whole Dumb Story I've been telling you about how I wasted the last six years of my life on this, I'm going to _guess_ that the answer is broadly No: no, "we" can't talk about that. (_I_ can say it, and people can debate me in a private Discord server where the general public isn't looking, but it's not something someone of Yudkowsky's stature can afford to acknowledge.)
-
-But if I'm right that (a′) and (b′) should be live hypotheses and that Yudkowsky would consider them "unspeakable", that means "we" can't talk about what's _actually going on_ with gender dysphoria and transsexuality, which puts the whole discussion in a different light. In another comment, Yudkowsky lists some gender-transition interventions he named in the [November 2018 "hill of meaning in defense of validity" Twitter thread](https://twitter.com/ESYudkowsky/status/1067183500216811521)—using a different bathroom, changing one's name, asking for new pronouns, and getting sex reassignment surgery—and notes that none of these are calling oneself a "woman". [He continues](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421986539228&reply_comment_id=10159424960909228):
-
-> [Calling someone a "woman"] _is_ closer to the right sort of thing _ontologically_ to be true or false. More relevant to the current thread, now that we have a truth-bearing sentence, we can admit of the possibility of using our human superpower of language to _debate_ whether this sentence is indeed true or false, and have people express their nuanced opinions by uttering this sentence, or perhaps a more complicated sentence using a bunch of caveats, or maybe using the original sentence uncaveated to express their belief that this is a bad place for caveats. Policies about who uses what bathroom also have consequences and we can debate the goodness or badness (not truth or falsity) of those policies, and utter sentences to declare our nuanced or non-nuanced position before or after that debate.
->
-> Trying to pack all of that into the pronouns you'd have to use in step 1 is the wrong place to pack it.
-
-Sure, _if we were in the position of designing a constructed language from scratch_ under current social conditions in which a person's "gender" is understood as a contested social construct, rather than their sex being an objective and undisputed fact, then yeah: in that situation _which we are not in_, you definitely wouldn't want to pack sex or gender into pronouns. But it's a disingenuous derailing tactic to grandstand about how people need to alter the semantics of their _already existing_ native language so that we can discuss the real issues under an allegedly superior pronoun convention when, _by your own admission_, you have _no intention whatsoever of discussing the real issues!_
-
-(Lest the "by your own admission" clause seem too accusatory, I should note that given constant behavior, admitting it is _much_ better than not-admitting it; so, huge thanks to Yudkowsky for the transparency on this point!)
-
-Again, as discussed in "Challenges to Yudkowsky's Pronoun Reform Proposal", a comparison to [the _tú_/_usted_ distinction](https://en.wikipedia.org/wiki/Spanish_personal_pronouns#T%C3%BA/vos_and_usted) is instructive. It's one thing to advocate for collapsing the distinction and just settling on one second-person singular pronoun for the Spanish language. That's principled.
-
-It's quite another thing altogether to _simultaneously_ try to prevent a speaker from using _tú_ to indicate disrespect towards a social superior (on the stated rationale that the _tú_/_usted_ distinction is dumb and shouldn't exist), while _also_ refusing to entertain or address the speaker's arguments explaining _why_ they think their interlocutor is unworthy of the deference that would be implied by _usted_ (because such arguments are "unspeakable" for political reasons). That's just psychologically abusive.