-[TODO: pandemic starts]
-
-[TODO: "Autogenderphilia Is Common" https://slatestarcodex.com/2020/02/10/autogenderphilia-is-common-and-not-especially-related-to-transgender/]
-
-[TODO: help from Jessica for "Unnatural Categories"]
-
-[TODO: 2 June, I send an email to Cade Metz, who had DMed me on Twitter
-https://slatestarcodex.com/2020/09/11/update-on-my-situation/
-]
-
-[TODO: "out of patience" email]
-
-> To: Eliezer Yudkowsky <[redacted]>
-> Cc: Anna Salamon <[redacted]>
-> Date: Sunday 13 September 2020 2:24 _a.m._
-> Subject: out of patience
->
->> "I could beg you to do it in order to save me. I could beg you to do it in order to avert a national disaster. But I won't. These may not be valid reasons. There is only one reason: you must say it, because it is true."
->> —_Atlas Shrugged_ by Ayn Rand
->
-> Dear Eliezer (cc Anna as mediator):
->
-> Sorry, I'm getting _really really_ impatient (maybe you saw my impulsive Tweet-replies today; and I impulsively called Anna today; and I've spent the last few hours drafting an even more impulsive hysterical-and-shouty potential _Less Wrong_ post; but now I'm impulsively deciding to email you in the hopes that I can withhold the hysterical-and-shouty post in favor of a lower-drama option of your choice): **is there _any_ way we can resolve the categories dispute _in public_?! Not** any object-level gender stuff which you don't and shouldn't care about, **_just_ the philosophy-of-language part.**
->
-> My grievance against you is *very* simple. [You are *on the public record* claiming that](https://twitter.com/ESYudkowsky/status/1067198993485058048):
->
->> you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning.
->
-> I claim that this is _false_. **I think I _am_ standing in defense of truth when I insist on a word, brought explicitly into question, being used with some particular meaning, when I have an _argument_ for _why_ my preferred usage does a better job of "carving reality at the joints" and the one bringing my usage into question doesn't have such an argument. And in particular, "This word usage makes me sad" doesn't count as a relevant argument.** I [agree that words don't have intrinsic ontologically-basic meanings](https://www.lesswrong.com/posts/4hLcbXaqudM9wSeor/philosophy-in-the-darkest-timeline-basics-of-the-evolution), but precisely _because_ words don't have intrinsic ontologically-basic meanings, there's no _reason_ to challenge someone's word usage except _because_ of the hidden probabilistic inference it embodies.
->
-> Imagine one day David Gerard of /r/SneerClub said, "Eliezer Yudkowsky is a white supremacist!" And you replied: "No, I'm not! That's a lie." And imagine E.T. Jaynes was still alive and piped up, "You are _ontologically confused_ if you think that's a false assertion. You're not standing in defense of truth if you insist on words, such as _white supremacist_, brought explicitly into question, being used with some particular meaning." Suppose you emailed Jaynes about it, and he brushed you off with, "But I didn't _say_ you were a white supremacist; I was only targeting a narrow ontology error." In this hypothetical situation, I think you might be pretty upset—perhaps upset enough to form a twenty-one month grudge against someone whom you used to idolize?
->
-> I agree that pronouns don't have the same function as ordinary nouns. However, **in the English language as actually spoken by native speakers, I think that gender pronouns _do_ have effective "truth conditions" _as a matter of cognitive science_.** If someone said, "Come meet me and my friend at the mall; she's really cool and you'll like her", and then that friend turned out to look like me, **you would be surprised**.
->
-> I don't see the _substantive_ difference between "You're not standing in defense of truth (...)" and "I can define a word any way I want." [...]
->
-> [...]
->
-> As far as your public output is concerned, it *looks like* you either changed your mind about how the philosophy of language works, or you think gender is somehow an exception. If you didn't change your mind, and you don't think gender is somehow an exception, is there some way we can _get that on the public record **somewhere**?!_
->
-> As an example of such a "somewhere", I had asked you for a comment on my explanation, ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) (with non-politically-hazardous examples about dolphins and job titles) [... redacted ...] I asked for a comment from Anna, and at first she said that she would need to "red team" it first (because of the political context), and later she said that she was having difficulty for other reasons. Okay, the clarification doesn't have to be on _my_ post. **I don't care about credit! I don't care whether or not anyone is sorry! I just need this _trivial_ thing settled in public so that I can stop being in pain and move on with my life.**
->
-> As I mentioned in my Tweets today, I have a longer and better explanation than "... Boundaries?" mostly drafted. (It's actually somewhat interesting; the logarithmic score doesn't work as a measure of category-system goodness because it can only reward you for the probability you assign to the _exact_ answer, but we _want_ "partial credit" for almost-right answers, so the expected squared error is actually better here, contrary to what you said in [the "Technical Explanation"](https://yudkowsky.net/rational/technical/) about what Bayesian statisticians do). [... redacted]
->
-> The *only* thing I've been trying to do for the past twenty-one months is make this simple thing established "rationalist" knowledge:
->
-> (1) For all nouns _N_, you can't define _N_ any way you want, [for at least 37 reasons](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).
->
-> (2) *Woman* is such a noun.
->
-> (3) Therefore, you can't define the word *woman* any way you want.
->
-> (Note, **this is _totally compatible_ with the claim that trans women are women, and trans men are men, and nonbinary people are nonbinary!** It's just that **you have to _argue_ for why those categorizations make sense in the context you're using the word**, rather than merely asserting it with an appeal to arbitrariness.)
->
-> This is **literally _modus ponens_**. I don't understand how you expect people to trust you to save the world with a research community that _literally cannot perform modus ponens._
->
-> [redacted ...] See, I thought you were playing on the chessboard of _being correct about rationality_. Such that, if you accidentally mislead people about your own philosophy of language, you could just ... issue a clarification? I and Michael and Ben and Sarah and [redacted] _and Jessica_ wrote to you about this and explained the problem in _painstaking_ detail, **and you stonewalled us.** Why? **Why is this so hard?!**
->
-> [redacted]
->
-> No. The thing that's been driving me nuts for twenty-one months is that <strong><em><span style="color: #F00000;">I expected Eliezer Yudkowsky to tell the truth</span></em></strong>. I remain,
->
-> Your heartbroken student,
-> [...]
-
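The scoring-rule aside in that email can be made concrete. Here's a minimal sketch (mine, not part of the email; it assumes the candidate answers lie on a number line): two forecasts that assign the same probability to the exact answer get identical log scores, but expected squared error distinguishes them by where the rest of the probability mass lands.

```python
import math

# Candidate answers lie on a number line; the true answer is 3.
true_value = 3

# Two forecasts that assign the SAME probability to the exact answer,
# but put their remaining mass in different places.
far_miss = {3: 0.5, 0: 0.5}   # other half of the mass far from the truth
near_miss = {3: 0.5, 4: 0.5}  # other half of the mass adjacent to the truth

def log_score(forecast, truth):
    # Rewards only the probability assigned to the exact answer.
    return math.log(forecast.get(truth, 0.0))

def expected_squared_error(forecast, truth):
    # Rewards "partial credit": mass near the truth costs less.
    return sum(p * (x - truth) ** 2 for x, p in forecast.items())

# The log score can't tell the two forecasts apart ...
assert log_score(far_miss, true_value) == log_score(near_miss, true_value)

# ... but expected squared error prefers the near miss.
assert expected_squared_error(near_miss, true_value) < expected_squared_error(far_miss, true_value)
```

The log score is blind to near misses; squared error awards the "partial credit" that a measure of category-system goodness wants.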
-I followed it up with another email after I woke up the next morning:
-
-> To: Eliezer Yudkowsky <[redacted]>
-> Cc: Anna Salamon <[redacted]>
-> Date: Sunday 13 September 2020 11:02 _a.m._
-> Subject: Re: out of patience
->
-> [... redacted] The sinful and corrupted part wasn't the _initial_ Tweets; the sinful and corrupted part is this **bullshit stonewalling** when your Twitter followers and me and Michael and Ben and Sarah and [redacted] and Jessica tried to point out the problem. I've _never_ been arguing against your private universe [... redacted]; the thing I'm arguing against in ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) (and **my [unfinished draft sequel](https://github.com/zackmdavis/Category_War/blob/cefa98c3abe/unnatural_categories_are_optimized_for_deception.md)**, although that's more focused on what Scott wrote) is the **_actual text_ you _actually published_, not your private universe.**
->
-> [... redacted] you could just **publicly clarify your position on the philosophy of language** the way an intellectually-honest person would do if they wanted their followers to have correct beliefs about the philosophy of language?!
->
-> You wrote:
->
->> [Using language in a way](https://twitter.com/ESYudkowsky/status/1067291243728650243) _you_ dislike, openly and explicitly and with public focus on the language and its meaning, is not lying.
->
->> [Now, maybe as a matter of policy](https://twitter.com/ESYudkowsky/status/1067294823000887297), you want to make a case for language being used a certain way. Well, that's a separate debate then. But you're not making a stand for Truth in doing so, and your opponents aren't tricking anyone or trying to.
->
-> The problem with "it's a policy debate about how to use language" is that it completely elides the issue that some ways of using language _perform better_ at communicating information, such that **attempts to define new words or new senses of _existing_ words should come with a justification for why the new sense is _useful for conveying information_, and that _is_ a matter of Truth.** Without such a justification, it's hard to see why you would _want_ to redefine a word _except_ to mislead people with strategic equivocation.
->
-> It is _literally true_ that Eliezer Yudkowsky is a white supremacist (if I'm allowed to define "white supremacist" to include "someone who [once linked to the 'Race and intelligence' _Wikipedia_ page](https://www.lesswrong.com/posts/faHbrHuPziFH7Ef7p/why-are-individual-iq-differences-ok) in a context that implied that it's an empirical question").
->
-> It is _literally true_ that 2 + 2 = 6 (if I'm allowed to define '2' as •••-many).
->
-> You wrote:
->
->> [The more technology advances, the further](https://twitter.com/ESYudkowsky/status/1067490362225156096) we can move people towards where they say they want to be in sexspace. Having said this we've said all the facts.
->
-> That's kind of like defining Solomonoff induction, and then saying, "Having said this, we've built AGI." No, you haven't said all the facts! Configuration space is _very high-dimensional_; we don't have _access_ to the individual points. Trying to specify the individual points ("say all the facts") would be like what you wrote about in ["Empty Labels"](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)—"not just that I can vary the label, but that I can get along just fine without any label at all." Since that's not possible, we need to group points in the space together so that we can use observations from the coordinates that we _have_ observed to make probabilistic inferences about the coordinates we haven't. But there are _mathematical laws_ governing how well different groupings perform, and those laws _are_ a matter of Truth, not a mere policy debate.
->
-> [... redacted ...]
->
-> But if behavior at equilibrium isn't deceptive, there's just _no such thing as deception_; I wrote about this on Less Wrong in ["Maybe Lying Can't Exist?!"](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist) (drawing on the academic literature about sender–receiver games). I don't think you actually want to bite that bullet?
->
-> **In terms of information transfer, there is an isomorphism between saying "I reserve the right to lie 5% of the time about whether something is a member of category C" and adopting a new definition of C that misclassifies 5% of instances with respect to the old definition.**
->
-> Like, I get that you're ostensibly supposed to be saving the world and you don't want randos yelling at you in your email about philosophy. But **I thought the idea was that we were going to save the world [_by means of_ doing unusually clear thinking?](https://arbital.greaterwrong.com/p/executable_philosophy)**
->
-> [Scott wrote](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (with an irrelevant object-level example redacted): "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life." (Okay, he added a clarification after I spent Christmas yelling at him; but I think he's still substantially confused in ways that I address in my forthcoming draft post.)
->
-> [You wrote](https://twitter.com/ESYudkowsky/status/1067198993485058048): "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning."
->
-> I think I've argued pretty extensively this is wrong! **I'm eager to hear counterarguments if you think I'm getting the philosophy wrong.** But ... **"people live in different private universes" is _not a counterargument_.**
->
-> **It makes sense that you don't want to get involved in gender politics. That's why I wrote "... Boundaries?" using examples about dolphins and job titles, and why my forthcoming post has examples about bleggs and artificial meat.** This shouldn't be _expensive_ to clear up?! This should take like, five minutes? (I've spent twenty-one months of my life on this.) Just one little _ex cathedra_ comment on Less Wrong or _somewhere_ (**it doesn't have to be my post, if it's too long or I don't deserve credit or whatever**; I just think the right answer needs to be public) affirming that you haven't changed your mind about 37 Ways Words Can Be Wrong? Unless you _have_ changed your mind, of course?
->
-> I can imagine someone observing this conversation objecting, "[...] why are you being so greedy? We all know the _real_ reason you want to clear up this philosophy thing in public is because it impinges on your gender agenda, but Eliezer _already_ threw you a bone with the ['there's probably more than one type of dysphoria' thing.](https://twitter.com/ESYudkowsky/status/1108277090577600512) That was already a huge political concession to you! That makes you _more_ than even; you should stop being greedy and leave Eliezer alone."
->
-> But as [I explained in my reply](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/) criticizing why I think that argument is _wrong_, the whole mindset of public-arguments-as-political-favors is _crazy_. **The fact that we're having this backroom email conversation at all (instead of just being correct about the philosophy of language on Twitter) is _corrupt_!** I don't want to strike a deal in a political negotiation; I want _shared maps that reflect the territory_. I thought that's what this "rationalist community" thing was supposed to do? Is that not a thing anymore? If we can't do the shared-maps thing when there's any hint of political context (such that now you _can't_ clarify the categories thing, even as an abstract philosophy issue about bleggs, because someone would construe that as taking a side on whether trans people are Good or Bad), that seems really bad for our collective sanity?! (Where collective sanity is potentially useful for saving the world, but is at least a quality-of-life improver if we're just doomed to die in 15 years no matter what.)
->
-> **I really used to look up to you.** In my previous interactions with you, I've been tightly [cognitively constrained](http://www.hpmor.com/chapter/57) by hero-worship. I was already so starstruck that _Eliezer Yudkowsky knows who I am_, that the possibility that _Eliezer Yudkowsky might disapprove of me_, was too terrifying to bear. I really need to get over that, because it's bad for me, and [it's _really_ bad for you](https://www.lesswrong.com/posts/cgrvvp9QzjiFuYwLi/high-status-and-stupidity-why). I remain,
->
-> Your heartbroken student,
-> [...]
-
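The claimed isomorphism between occasional lying and category redefinition can be checked with a toy simulation (my own sketch, under the assumption that both the lies and the new definition's misclassifications fall on a uniformly random 5% of instances): from the receiver's side, the two reporting policies yield statistically identical channels.

```python
import random

random.seed(0)
N = 100_000
ERROR_RATE = 0.05

# Each item is in category C (under the old definition) with probability 0.5.
items = [random.random() < 0.5 for _ in range(N)]

# Policy A: report old-definition membership, but lie (flip the report)
# 5% of the time, at random.
report_lying = [(not c) if random.random() < ERROR_RATE else c for c in items]

# Policy B: honestly report membership under a new definition of C that
# misclassifies a random 5% of instances relative to the old one.
misclassified = [random.random() < ERROR_RATE for _ in range(N)]
report_redefined = [(not c) if flip else c for c, flip in zip(items, misclassified)]

def accuracy(reports, truth):
    # Fraction of reports that match old-definition membership.
    return sum(r == t for r, t in zip(reports, truth)) / len(truth)

print(accuracy(report_lying, items))      # ≈ 0.95
print(accuracy(report_redefined, items))  # ≈ 0.95
```

Either way, the receiver gets a signal that matches old-definition membership 95% of the time; in terms of information transfer, nothing distinguishes the honest redefiner from the occasional liar.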
-
-
-[TODO: Sep 2020 categories clarification from EY—victory?!
-https://www.facebook.com/yudkowsky/posts/10158853851009228
-_ex cathedra_ statement that gender categories are not an exception to the rule, only 1 year and 8 months after asking for it
-]
-
-[TODO: Sasha disaster, breakup with Vassar group]
-
-[TODO: "Unnatural Categories Are Optimized for Deception"
-
-Abram was right
-
-the fact that it didn't means that not tracking it can be an effective AI design! Just because evolution takes shortcuts that human engineers wouldn't doesn't mean shortcuts are "wrong" (instead, there are laws governing which kinds of shortcuts work).
-
-Embedded agency means that the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'my own' code." In that light, it makes sense to regard "have accurate beliefs" as merely a convergent instrumental subgoal, rather than what rationality is about
-
-somehow accuracy seems more fundamental than power or resources ... could that be formalized?
-]
-
-And really, that _should_ have been the end of the story. At the trifling cost of two years of my life, we finally got a clarification from Yudkowsky that you can't define the word _woman_ any way you like. I didn't think I was entitled to anything more than that. I was satisfied. I still published "Unnatural Categories Are Optimized for Deception" in January 2021, but if I hadn't been further provoked, I wouldn't have had occasion to continue waging the robot-cult religious civil war.
-
-[TODO: NYT affair and Brennan link
-https://astralcodexten.substack.com/p/statement-on-new-york-times-article
-https://reddragdiva.tumblr.com/post/643403673004851200/reddragdiva-topher-brennan-ive-decided-to-say
-https://www.facebook.com/yudkowsky/posts/10159408250519228
-
-Scott Aaronson on the Times's hit piece of Scott Alexander—
-https://scottaaronson.blog/?p=5310
-> The trouble with the NYT piece is not that it makes any false statements, but just that it constantly insinuates nefarious beliefs and motives, via strategic word choices and omission of relevant facts that change the emotional coloration of the facts that it does present.
-
-]
-
-... except that Yudkowsky reopened the conversation in February 2021, with [a new Facebook post](https://www.facebook.com/yudkowsky/posts/10159421750419228) explaining the origins of his intuitions about pronoun conventions and concluding that, "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he", with a default for those-who-haven't-asked that goes by gamete size' and to say that this just _is_ the normative definition. Because it is _logically rude_, not just socially rude, to try to bake any other more complicated and controversial definition _into the very language protocol we are using to communicate_."
-
-(_Why?_ Why reopen the conversation, from the perspective of his chessboard? Wouldn't it be easier to just stop digging?)
-
-I explained what's wrong with Yudkowsky's new arguments at the length of 12,000 words in March 2022's ["Challenges to Yudkowsky's Pronoun Reform Proposal"](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/), but I find myself still having more left to analyze. The February 2021 post on pronouns is a _fascinating_ document, in its own way—a penetrating case study on the effects of politics on a formerly great mind.
-
-Yudkowsky begins by setting the context of "[h]aving received a bit of private pushback" on his willingness to declare that asking someone to use a different pronoun is not lying.
-
-But ... the _reason_ he got a bit ("a bit") of private pushback was _because_ the original "hill of meaning" thread was so blatantly optimized to intimidate and delegitimize people who want to use language to reason about biological sex. The pushback wasn't about using trans people's preferred pronouns (I do that, too), or about not wanting pronouns to imply sex (sounds fine, if we were in the position of defining a conlang from scratch); the _problem_ is using an argument that's ostensibly about pronouns to sneak in an implicature ("Who competes in sports segregated around an Aristotelian binary is a policy question [ ] that I personally find very humorous") that it's dumb and wrong to want to talk about the sense in which trans women are male and trans men are female, as a _fact about reality_ that continues to be true even if it hurts someone's feelings, and even if policy decisions made on the basis of that fact are not themselves a fact (as if anyone had doubted this).
-
-In that context, it's revealing that in this post attempting to explain why the original thread seemed like a reasonable thing to say, Yudkowsky ... doubles down on going out of his way to avoid acknowledging the reality of biological sex. He learned nothing! We're told that the default pronoun for those who haven't asked goes by "gamete size."
-
-But ... I've never _measured_ how big someone's gametes are, have you? We can only _infer_ whether strangers' bodies are configured to produce small or large gametes by observing [a variety of correlated characteristics](https://en.wikipedia.org/wiki/Secondary_sex_characteristic). Furthermore, for trans people who don't pass but are visibly trying to, one presumes that we're supposed to use the pronouns corresponding to their gender presentation, not their natal sex.
-
-Thus, Yudkowsky's "default for those-who-haven't-asked that goes by gamete size" clause _can't be taken literally_. The only way I can make sense of it is to interpret it as a way to point at the prevailing reality that people are good at noticing what sex other people are, but that we want to be kind to people who are trying to appear to be the other sex, without having to admit to it.
-
-One could argue that this is hostile nitpicking on my part: that the use of "gamete size" as a metonym for sex here is either an attempt to provide an unambiguous definition (because if you said _female_ or _male sex_, someone could ask what you meant by that), or that it's at worst a clunky choice of words, not an intellectually substantive decision that can be usefully critiqued.
-
-But the claim that Yudkowsky is only trying to provide an unambiguous definition isn't consistent with the text's claim that "[i]t would still be logically rude to demand that other people use only your language system and interpretation convention in order to communicate, in advance of them having agreed with you about the clustering thing". And the post also seems to suggest that the motive isn't to avoid ambiguity. Yudkowsky writes:
-
-> In terms of important things? Those would be all the things I've read—from friends, from strangers on the Internet, above all from human beings who are people—describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate, or perhaps at all.
->
-> And I'm not happy that the very language I use, would try to force me to take a position on that; not a complicated nuanced position, but a binarized position, _simply in order to talk grammatically about people at all_.
-
-What does the "tossed into a bucket" metaphor refer to, though? I can think of many different things that might be summarized that way, and my sympathy for the one who does not like to be tossed into a bucket depends a lot on exactly what real-world situation is being mapped to the bucket.
-
-If we're talking about overt _gender role enforcement attempts_—things like, "You're a girl, therefore you need to learn to keep house for your future husband", or "You're a man, therefore you need to toughen up"—then indeed, I strongly support people who don't want to be tossed into that kind of bucket.
-
-(There are [historical reasons for the buckets to exist](/2020/Jan/book-review-the-origins-of-unfairness/), but I'm eager to bet on modern Society being rich enough and smart enough to either forgo the buckets, or at least let people opt-out of the default buckets, without causing too much trouble.)
-
-But importantly, my support for people not wanting to be tossed into gender role buckets is predicated on their reasons for not wanting that _having genuine merit_—things like "The fact that I'm a juvenile female human doesn't mean I'll have a husband; I'm actually planning to become a nun", or "The sex difference in Big Five Neuroticism is only _d_ ≈ 0.4; your expectation that I be able to toughen up is not reasonable given the information you have about me in particular, even if most adult human males are tougher than me". I _don't_ think people have a _general_ right to prevent others from using sex categories to make inferences or decisions about them, _because that would be crazy_. If a doctor were to recommend I get a prostate cancer screening on account of my being male and therefore at risk for prostate cancer, it would be _bonkers_ for me to reply that I don't like being tossed into a Male Bucket like that.
-
-While piously appealing to the feelings of people describing reasons they do not want to be tossed into a Male Bucket or a Female Bucket, Yudkowsky does not seem to be distinguishing between reasons that have merit, and reasons that do not have merit. The post continues (bolding mine):
-
-> In a wide variety of cases, sure, ["he" and "she"] can clearly communicate the unambiguous sex and gender of something that has an unambiguous sex and gender, much as a different language might have pronouns that sometimes clearly communicated hair color to the extent that hair color often fell into unambiguous clusters.
->
-> But if somebody's hair color is halfway between two central points? If their civilization has developed stereotypes about hair color they're not comfortable with, such that they feel that the pronoun corresponding to their outward hair color is something they're not comfortable with because they don't fit key aspects of the rest of the stereotype and they feel strongly about that? If they have dyed their hair because of that, or **plan to get hair surgery, or would get hair surgery if it were safer but for now are afraid to do so?** Then it's stupid to try to force people to take complicated positions about those social topics _before they are allowed to utter grammatical sentences_.
-
-So, I agree that a language convention in which pronouns map to hair color doesn't seem great, and that the people in this world should probably coordinate on switching to a better convention, if they can figure out how.
-
-But taking as given the existence of a convention in which pronouns refer to hair color, a demand to be referred to as having a hair color _that one does not in fact have_ seems pretty outrageous to me!
-
-It makes sense to object to the convention forcing a binary choice in the "halfway between two central points" case. That's an example of _genuine_ nuance brought on by a _genuine_ challenge to a system that _falsely_ assumes discrete hair colors.
-
-But ... "plan to get hair surgery"? "Would get hair surgery if it were safer but for now are afraid to do so"? In what sense do these cases present a challenge to the discrete system and therefore call for complication and nuance? There's nothing ambiguous about these cases: if you haven't, in fact, changed your hair color, then your hair is, in fact, its original color. The decision to get hair surgery does not _propagate backwards in time_. The decision to get hair surgery cannot be _imported from a counterfactual universe in which it is safer_. People who, today, do not have the hair color that they would prefer, are, today, going to have to deal with that fact _as a fact_.
-
-Is the idea that we want to use the same pronouns for the same person over time, so that if we know someone is going to get hair surgery—they have an appointment with the hair surgeon at this-and-such date—we can go ahead and switch their pronouns in advance? Okay, I can buy that.
-
-But extending that to the "would get hair surgery if it were safer" case is _absurd_. No one treats _conditional plans assuming speculative future advances in medical technology_ the same as actual plans. I don't think this case calls for any complicated nuanced position, and I don't see why Eliezer Yudkowsky would suggest that it would, unless the real motive for insisting on complication and nuance is as an obfuscation tactic—
-
-Unless, at some level, Eliezer Yudkowsky doesn't expect his followers to deal with facts?
-
-Maybe the problem is easier to see in the context of a non-gender example. [My previous hopeless ideological war—before this one—was against the conflation of _schooling_ and _education_](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/): I _hated_ being tossed into the Student Bucket, as it would be assigned by my school course transcript, or perhaps at all.
-
-I sometimes describe myself as "gender dysphoric", because our culture doesn't have better widely-understood vocabulary for my beautiful pure sacred self-identity thing, but if we're talking about suffering and emotional distress, my "student dysphoria" was _vastly_ worse than any "gender dysphoria" I've ever felt.
-
-But crucially, my tirades against the Student Bucket described reasons not just that _I didn't like it_, but reasons that the bucket was _actually wrong on the empirical merits_: people can and do learn important things by studying and practicing out of their own curiosity and ambition; the system was _actually in the wrong_ for assuming that nothing you do matters unless you do it on the command of a designated "teacher" while enrolled in a designated "course".
-
-And _because_ my war footing was founded on the empirical merits, I knew that I had to _update_ to the extent that the empirical merits showed that I was in the wrong. In 2010, I took a differential equations class "for fun" at the local community college, expecting to do well and thereby prove that my previous couple years of math self-study had been the equal of any school student's.
-
-In fact, I did very poorly and scraped by with a _C_. (Subjectively, I felt like I "understood the concepts", and kept getting surprised when that understanding somehow didn't convert into passing quiz scores.) That hurt. That hurt a lot.
-
-_It was supposed to hurt_. One could imagine a Jane Austen character in this situation doubling down on his antagonism to everything school-related, in order to protect himself from being hurt—to protest that the teacher hated him, that the quizzes were unfair, that the answer key must have had a printing error—in short, that he had been right in every detail all along, and that any suggestion otherwise was credentialist propaganda.
-
-I knew better than to behave like that—and to the extent that I was tempted, I retained my ability to notice and snap out of it. My failure _didn't_ mean I had been wrong about everything, that I should humbly resign myself to the Student Bucket forever and never dare to question it again—but it _did_ mean that I had been wrong about _something_. I could [update myself incrementally](https://www.lesswrong.com/posts/627DZcvme7nLDrbZu/update-yourself-incrementally)—but I _did_ need to update. (Probably, that "math" encompasses different subskills, and that my glorious self-study had unevenly trained some skills and not others: there was nothing contradictory about my [successfully generalizing one of the methods in the textbook to arbitrary numbers of variables](https://math.stackexchange.com/questions/15143/does-the-method-for-solving-exact-des-generalize-like-this), while _also_ [struggling with the class's assigned problem sets](https://math.stackexchange.com/questions/7984/automatizing-computational-skills).)
-
-Someone who uncritically validated my not liking to be tossed into the Student Bucket, instead of assessing my _reasons_ for not liking to be tossed into the Bucket and whether those reasons had merit, would be hurting me, not helping me—because in order to navigate the real world, I need a map that reflects the territory, rather than my narcissistic fantasies. I'm a better person for straightforwardly facing the shame of getting a _C_ in community college differential equations, rather than trying to deny it or run away from it or claim that it didn't mean anything. Part of updating myself incrementally was that I would get _other_ chances to prove that my autodidacticism _could_ match the standard set by schools. (My professional and open-source programming career obviously does not owe itself to the two Java courses I took at community college. When I audited honors analysis at UC Berkeley "for fun" in 2017, I did fine on the midterm. When applying for a new dayjob in 2018, the interviewer, noting my lack of a degree, said he was going to give a version of the interview without a computer science theory question. I insisted on being given the "college" version of the interview, solved a dynamic programming problem, and got the job. And so on.)
-
-If you can see why uncritically affirming people's current self-image isn't the right solution to "student dysphoria", it _should_ be obvious why the same is true of gender dysphoria. There's a very general underlying principle, that it matters whether someone's current self-image is actually true.
-
-In an article titled ["Actually, I Was Just Crazy the Whole Time"](https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole), FtMtF detransitioner Michelle Alleva contrasts her beliefs at the time of deciding to transition with her current beliefs. While transitioning, she accounted for many pieces of evidence about herself ("dislike attention as a female", "obsessive thinking about gender", "didn't fit in with the girls", _&c_.) in terms of the theory "It's because I'm trans." But now, Alleva writes, she thinks she has a variety of better explanations that, all together, cover everything on the original list: "It's because I'm autistic", "It's because I have unresolved trauma", "It's because women are often treated poorly" ... including "That wasn't entirely true" (!!).
-
-This is a _rationality_ skill. Alleva had a theory about herself, and then she _revised her theory upon further consideration of the evidence_. Beliefs about one's self aren't special and can—must—be updated using the _same_ methods that you would use to reason about anything else—[just as a recursively self-improving AI would reason the same about transistors "inside" the AI and transistors in "the environment."](https://www.lesswrong.com/posts/TynBiYt6zg42StRbb/my-kind-of-reflection)
-
-(Note, I'm specifically praising the _form_ of the inference, not necessarily the conclusion to detransition. If someone else in different circumstances weighed up the evidence about _them_-self, and concluded that they _are_ trans in some _specific_ objective sense on the empirical merits, that would _also_ be exhibiting the skill. For extremely sex-role-nonconforming same-natal-sex-attracted transsexuals, you can at least see why the "born in the wrong body" story makes some sense as a handwavy [first approximation](/2022/Jul/the-two-type-taxonomy-is-a-useful-approximation-for-a-more-detailed-causal-model/). It's just that for males like me, and separately for females like Michelle Alleva, the story doesn't add up.)
-
-This also isn't a particularly _advanced_ rationality skill. This is very basic—something novices should grasp during their early steps along the Way.
-
-Back in 'aught-nine, in the early days of _Less Wrong_, when I still hadn't grown out of [my teenage religion of psychological sex differences denialism](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism), there was an exchange in the comment section between me and Yudkowsky that still sticks with me. Yudkowsky had claimed that he had ["never known a man with a true female side, and [...] never known a woman with a true male side, either as authors or in real life."](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/K8YXbJEhyDwSusoY2) Offended at our leader's sexism, I passive-aggressively [asked him to elaborate](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way?commentId=AEZaakdcqySmKMJYj), and as part of [his response](https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/W4TAp4LuW3Ev6QWSF), he mentioned that he "sometimes wish[ed] that certain women would appreciate that being a man is at least as complicated and hard to grasp and a lifetime's work to integrate, as the corresponding fact of feminity [_sic_]."
-
-[I replied](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/7ZwECTPFTLBpytj7b) (bolding added):
-
-> I sometimes wish that certain men would appreciate that not all men are like them—**or at least, that not all men _want_ to be like them—that the fact of masculinity is [not _necessarily_ something to integrate](https://www.lesswrong.com/posts/vjmw8tW6wZAtNJMKo/which-parts-are-me).**
-
-_I knew_. Even then, _I knew_ I had to qualify my not liking to be tossed into a Male Bucket. I could object to Yudkowsky speaking as if men were a collective with shared normative ideals ("a lifetime's work to integrate"), but I couldn't claim to somehow not be male, or _even_ that people couldn't make probabilistic predictions about me given the fact that I'm male ("the fact of masculinity"), _because that would be crazy_. The culture of early _Less Wrong_ wouldn't have let me get away with that.
-
-It would seem that in the current year, that culture is dead—or at least, if it does have any remaining practitioners, they do not include Eliezer Yudkowsky.
-
-At this point, some people would argue that I'm being too uncharitable in harping on the "not liking to be tossed into a [...] Bucket" paragraph. The same post _also_ explicitly says that "[i]t's not that no truth-bearing propositions about these issues can possibly exist." I _agree_ that there are some interpretations of "not lik[ing] to be tossed into a Male Bucket or Female Bucket" that make sense, even though biological sex denialism does not make sense. Given that the author is Eliezer Yudkowsky, should I not give him the benefit of the doubt and assume that he "really meant" to communicate the reading that does make sense, rather than the one that doesn't make sense?
-
-I reply: _given that the author is Eliezer Yudkowsky_, no, obviously not. I have been ["trained in a theory of social deception that says that people can arrange reasons, excuses, for anything"](https://www.glowfic.com/replies/1820866#reply-1820866), such that it's informative ["to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited."](http://www.hpmor.com/chapter/47) Yudkowsky is just _too talented of a writer_ for me to excuse his words as an accidental artifact of unclear writing. Where the text is ambiguous about whether biological sex is a real thing that people should be able to talk about, I think it's _deliberately_ ambiguous. When smart people act dumb, it's often wise to conjecture that their behavior represents [_optimized_ stupidity](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie)—apparent "stupidity" that achieves a goal through some other channel than their words straightforwardly reflecting the truth. Someone who was _actually_ stupid wouldn't be able to generate text with a specific balance of insight and selective stupidity fine-tuned to reach a gender-politically convenient conclusion without explicitly invoking any controversial gender-political reasoning. I think the point of the post is to pander to the biological sex denialists in his robot cult, without technically saying anything unambiguously false that someone could point out as a "lie."
-
-Consider the implications of Yudkowsky giving us a clue as to the political forces at play, in the form of [a disclaimer comment](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421833274228):
-
-> It unfortunately occurs to me that I must, in cases like these, disclaim that—to the extent there existed sensible opposing arguments against what I have just said—people might be reluctant to speak them in public, in the present social atmosphere. That is, in the logical counterfactual universe where I knew of very strong arguments against freedom of pronouns, I would have probably stayed silent on the issue, as would many other high-profile community members, and only Zack M. Davis would have said anything where you could hear it.
->
-> This is a filter affecting your evidence; it has not to my own knowledge filtered out a giant valid counterargument that invalidates this whole post. I would have kept silent in that case, for to speak then would have been dishonest.
->
-> Personally, I'm used to operating without the cognitive support of a civilization in controversial domains, and have some confidence in my own ability to independently invent everything important that would be on the other side of the filter and check it myself before speaking. So you know, from having read this, that I checked all the speakable and unspeakable arguments I had thought of, and concluded that this speakable argument would be good on net to publish, as would not be the case if I knew of a stronger but unspeakable counterargument in favor of Gendered Pronouns For Everyone and Asking To Leave The System Is Lying.
->
-> But the existence of a wide social filter like that should be kept in mind; to whatever quantitative extent you don't trust your ability plus my ability to think of valid counterarguments that might exist, as a Bayesian you should proportionally update in the direction of the unknown arguments you speculate might have been filtered out.
-
-So, the explanation of [the problem of political censorship filtering evidence](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) here is great, but the part where Yudkowsky claims "confidence in [his] own ability to independently invent everything important that would be on the other side of the filter" is just _laughable_. My point, that _she_ and _he_ have existing meanings that you can't just ignore by fiat, given that the existing meanings are _exactly_ what motivate people to ask for new pronouns in the first place, is _really obvious_.
-
-Really, it would be _less_ embarrassing for Yudkowsky if he were outright lying about having tried to think of counterarguments. The original post isn't _that_ bad if you assume that Yudkowsky was writing off the cuff, that he clearly just _didn't put any effort whatsoever_ into thinking about why someone might disagree. If he _did_ put in the effort—enough that he felt comfortable bragging about his ability to see the other side of the argument—and _still_ ended up proclaiming his "simplest and best protocol" without even so much as _mentioning_ any of its incredibly obvious costs ... that's just _pathetic_. If Yudkowsky's ability to explore the space of arguments is _that_ bad, why would you trust his opinion about _anything_?
-
-[TODO: discrediting to the community]
-
-The disclaimer comment mentions "speakable and unspeakable arguments"—but what, one wonders, is the boundary of the "speakable"? In response to a commenter mentioning the cost of having to remember pronouns as a potential counterargument, Yudkowsky [offers us another clue](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421833274228&reply_comment_id=10159421871809228):
-
-> People might be able to speak that. A clearer example of a forbidden counterargument would be something like e.g. imagine if there was a pair of experimental studies somehow proving that (a) everybody claiming to experience gender dysphoria was lying, and that (b) they then got more favorable treatment from the rest of society. We wouldn't be able to talk about that. No such study exists to the best of my own knowledge, and in this case we might well hear about it from the other side to whom this is the exact opposite of unspeakable; but that would be an example.
-
-(As an aside, the wording of "we might well hear about it from _the other side_" (emphasis mine) is _very_ interesting, suggesting that the so-called "rationalist" community is, effectively, a partisan institution, despite its claims to be about advancing the generically human art of systematically correct reasoning.)
-
-I think (a) and (b) _as stated_ are clearly false, so "we" (who?) fortunately aren't losing much by allegedly not being able to speak them. But what about some _similar_ hypotheses, that might be similarly unspeakable for similar reasons?
-
-Instead of (a), consider the claim that (a′) self-reports about gender dysphoria are substantially distorted by [socially-desirable responding tendencies](https://en.wikipedia.org/wiki/Social-desirability_bias)—as a notable and common example, heterosexual males with [sexual fantasies about being female](http://www.annelawrence.com/autogynephilia_&_MtF_typology.html) [often falsely deny or minimize the erotic dimension of their desire to change sex](/papers/blanchard-clemmensen-steiner-social_desirability_response_set_and_systematic_distortion.pdf). (The idea that self-reports can be motivatedly inaccurate without the subject consciously "lying" should not be novel to someone who co-blogged with [Robin Hanson](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain) for years!)
-
-And instead of (b), consider the claim that (b′) transitioning is socially rewarded within particular _subcultures_ (although not Society as a whole), such that many of the same people wouldn't think of themselves as trans or even gender-dysphoric if they lived in a different subculture.
-
-I claim that (a′) and (b′) are _overwhelmingly likely to be true_. Can "we" talk about _that_? Are (a′) and (b′) "speakable", or not? We're unlikely to get clarification from Yudkowsky, but based on the Whole Dumb Story I've been telling you about how I wasted the last six years of my life on this, I'm going to _guess_ that the answer is broadly No: no, "we" can't talk about that. (_I_ can say it, and people can debate me in a private Discord server where the general public isn't looking, but it's not something someone of Yudkowsky's stature can afford to acknowledge.)
-
-But if I'm right that (a′) and (b′) should be live hypotheses and that Yudkowsky would consider them "unspeakable", that means "we" can't talk about what's _actually going on_ with gender dysphoria and transsexuality, which puts the whole discussion in a different light. In another comment, Yudkowsky lists some gender-transition interventions he named in the [November 2018 "hill of meaning in defense of validity" Twitter thread](https://twitter.com/ESYudkowsky/status/1067183500216811521)—using a different bathroom, changing one's name, asking for new pronouns, and getting sex reassignment surgery—and notes that none of these are calling oneself a "woman". [He continues](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421986539228&reply_comment_id=10159424960909228):
-
-> [Calling someone a "woman"] _is_ closer to the right sort of thing _ontologically_ to be true or false. More relevant to the current thread, now that we have a truth-bearing sentence, we can admit of the possibility of using our human superpower of language to _debate_ whether this sentence is indeed true or false, and have people express their nuanced opinions by uttering this sentence, or perhaps a more complicated sentence using a bunch of caveats, or maybe using the original sentence uncaveated to express their belief that this is a bad place for caveats. Policies about who uses what bathroom also have consequences and we can debate the goodness or badness (not truth or falsity) of those policies, and utter sentences to declare our nuanced or non-nuanced position before or after that debate.
->
-> Trying to pack all of that into the pronouns you'd have to use in step 1 is the wrong place to pack it.
-
-Sure, _if we were in the position of designing a constructed language from scratch_ under current social conditions in which a person's "gender" is understood as a contested social construct, rather than their sex being an objective and undisputed fact, then yeah: in that situation _which we are not in_, you definitely wouldn't want to pack sex or gender into pronouns. But it's a disingenuous derailing tactic to grandstand about how people need to alter the semantics of their _already existing_ native language so that we can discuss the real issues under an allegedly superior pronoun convention when, _by your own admission_, you have _no intention whatsoever of discussing the real issues!_
-
-(Lest the "by your own admission" clause seem too accusatory, I should note that given constant behavior, admitting it is _much_ better than not-admitting it; so, huge thanks to Yudkowsky for the transparency on this point!)
-
-Again, as discussed in "Challenges to Yudkowsky's Pronoun Reform Proposal", a comparison to [the _tú_/_usted_ distinction](https://en.wikipedia.org/wiki/Spanish_personal_pronouns#T%C3%BA/vos_and_usted) is instructive. It's one thing to advocate for collapsing the distinction and just settling on one second-person singular pronoun for the Spanish language. That's principled.
-
-It's quite another thing altogether to _simultaneously_ try to prevent a speaker from using _tú_ to indicate disrespect towards a social superior (on the stated rationale that the _tú_/_usted_ distinction is dumb and shouldn't exist), while _also_ refusing to entertain or address the speaker's arguments explaining _why_ they think their interlocutor is unworthy of the deference that would be implied by _usted_ (because such arguments are "unspeakable" for political reasons). That's just psychologically abusive.
-
-If Yudkowsky _actually_ possessed (and felt motivated to use) the "ability to independently invent everything important that would be on the other side of the filter and check it [himself] before speaking", it would be _obvious_ to him that "Gendered Pronouns For Everyone and Asking To Leave The System Is Lying" isn't the hill anyone would care about dying on if it weren't a Schelling point. A lot of TERF-adjacent folk would be _overjoyed_ to concede the (boring, insubstantial) matter of pronouns as a trivial courtesy if it meant getting to _actually_ address their real concerns of "Biological Sex Actually Exists", ["Biological Sex Cannot Be Changed With Existing or Foreseeable Technology"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), and "Biological Sex Is Sometimes More Relevant Than Subjective Gender Identity." The reason so many of them are inclined to stand their ground and not even offer the trivial courtesy is because they suspect, correctly, that the matter of pronouns is being used as a rhetorical wedge to try to prevent people from talking or thinking about sex.
-
-Having analyzed the _ways_ in which Yudkowsky is playing dumb here, what's still not entirely clear is _why_. Presumably he cares about maintaining his credibility as an insightful and fair-minded thinker. Why tarnish that by putting on this haughty performance?
-
-Of course, presumably he _doesn't_ think he's tarnishing it—but why not? [He graciously explains in the Facebook comments](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421833274228&reply_comment_id=10159421901809228):
-
-> it is sometimes personally prudent and not community-harmful to post your agreement with Stalin about things you actually agree with Stalin about, in ways that exhibit generally rationalist principles, especially because people do _know_ they're living in a half-Stalinist environment [...] I think people are better off at the end of that.
-
-Ah, _prudence_! He continues:
-
-> I don't see what the alternative is besides getting shot, or utter silence about everything Stalin has expressed an opinion on including "2 + 2 = 4" because if that logically counterfactually were wrong you would not be able to express an opposing opinion.
-
-The problem with trying to "exhibit generally rationalist principles" in a line of argument that you're constructing in order to be prudent and not community-harmful, is that you're thereby necessarily _not_ exhibiting the central rationalist principle that what matters is the process that _determines_ your conclusion, not the reasoning you present to _reach_ your conclusion, after the fact.
-
-The best explanation of this I know of was authored by Yudkowsky himself in 2007, in a post titled ["A Rational Argument"](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument). It's worth quoting at length. The Yudkowsky of 2007 invites us to consider the plight of a political campaign manager:
-
-> As a campaign manager reading a book on rationality, one question lies foremost on your mind: "How can I construct an impeccable rational argument that Mortimer Q. Snodgrass is the best candidate for Mayor of Hadleyburg?"
->
-> Sorry. It can't be done.
->
-> "What?" you cry. "But what if I use only valid support to construct my structure of reason? What if every fact I cite is true to the best of my knowledge, and relevant evidence under Bayes's Rule?"
->
-> Sorry. It still can't be done. You defeated yourself the instant you specified your argument's conclusion in advance.
-
-The campaign manager is in possession of a survey of mayoral candidates on which Snodgrass compares favorably to other candidates, except for one question. The post continues (bolding mine):
-
-> So you are tempted to publish the questionnaire as part of your own campaign literature ... with the 11th question omitted, of course.
->
-> **Which crosses the line between _rationality_ and _rationalization_.** It is no longer possible for the voters to condition on the facts alone; they must condition on the additional fact of their presentation, and infer the existence of hidden evidence.
->
-> Indeed, **you crossed the line at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it.** "What!" you cry. "A campaign should publish facts unfavorable to their candidate?" But put yourself in the shoes of a voter, still trying to select a candidate—why would you censor useful information? You wouldn't, if you were genuinely curious. If you were flowing _forward_ from the evidence to an unknown choice of candidate, rather than flowing _backward_ from a fixed candidate to determine the arguments.
-
-The post then briefly discusses the idea of a "logical" argument, one whose conclusions follow from its premises. "All rectangles are quadrilaterals; all squares are quadrilaterals; therefore, all squares are rectangles" is given as an example of an _illogical_ argument, even though both premises are true (all rectangles and squares are in fact quadrilaterals) _and_ the conclusion is true (all squares are in fact rectangles). The problem is that the conclusion doesn't _follow_ from the premises; the _reason_ all squares are rectangles isn't _because_ they're both quadrilaterals. If we accepted arguments of the general _form_ "all A are C; all B are C; therefore all A are B", we would end up believing nonsense.
-
-Yudkowsky's conception of a "rational" argument—at least, Yudkowsky's conception in 2007, which the Yudkowsky of the current year seems to disagree with—has a similar flavor: the stated reasons should be the actual reasons. The post concludes:
-
-> If you really want to present an honest, rational argument _for your candidate_, in a political campaign, there is only one way to do it:
->
-> * _Before anyone hires you_, gather up all the evidence you can about the different candidates.
-> * Make a checklist which you, yourself, will use to decide which candidate seems best.
-> * Process the checklist.
-> * Go to the winning candidate.
-> * Offer to become their campaign manager.
-> * When they ask for campaign literature, print out your checklist.
->
-> Only in this way can you offer a _rational_ chain of argument, one whose bottom line was written flowing _forward_ from the lines above it. Whatever _actually_ decides your bottom line is the only thing you can _honestly_ write on the lines above.
-
-I remember this being pretty shocking to read back in 'aught-seven. What an alien mindset! But it's _correct_. You can't rationally argue "for" a chosen conclusion, because only the process you use to _decide what to argue for_ can be your real reason.
-
-This is a shockingly high standard for anyone to aspire to live up to—but what made Yudkowsky's Sequences so life-changingly valuable, was that they articulated the _existence_ of such a standard. For that, I will always be grateful.