-Moldbug claims that the status of non-Asian minorities in contemporary America is analogous to that of the nobles in his parable. But beyond denouncing the system as unfair, Moldbug further claims that the asymmetric rules have deleterious effects _on the beneficiaries themselves_:
-
-> applied to the cream of America's actual WASP–Ashkenazi aristocracy, genuine genetic elites with average IQs of 120, long histories of civic responsibility and productivity, and strong innate predilections for delayed gratification and hard work, I'm confident that this bizarre version of what we can call _ignoble privilege_ would take no more than two generations to produce a culture of worthless, unredeemable scoundrels. Applied to populations with recent hunter-gatherer ancestry and no great reputation for sturdy moral fiber, _noblesse sans oblige_ is a recipe for the production of absolute human garbage.
-
-This is the sort of right-wing heresy that I could read about on the internet (as I read lots of things on the internet without necessarily agreeing) and consider abstractly, without putting any serious weight on it.
-
-It wasn't my place. I'm not a woman or a racial minority; I don't have their lived experience; I _don't know what it's like_ to face the challenges they face. So while I could permissibly _read blog posts_ skeptical of the progressive story about redressing wrongs done to designated sympathetic victim groups, I didn't think of myself as having standing to seriously doubt the story.
-
-Until suddenly, in what was then the current year of 2016, it was now seeming that the designated sympathetic victim group of our age was ... _straight boys who wish they were girls_. And suddenly, [_I had standing_](/2017/Feb/a-beacon-through-the-darkness-or-getting-it-right-the-first-time/). When a political narrative is being pushed for _your_ alleged benefit, it's much easier to make the call that it's obviously full of lies.
-
-The claim that political privileges are inculcating "a culture of worthless, unredeemable scoundrels" in some _other_ group is easy to dismiss as bigotry, but it hits differently when you can see it happening to _people like you_. Whether or not the progressive story had been right about the travails of blacks and women, I _know_ that straight boys who wish they were girls are not actually as fragile and helpless as we were being portrayed—that we _weren't_ that fragile, if anyone still remembers the world of 2006, when straight boys who wished they were girls knew that they were, in fact, straight boys, and didn't think the world owed them deference for their perversion. And this experience _did_ raise further questions about whether previous iterations of progressive ideology had been entirely honest with me. (If nothing else, I couldn't help but notice that my update from "Blanchard is probably wrong because trans women's self-reports say it's wrong" to "Self-reports are pretty crazy" probably had implications for "[Red Pill](https://heartiste.org/the-sixteen-commandments-of-poon/) is probably wrong because women's self-reports say it's wrong".)
-
-While I was in this flurry of excitement about my recent updates and the insanity around me, I thought back to that "at least 20% of the ones with penises are actually women" Yudkowsky post from back in March that had been my wake-up call to all this. What _was_ going on with that?
-
-I wasn't, like, _friends_ with Yudkowsky, obviously; I didn't have a natural social affordance to _just_ ask him something the way you would ask a work buddy or a college friend. But ... he _had_ posted about how he was willing to do things he otherwise wouldn't in exchange for enough money to feel happy about the trade—a Happy Price, or [Cheerful Price, as the custom was later termed](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price)—and his [schedule of happy prices](https://www.facebook.com/yudkowsky/posts/10153956696609228) listed $1,000 as the price for a two-hour conversation, and I had his email address from previous contract work I had done for MIRI back in '12. So I wrote to him, offering $1,000 to talk about what kind of _massive_ update he had made on the topics of human psychological sex differences and MtF transsexuality sometime between [January 2009](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) and [March of the current year](https://www.facebook.com/yudkowsky/posts/10154078468809228), mentioning that I had been "feeling baffled and disappointed (although I shouldn't be) that the rationality community is getting this _really easy_ scientific question wrong."
-
-At this point, any _normal people_ who are (somehow?) reading this might be thinking, isn't that weird and a little cultish?—some blogger you follow posted something you thought was strange earlier this year, and you want to pay him _one grand_ to talk about it?
-
-To the normal person, I would explain as follows. First, in our subculture, we don't have your weird hangups about money: people's time is valuable, and paying people money in exchange for them using their time differently from how they otherwise would is a perfectly ordinary thing for microeconomic agents to do. Upper-middle-class normal people don't blink at paying a licensed therapist $100 to talk for an hour, because their culture designates that as a special ritualized context in which paying money to talk to someone isn't weird. In my culture, we don't need the special ritualized context; Yudkowsky just had a higher rate than most therapists.
-
-Second, $1,000 isn't actually real money to a San Francisco software engineer.
-
-Third—yes. Yes, it _absolutely_ was a little cultish.
-
-[TODO: explain religion]
-
-One of my emails included the sentence, "I feel awful writing _Eliezer Yudkowsky_ about this, because my interactions with you probably have disproportionately more simulation-measure than the rest of my life, and do I _really_ want to spend that on _this topic_?"
-
-(Referring to the idea that, in a sufficiently large universe where many subjectively-indistinguishable copies of everyone exist, including inside future superintelligences running simulations of the past, there would plausibly be _more_ copies of my interactions with Yudkowsky than of other moments of my life, on account of that information being of greater decision-relevance to those superintelligences.)
-
-[TODO: I can't actually confirm or deny whether he accepted the happy price offer because if we did talk, it would have been a private conversation]
-
-(It was also around this time that I snuck a copy of _Men Trapped in Men's Bodies_ into the [MIRI](https://intelligence.org/) office library, which community members were sometimes able to visit. It seemed like something Harry Potter-Evans-Verres would do—and ominously, I noticed, not like something Hermione Granger would do.)
-
-Gatekeeping sessions finished, I finally started HRT at the end of December 2016. Earlier that month, in an effort to keep my anti–autogynephilia-denialism crusade from taking over my life, I had promised myself (and [published the SHA256 hash of the promise](https://www.facebook.com/zmdavis/posts/10154596054540199)) not to comment on gender issues under my real name through June 2017—_that_ was what my new pseudonymous blog was for.
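-
-(For normal people who don't already know the trick: publishing a hash commits you to a statement without revealing it—the digest pins down the exact text, and anyone can verify the match once the text is published. A minimal sketch of the mechanism in Python, with a made-up promise text standing in for my actual, then-secret wording:)
-
-```python
-import hashlib
-
-# Hypothetical wording—the real promise text stays secret until revealed.
-promise = b"I will not comment on gender issues under my real name through June 2017."
-
-# Publish this digest now; reveal `promise` later so anyone can re-run the check.
-print(hashlib.sha256(promise).hexdigest())
-```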
-
-... the promise didn't take. There was just too much gender-identity nonsense on my Facebook feed; I _had_ to push back on some of it.
-
-"Folks, I'm not sure it's feasible to have an intellectually-honest real-name public conversation about the etiology of MtF," I wrote in one thread. "If no one is willing to mention some of the key relevant facts, maybe it's less misleading to just say nothing."
-
-As a result of that, I got a PM from a woman whose marriage had fallen apart after (among other things) her husband transitioned. She told me about the parts of her husband's story that had never quite made sense to her (but which sounded like a textbook case from my reading). In her telling, the husband had always been more emotionally tentative and less comfortable with the standard gender role and status stuff, but in the way of, like, a geeky nerd guy, not in the way of someone feminine. He was into crossdressing sometimes, but she had thought that was just a weird and insignificant kink, not that he didn't like being a man—until they moved to the Bay Area and he fell in with a social-justicey crowd. When I linked her to Kay Brown's article on ["Advice for Wives and Girlfriends of Autogynephiles"](https://sillyolme.wordpress.com/advice-for-wivesgirlfriends-of-autogynephiles/), her response was, "Holy shit, this is _exactly_ what happened with me."
-
-[TODO: the story of my Facebook crusade, going off the rails, getting hospitalized]
-
-A striking pattern from my attempts to argue with people about the two-type taxonomy was the tendency for the conversation to get derailed on some variation of "Well, the word _woman_ doesn't necessarily mean that," often with a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), a 2014 post by Scott Alexander arguing that because categories exist in our model of the world rather than the world itself, there's nothing wrong with simply _defining_ trans people to be their preferred gender, in order to alleviate their dysphoria.
-
-This ... really wasn't what I was trying to talk about. _I_ thought I was trying to talk about autogynephilia as an _empirical_ theory of psychology, the truth or falsity of which obviously cannot be altered by changing the meanings of words.
-
-Psychology is a complicated empirical science: no matter how "obvious" I might think something is, I have to admit that I could be wrong—[not just as an obligatory profession of humility, but _actually_ wrong in the real world](https://www.lesswrong.com/posts/GrDqnMjhqoxiqpQPw/the-proper-use-of-humility). If my fellow rationalists weren't sold on the autogynephilia and transgender thing, I might be a bit disappointed, but it's definitely not grounds to denounce the entire community as a failure or a fraud.
-
-But this "I can define the word _woman_ any way I want" mind game? _That_ part was _absolutely_ clear-cut. That part of the argument, I knew I could win.
-
-To be clear, it's _true_ that categories exist in our model of the world, rather than the world itself—the "map", not the "territory"—and it's true that trans women might be women _with respect to_ some genuinely useful definition of the word "woman." However, the Scott Alexander piece that people kept linking to me goes further, claiming that we can redefine gender categories _in order to make trans people feel better_:
-
-> I ought to accept an unexpected man or two deep inside the conceptual boundaries of what would normally be considered female if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.
-
-But this is just wrong. Categories exist in our model of the world _in order to_ capture empirical regularities in the world itself: the map is supposed to _reflect_ the territory, and there _are_ "rules of rationality" governing what kinds of word and category usages correspond to correct probabilistic inferences. [We had a whole Sequence about this](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) back in 'aught-eight. Alexander cites [a post](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside) from that Sequence in support of the (true) point about how categories are "in the map" ... but if you actually read the Sequence, another point that Yudkowsky pounds home _over and over and over again_ is that word and category definitions are nevertheless _not_ arbitrary, because there are criteria that make some definitions _perform better_ than others as "cognitive technology"—
-
-> ["It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)
-
-> ["So that's another reason you can't 'define a word any way you like': You can't directly program concepts into someone else's brain."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)
-
-> ["When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions)
-
-> ["There's an idea, which you may have noticed I hate, that 'you can define a word any way you like'."](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)
-
-> ["And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying 'I can define a word any way I like'."](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression)
-
-> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences)
-
-> ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations)
-
-> ["And this suggests another—yes, yet another—reason to be suspicious of the claim that 'you can define a word any way you like'. When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power."](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words)
-
-> ["I say all this, because the idea that 'You can X any way you like' is a huge obstacle to learning how to X wisely. 'It's a free country; I have a right to my own opinion' obstructs the art of finding truth. 'I can define a word any way I like' obstructs the art of carving reality at its joints. And even the sensible-sounding 'The labels we attach to words are arbitrary' obstructs awareness of compactness."](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes)
-
-> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace)
-
-Importantly, this is a very general point about how language itself works _that has nothing to do with gender_. No matter what you believe about politically controversial empirical questions, intellectually honest people should be able to agree that "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if [positive consequence]" is not correct philosophy, _independently of the particular values of X and Y_.
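-
-(To make the "help you make inferences / shorten your messages" criterion concrete, here's a toy calculation of my own devising—an illustration in the spirit of ["Mutual Information, and Density in Thingspace"](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace), not anything from the original posts. A category boundary that tracks the empirical cluster structure of the world carries information about an object's features; a gerrymandered category of the same size carries none:)
-
-```python
-import math
-from collections import Counter
-
-# Toy world: eight objects with one binary feature that comes in clusters.
-feature = [1, 1, 1, 1, 0, 0, 0, 0]
-
-def mutual_information(labels, feature):
-    """I(label; feature) in bits, from empirical joint frequencies."""
-    n = len(labels)
-    joint = Counter(zip(labels, feature))
-    p_label = Counter(labels)
-    p_feat = Counter(feature)
-    return sum(
-        (c / n) * math.log2((c / n) / ((p_label[l] / n) * (p_feat[f] / n)))
-        for (l, f), c in joint.items()
-    )
-
-natural = [0, 0, 0, 0, 1, 1, 1, 1]        # category boundary tracks the cluster
-gerrymandered = [0, 1, 0, 1, 0, 1, 0, 1]  # same category sizes, ignoring structure
-
-print(mutual_information(natural, feature))        # 1.0 bit: the category predicts the feature
-print(mutual_information(gerrymandered, feature))  # 0.0 bits: the category tells you nothing
-```
-
-Both definitions are "just words" in the map, but only one of them pays rent in predictions about the territory.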
-
-So, because at this point I still trusted people in my robot cult to be intellectually honest rather than fucking with me because of their political incentives, I took the bait. When I quit my dayjob in order to have more time to study and work on this blog, the capstone of my sabbatical was an exhaustive response to Alexander, ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (which Alexander [graciously included in his next linkpost](https://archive.ph/irpfd#selection-1625.53-1629.55)). A few months later (having started a new dayjob), I followed it up with ["Reply to _The Unit of Caring_ on Adult Human Females"](/2018/Apr/reply-to-the-unit-of-caring-on-adult-human-females/), responding to a similar argument. I'm proud of those posts: I think Alexander's and _Unit of Caring_'s arguments were incredibly dumb, and with a lot of effort, I think I did a pretty good job of explaining exactly why.
-
-At this point, I was certainly _disappointed_ with my impact, but not to the point of bearing much hostility to "the community". People had made their arguments, and I had made mine; I didn't think I was _entitled_ to anything more than that.
-
-[TODO: I was at the company offsite browsing Twitter (which I had recently joined with fantasies of self-cancelling) when I saw the "Hill of Validity in Defense of Meaning", and I _flipped the fuck out_—exhaustive breakdown of exactly what's wrong; I trusted Yudkowsky and I _did_ think I was entitled to more]
-
-[TODO: getting support from Michael + Ben + Sarah, harassing Scott and Eliezer]