—[repeatedly](https://twitter.com/ESYudkowsky/status/1067198993485058048):
-> You're mistaken about what the word means to you, I demonstrate thus: https://en.wikipedia.org/wiki/XX_male_syndrome
+> You're mistaken about what the word means to you, I demonstrate thus: [https://en.wikipedia.org/wiki/XX_male_syndrome](https://en.wikipedia.org/wiki/XX_male_syndrome)
>
> But even ignoring that, you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning.
If Yudkowsky was _already_ stonewalling his Twitter followers, entering the thread myself didn't seem likely to help. (Also, I hadn't intended to talk about gender on that account yet, although that seemed relatively unimportant in light of the present cause for flipping out.)
-It seemed better to try to clear this up in private. I still had Yudkowsky's email address. I felt bad bidding for his attention over my gender thing _again_—but I had to do _something_. Hands trembling, I sent him an email asking him to read my ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), suggesting that it may qualify as an answer to [his question about "a page [he] could read to find a non-confused exclamation of how there's scientific truth at stake"](https://twitter.com/ESYudkowsky/status/1067482047126495232)—and that, because I cared very much about correcting what I claimed were confusions in my rationalist subculture, that I would be happy to pay up to $1000 for his time—and that, if he liked the post, he might consider Tweeting a link—and that I was cc'ing my friends Anna Salamon and Michael Vassar as a character reference (Subject: "another offer, $1000 to read a ~6500 word blog post about (was: Re: Happy Price offer for a 2 hour conversation)"). Then I texted Anna and Michael begging them to chime in and vouch for my credibility.
+It seemed better to try to clear this up in private. I still had Yudkowsky's email address, last used when [I had offered to pay to talk about his theory of MtF two years before](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price). I felt bad bidding for his attention over my gender thing _again_—but I had to do _something_. Hands trembling, I sent him an email asking him to read my ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), suggesting that it might qualify as an answer to [his question about "a page [he] could read to find a non-confused exclamation of how there's scientific truth at stake"](https://twitter.com/ESYudkowsky/status/1067482047126495232)—and that, because I cared very much about correcting what I claimed were confusions in my rationalist subculture, I would be happy to pay up to $1000 for his time—and that, if he liked the post, he might consider Tweeting a link—and that I was cc'ing my friends Anna Salamon and Michael Vassar as character references (Subject: "another offer, $1000 to read a ~6500 word blog post about (was: Re: Happy Price offer for a 2 hour conversation)"). Then I texted Anna and Michael begging them to chime in and vouch for my credibility.
The monetary offer, admittedly, was awkward: I included another paragraph clarifying that any payment was only to get his attention, and not _quid pro quo_ advertising, and that if he [didn't trust his brain circuitry](https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans) not to be corrupted by money, then he might want to reject the offer on those grounds and only read the post if he expected it to be genuinely interesting.
-Again, I realize this must seem weird and cultish to any normal people reading this. (Paying some blogger you follow one grand just to _read_ one of your posts? What? Why? Who _does_ that?) To this, I again refer to [the reasons justifying my 2016 cheerful price offer](/2023/Jun/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price-reasons)—and that, along with tagging in Anna and Michael, who I thought Yudkowsky respected, it was a way to signal that I _really really really didn't want to be ignored_, which I assumed was the default outcome. An ordinary programmer such as me was as a mere _worm_ in the presence of the great Eliezer Yudkowsky. I wouldn't have had the audacity to contact him at _all_, about _anything_, if I didn't have [Something to Protect](https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect).
+Again, I realize this must seem weird and cultish to any normal people reading this. (Paying some blogger you follow one grand just to _read_ one of your posts? What? Why? Who _does_ that?) To this, I again refer to [the reasons justifying my 2016 cheerful price offer](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#cheerful-price-reasons)—and note that, along with tagging in Anna and Michael, whom I thought Yudkowsky respected, it was a way to signal that I _really really really didn't want to be ignored_, which I assumed was the default outcome. An ordinary programmer such as me was as a mere _worm_ in the presence of the great Eliezer Yudkowsky. I wouldn't have had the audacity to contact him at _all_, about _anything_, if I didn't have [Something to Protect](https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect).
Anna didn't reply, but I apparently did interest Michael, who chimed in on the email thread to Yudkowsky. We had a long phone conversation the next day lamenting how the "rationalists" were dead as an intellectual community.
------
-One thing I regret about my behavior during this period was the extent to which I was emotionally dependent on my posse, and in some ways particularly Michael, for validation. I remembered Michael as a high-status community elder back in the _Overcoming Bias_ era (to the extent that there was a "community" in those early days). I had been skeptical of him, then: the guy makes a lot of stridently "out there" assertions by the standards of ordinary social reality, in a way that makes you assume he must be speaking metaphorically. (He always insists that he's being completely literal.) But he had social proof as the President of the Singularity Institute—the "people person" of our world-saving effort, to complement Yudkowsky's anti-social mad scientist personality—which inclined me to take his "crazy"-sounding assertions more charitably than I otherwise would have.
+One thing I regret about my behavior during this period was the extent to which I was emotionally dependent on my posse, and in some ways particularly Michael, for validation. I remembered Michael as a high-status community elder back in the _Overcoming Bias_ era (to the extent that there was a "community" in those early days).[^overcoming-bias] I had been skeptical of him, then: the guy makes a lot of stridently "out there" assertions by the standards of ordinary social reality, in a way that makes you assume he must be speaking metaphorically. (He always insists that he's being completely literal.) But he had social proof as the President of the Singularity Institute—the "people person" of our world-saving effort, to complement Yudkowsky's anti-social mad scientist personality—which inclined me to take his "crazy"-sounding assertions more charitably than I otherwise would have.
+
+[^overcoming-bias]: Yudkowsky's Sequences (except the [last](https://www.lesswrong.com/s/pvim9PZJ6qHRTMqD3)) had originally been published on [_Overcoming Bias_](https://www.overcomingbias.com/) before the creation of _Less Wrong_ in early 2009.
Now, the memory of that social proof was a lifeline. Dear reader, if you've never been in the position of disagreeing with the entire weight of Society's educated opinion, _including_ your idiosyncratic subculture that tells itself a story about being smarter and more open-minded than the surrounding Society—well, it's stressful. [There was a comment on the /r/slatestarcodex subreddit around this time](https://old.reddit.com/r/slatestarcodex/comments/anvwr8/experts_in_any_given_field_how_would_you_say_the/eg1ga9a/) that cited Yudkowsky, Alexander, Piper, Ozy Frantz, and Rob Bensinger as leaders of the "rationalist" community—just an arbitrary Reddit comment of no significance whatsoever—but it was a salient indicator of the _Zeitgeist_ to me, because _[every](https://twitter.com/ESYudkowsky/status/1067183500216811521) [single](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) [one](https://thingofthings.wordpress.com/2018/06/18/man-should-allocate-some-more-categories/) of [those](https://theunitofcaring.tumblr.com/post/171986501376/your-post-on-definition-of-gender-and-woman-and) [people](/images/bensinger-doesnt_unambiguously_refer_to_the_thing.png)_ had tried to get away with some variant on the "word usage is subjective, therefore you have no grounds to object to the claim that trans women are women" _mind game_.
I could see how, under ordinary circumstances, asking Yudkowsky to weigh in on my post would be inappropriately demanding of a Very Important Person's time, given that an ordinary programmer such as me was surely as a mere _worm_ in the presence of the great Eliezer Yudkowsky. (I would have humbly given up much sooner if I hadn't gotten social proof from Michael and Ben and Sarah and "Riley" and Jessica.)
-But the only reason for my post to exist was because it would be even _more_ inappropriately demanding to ask for a clarification in the original gender-political context. The game theorist Thomas Schelling once wrote about the use of clever excuses to help one's negotiating counterparty release themselves from a prior commitment: "One must seek [...] a rationalization by which to deny oneself too great a reward from the opponent's concession, otherwise the concession will not be made."[^schelling] This is sort of what I was trying to do when soliciting—begging for—engagement-or-endorsement of "... Boundaries?" By making the post be about dolphins, I was trying to deny myself too great of a reward _on the gender-politics front_. I _don't_ think it was inappropriately demanding to expect "us" (him) to be correct _about the cognitive function of categorization_. (If not, why pretend to have a "rationality community" at all?) I was _trying_ to be as accomodating as I could, short of just letting him (us?) be wrong.
+But the only reason for my post to exist was that it would be even _more_ inappropriately demanding to ask for a clarification in the original gender-political context. The economist Thomas Schelling (of "Schelling point" fame) once wrote about the use of clever excuses to help one's negotiating counterparty release themselves from a prior commitment: "One must seek [...] a rationalization by which to deny oneself too great a reward from the opponent's concession, otherwise the concession will not be made."[^schelling] This is what I was trying to do when soliciting—begging for—engagement-or-endorsement of "... Boundaries?" By making the post about dolphins, I was trying to deny myself too great a reward _on the gender-politics front_. I _don't_ think it was inappropriately demanding to expect "us" (him) to be correct _about the cognitive function of categorization_. (If not, why pretend to have a "rationality community" at all?) I was _trying_ to be as accommodating as I could, short of just letting him (us?) be wrong.
+
+I would have expected him to see why we had to make a stand _here_, where the principles of reasoning that made it possible for words to be assigned interpretations at all were under threat.
+
+A hill of validity in defense of meaning.
[^schelling]: _The Strategy of Conflict_, Ch. 2, "An Essay on Bargaining"
Maybe that's not how politics works? Could it be that, somehow, the mob-punishment mechanisms that weren't smart enough to understand the concept of "bad argument (categories are arbitrary) for a true conclusion (trans people are OK)" _were_ smart enough to connect the dots between my broader agenda and my (correct) abstract philosophy argument, such that VIPs didn't think they could endorse my correct philosophy argument without it being _construed as_ an endorsement of me and my detailed heresies?
-Jessica mentioned talking with someone about me writing to Yudkowsky and Alexander requesting that they clarify the category boundary thing. This person described having a sense that I should have known that that wouldn't work—because of the politics involved, not because I wasn't right. I thought Jessica's takeaway was very poignant:
+Jessica mentioned talking with someone about me writing to Yudkowsky and Alexander requesting that they clarify the category boundary issue. This person described having a sense that I should have known that that wouldn't work—because of the politics involved, not because I wasn't right. I thought Jessica's takeaway was very poignant:
> Those who are savvy in high-corruption equilibria maintain the delusion that high corruption is common knowledge, to justify expropriating those who naively don't play along, by narratizing them as already knowing and therefore intentionally attacking people, rather than being lied to and confused.
But (as I told the LCSW) I would _know_ that I was cherry-picking. HSTS-taxon boys are identified as effeminate _by others_. [You know it when you see it, even when you're ideologically prohibited from _knowing_ that you know.](/2022/May/gaydar-jamming/) That's—not me. I [don't even _want_ that to be me](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#if-i-have-to-choose). I definitely have a gender _thing_, but I have a pretty detailed model of what I think the thing actually is in the real physical universe, and my model doesn't _fit_ in the ever-so-compassionate and -equitable ontology of "gender identity", which presupposes that what's going on when I report _wishing_ I were female is the _same thing_ as what's going on with actual women who (objectively correctly) report being female. I don't think it's the same thing, and I think you'd have to be [crazy or a liar](https://www.lesswrong.com/posts/y4bkJTtG3s5d6v36k/stupidity-and-dishonesty-explain-each-other-away) to say it's plausibly the same thing.
+I could sympathize with patients in an earlier era of trans healthcare who felt that they had no choice but to lie—to conform to the doctors' conception of a "true transsexual" on pain of being denied treatment. This was not the situation I saw on the ground in the Bay Area of 2016.
+
+If a twentieth-century stalemate of patients lying to skeptical doctors had congealed into a culture of scripted conformity, why had it persisted long after the doctors stopped being skeptical and the lies served no remaining purpose? Why couldn't everyone just snap out of it?
+
--------
Another consequence of my Blanchardian enlightenment around this time was my break with progressive morality. I had never _really_ been progressive, as such. (I was registered to vote as a Libertarian, the legacy of a teenage dalliance with Ayn Rand and the [greater](https://web.archive.org/web/20070531085902/http://www.reason.com/blog/) [libertarian](https://praxeology.net/unblog07-06.htm) [blogosphere](https://cafehayek.com/).) But there was still an embedded assumption, reflected in [my antisexist faith](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism), that, as far as America's culture wars went, I was unambiguously on the right (_i.e._, left) side of history, [the Blue Team and not the Red Team](http://zackmdavis.net/blog/2017/03/brand-rust/).
While I was in this flurry of excitement about my recent updates and the insanity around me, I thought back to that Yudkowsky post from back in March that had been my wake-up call to all this. ("I think I'm over 50% probability at this point that at least 20% of the ones with penises are actually women"!!) What _was_ going on with that?
-<a id="cheerful-price"></a>I wasn't _friends_ with Yudkowsky, obviously; I didn't have a natural social affordance to _just_ ask him the way you would ask a dayjob or college acquaintance something. But ... he _had_ posted about how he was willing to accept money to do things he otherwise wouldn't in exchange for enough money to feel happy about the trade—a Happy Price, or [Cheerful Price, as the custom was later termed](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price)—and his [schedule of happy prices](https://www.facebook.com/yudkowsky/posts/10153956696609228) listed $1,000 as the price for a 2 hour conversation. I had his email address from previous contract work I had done for MIRI back in 'twelve, so on 29 September 2016, I wrote him offering $1,000 to talk about what kind of _massive_ update he made on the topics of human psychological sex differences and MtF transsexuality sometime between [January 2009](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) and [March of the current year](https://www.facebook.com/yudkowsky/posts/10154078468809228), mentioning that I had been "feeling baffled and disappointed (although I shouldn't be) that the rationality community is getting this _really easy_ scientific question wrong."
+<a id="cheerful-price"></a>I wasn't _friends_ with Yudkowsky, obviously; I didn't have a natural social affordance to _just_ ask him the way you would ask a dayjob or college acquaintance something. But ... he _had_ posted about how he was willing to do things he otherwise wouldn't in exchange for enough money to feel happy about the trade—a Happy Price, or [Cheerful Price, as the custom was later termed](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price)—and his [schedule of happy prices](https://www.facebook.com/yudkowsky/posts/10153956696609228) listed $1,000 as the price for a 2 hour conversation. I had his email address from previous contract work I had done for MIRI back in 'twelve, so on 29 September 2016, I wrote him offering $1,000 to talk about what kind of _massive_ update he made on the topics of human psychological sex differences and MtF transsexuality sometime between [January 2009](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions) and [March of the current year](https://www.facebook.com/yudkowsky/posts/10154078468809228), mentioning that I had been "feeling baffled and disappointed (although I shouldn't be) that the rationality community is getting this _really easy_ scientific question wrong" (Subject: "Happy Price offer for a 2 hour conversation").
<a id="cheerful-price-reasons"></a>At this point, any _normal people_ who are (somehow?) reading this might be thinking, isn't that weird and kind of cultish? Some blogger you follow posted something you thought was strange earlier this year, and you want to pay him _one grand_ to talk about it? To the normal person I would explain thusly—
near editing tier—
✓ scan through "far editing tier" list and scoot anything applicable to the near editing tier
-_ scan through pt. 1½ and extract highlights to include in pt. 1–6
✓ explain that "A Human's Guide to Words" / 37 Ways are the same text
-_ explain the title
-_ Caliphate / craft and the community
-_ happy price subject line in pt. 1
-_ incentives of gatekeeping and system-mandated lies
-_ conflict theory of transness!!
-_ explain back in the _Overcoming Bias_ era
-_ smooth out "still had his email" callback
+✓ explain the title
+✓ happy price subject line in pt. 1
+✓ incentives of gatekeeping and system-mandated lies
+✓ explain back in the _Overcoming Bias_ era
+✓ smooth out "still had his email" callback
+
+_ scan through pt. 1½ and extract highlights to include in pt. 1–6
+
_ explain Michael's gaslighting charge, using the "bowels of Christ" language
_ "Margaret" discussion needs to cover the part where I'd cause less disruption if I transitioned
_ so embarrassed after the Singularity
_ mention my "trembling hand" history with secrets, not just that I don't like it
_ Eric Weinstein, who was not making this mistake
-_ scan through "People" for any more details I want to incorporate later
_ mention "special simulator attention" in cxn with https://en.wikipedia.org/wiki/Ideas_of_reference_and_delusions_of_reference
_ pt. 1 end needs to mention BABSCon (back referenced)
-_ more self-awareness about the ethical
+_ clarify that I'm not a sex work prohibitionist
_ born five years too early
_ you didn't do a good job explaining to me why someone would think that being bullied into selective silence doesn't constitute a conflict strategy
_ mention HRT pause / stop in pt. 1 end
people to consult specifically before pt. 1–3:
sent, need approval for pt. 1–2—
-_ hostile prereader, maybe?? (first-choice: April)
-- professional editor
✓ Tail (AGP discussion)
✓ Anna
✓ Ben/Jessica (Michael)
✓ "Riley"
✓ Scott
+- professional editor
- "Thomas"
- "Rebecca"
- Sarah
_ Yudkowsky's LW moderation policy
far editing tier—
+_ Caliphate / craft and the community
_ colony ship happiness lie in https://www.lesswrong.com/posts/AWaJvBMb9HGBwtNqd/qualitative-strategies-of-friendliness
_ re being fair to abusers: thinking about what behavior pattern you would like to see, generally, by people in your situation, instead of privileging the perspective and feelings of people who've made themselves vulnerable to you by transgressing against you
_ body odors comment