> If we can't even get a public consensus from our _de facto_ leadership on something _so basic_ as "concepts need to carve reality at the joints in order to make probabilistic predictions about reality", then, in my view, there's _no point in pretending to have a rationalist community_, and I need to leave and go find something else to do (perhaps whatever Michael's newest scheme turns out to be). I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?
And as it happened, on 4 May 2019, Yudkowsky [re-Tweeted Colin Wright on the "univariate fallacy"](https://twitter.com/ESYudkowsky/status/1124751630937681922)—the point that group differences aren't a matter of any single variable—which was _sort of_ like the clarification I had been asking for. (Empirically, it made me feel less aggrieved.) Was I wrong to interpret this as [another "concession" to me](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#proton-concession)? (Again, notwithstanding that the whole mindset of extracting "concessions" was corrupt and not what our posse was trying to do.)
Separately, one evening in April, I visited the house where "Meredith" and her husband Mike and Kelsey Piper and some other people lived, which I'll call "Arcadia".[^named-houses] I said, essentially, [Oh man oh jeez](https://www.youtube.com/watch?v=q_eMvgNrQQE), Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're _the only ones backing me up_ on this _incredibly basic philosophy thing_ and I don't feel like I have anywhere else to _go_. This culminated in a group conversation with the entire house, which I found unsettling. (Unfortunately, I didn't take notes and don't remember the details except that I had a sense of everyone else seeming to agree on things that I thought were clearly contrary to the spirit of the Sequences.)
I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that whether or not I should cut my dick off would _become_ a political issue. That was new evidence about whether the original vision was wise! I wasn't particularly trying to do politics with my idiosyncratic special interest; I was trying to think seriously about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit a trivial epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, but couldn't correct the mistake even after it was pointed out, then the "rationalists" were _worse than useless_ to me. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I was part of a reference class that included AI researchers).
Fundamentally, I was skeptical that you _could_ do consistently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky had described in ["Entangled Truths, Contagious Lies"](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) and ["Dark Side Epistemology"](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology): the need to lie about lying and cover up cover-ups propagates recursively. Anna in particular was unusually skillful at thinking things without saying them; I thought most people facing similar speech restrictions just get worse at thinking (plausibly[^plausibly] including Yudkowsky), and the problem gets worse as the group effort scales. (It's less risky to recommend ["What You Can't Say"](http://www.paulgraham.com/say.html) to your housemates than to put it on your 501(c)(3) organization's canonical reading list.) You can't optimize your group's culture for not-talking-about-atheism without also optimizing against understanding [Occam's razor](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor); you can't optimize for not questioning gender self-identity without also optimizing against understanding the [37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).
[^plausibly]: I was still deep enough in my hero-worship that I wrote "plausibly". Today, I would not consider the adverb necessary.
I said, I didn't know if either of them had caught the "Yes Requires the Possibility" trainwreck, but wasn't it terrifying that the person who objected to my innocuous philosophy comment was a goddamned _MIRI research associate_? Not to demonize Kosoy, because [I was just as bad (if not worse) in 2008](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#hair-trigger-antisexism). The difference was that in 2008, we had a culture that could beat it out of me.
Steven objected that tractability and side effects matter, not just effect on the mission considered in isolation. For example, the Earth's gravitational field directly impedes NASA's mission, and doesn't hurt Paul Graham, but both NASA and Paul Graham should spend the same amount of effort trying to reduce the Earth's gravity (_viz._, zero).
I agreed that tractability needs to be addressed, but the situation felt analogous to being in a coal mine in which my favorite one of our canaries had just died. Caliphate officials (Yudkowsky, Alexander, Anna) and loyalists (Steven) were patronizingly consoling me: sorry, I know you were really attached to that canary, but it's just a bird; it's not critical to the coal-mining mission. I agreed that I was unreasonably attached to that particular bird, which was the direct cause of why I-in-particular was freaking out, but that's not why I expected _them_ to care. The problem was what the dead canary was evidence of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claimed to have spotted their own dead canaries. I felt like the old-timer Rationality Elders should have been able to get on the same page about the canary-count issue?
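The fork structure can be sketched in a few lines of Python (a toy model with made-up probabilities, purely my own illustration): a dead canary is strong _evidence_ of danger, even though protecting the canary does nothing to the danger, because the correlation runs through the common cause rather than a direct link.

```python
import random

random.seed(0)

def trial(protect_canary=False):
    gas = random.random() < 0.3               # common cause: mine-gas
    canary_dies = gas and not protect_canary  # canary-death ← mine-gas
    danger = gas                              # mine-gas → human-danger
    return canary_dies, danger

# Observationally, a dead canary indicates danger (the fork correlation):
samples = [trial() for _ in range(10_000)]
assert all(d for c, d in samples if c)  # in this toy model, dead canary ⇒ gas ⇒ danger

# But intervening on the canary doesn't change the danger rate,
# because there is no direct link canary-death → human-danger:
base = sum(d for _, d in [trial() for _ in range(10_000)]) / 10_000
protected = sum(d for _, d in [trial(True) for _ in range(10_000)]) / 10_000
print(round(base, 2), round(protected, 2))  # roughly equal
```

Which is exactly why consoling me about the bird missed the point: the question was never whether the bird itself was load-bearing.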
When I mentioned this to Michael on Signal on 3 August 2019, he replied:
> The person is me, the whole process is a hit piece, literally, the investigation process and not the content. Happy to share the latter with you. You can talk with Ben about appropriate ethical standards.
In retrospect, I feel dumb for not guessing that Michael's mysterious ally was Michael himself. I count this kind of situation as another reason to be [annoyed at how norms protecting confidentiality](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#privacy-constraints) distort information; Kelsey apparently felt obligated to obfuscate any names connected to potential litigation, which led me to infer the existence of a nonexistent person (because I naïvely assumed that if Michael had been the person who threatened to sue, Kelsey would have said that). I can't say I never introduce this kind of distortion myself (for I, too, am bound by norms), but when I do, I feel dirty about it.
As far as appropriate ethical standards go, I didn't approve of silencing critics with lawsuit threats, even while I agreed with Michael that "the process is the punishment." I imagine that if the REACH wanted to publish a report about me, I would expect to defend myself in public, having faith that the [beautiful weapon](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/) of my Speech would carry the day against a corrupt community center—or for that matter, against /r/SneerClub.
To be fair, the same comment I quoted also lists "Being able to consider and optimize literary qualities" as one of the major considerations to be balanced. But I think (_I_ think) it's also fair to note that (as we had seen on _Less Wrong_ earlier that year), lip service is cheap. It's easy to say, "Of course I don't think politeness is more important than truth," while systematically behaving as if you did.
"Broadcast criticism is adversely selected for critic errors," Yudkowsky wrote in the post on reducing negativity, correctly pointing out that if a work's true level of mistakenness is _M_, the _i_-th commenter's estimate of mistakenness has an error term of _E<sub>i</sub>_, and commenters leave a negative comment when their estimate _M_ + _E<sub>i</sub>_ is greater than their threshold for commenting _T<sub>i</sub>_, then the comments that get posted will have been selected for erroneous criticism (high _E<sub>i</sub>_) and commenter chattiness (low _T<sub>i</sub>_).
I can imagine some young person who liked _Harry Potter and the Methods_ being intimidated by the math notation, and uncritically accepting this wisdom from the great Eliezer Yudkowsky as a reason to be less critical, specifically. But a somewhat less young person who isn't intimidated by math should notice that this is just [regression to the mean](https://en.wikipedia.org/wiki/Regression_toward_the_mean). The same argument applies to praise!
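The selection effect (and its symmetry) is easy to check with a toy simulation (my own illustration; the distributions and thresholds are made-up numbers, not anything from Yudkowsky's post): the population of error terms is unbiased, but the errors of commenters who clear their thresholds are not.

```python
import random
import statistics

random.seed(0)
M = 5.0  # true mistakenness of the work (arbitrary scale)

errors, posted_errors = [], []
for _ in range(100_000):
    E = random.gauss(0, 2)  # commenter's estimation error (unbiased)
    T = random.gauss(7, 2)  # commenter's threshold for bothering to criticize
    errors.append(E)
    if M + E > T:           # criticism is posted only when estimate exceeds threshold
        posted_errors.append(E)

# Errors are unbiased overall, but posted criticism is selected for high E_i:
print(round(statistics.mean(errors), 2))
print(round(statistics.mean(posted_errors), 2))
```

The first mean comes out near zero; the second comes out clearly positive. Flip the inequality (commenters praise when _M_ + _E<sub>i</sub>_ is _below_ some threshold) and the same logic selects posted praise for _negative_ error, which is the regression-to-the-mean point: the filter tells you about the filter, not just about the work.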
-----
On 3 November 2019, I received an interesting reply on my philosophy-of-categorization thesis from MIRI researcher Abram Demski. Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences? Agents categorize in order to make decisions. The best classifier for an application depends on the costs and benefits. As a classic example, it's important for evolved prey animals to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator.
I had thought of the "false-positives are better than false-negatives when detecting predators" example as being about the limitations of evolution as an AI designer: messy evolved animal brains don't bother to track probability and utility separately the way a cleanly-designed AI could. As I had explained in "... Boundaries?", it made sense for what variables you paid attention to, to be motivated by consequences. But given the subspace that's relevant to your interests, you want to run an "epistemically legitimate" clustering algorithm on the data you see there, which depends on the data, not your values. The only reason value-dependent gerrymandered category boundaries seem like a good idea if you're not careful about philosophy is because it's _wireheading_. Ideal probabilistic beliefs shouldn't depend on consequences.
-----
Also in November 2019, I wrote to Ben about how I was still stuck on writing the grief-memoir. My plan had been that it should have been possible to tell the story of the Category War while Glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly call them intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water) and not be a mentally-dominated cultist.
The reason it _should_ have been safe to write was because it's good to explain things. It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to carefully tell the true story about why, as a matter of cause-and-effect, I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, _&c_."
-----
On 12 and 13 November 2019, Ziz [published](https://archive.ph/GQOeg) [several](https://archive.ph/6HsvS) [blog](https://archive.ph/jChxP) [posts](https://archive.ph/TPei9) laying out [her](/2019/Oct/self-identity-is-a-schelling-point/) grievances against MIRI and CfAR. On the fifteenth, Ziz and three collaborators staged a protest at the CfAR reunion being held at a retreat center in the North Bay near Camp Meeker. A call to the police falsely alleged that the protesters had a gun, [resulting in a](http://web.archive.org/web/20230316210946/https://www.pressdemocrat.com/article/news/deputies-working-to-identify-suspects-in-camp-meeker-incident/) [dramatic police reaction](http://web.archive.org/web/20201112041007/https://www.pressdemocrat.com/article/news/authorities-id-four-arrested-in-westminster-woods-protest/) (SWAT team called, highway closure, children's group a mile away being evacuated—the works).
I was tempted to email links to the blog posts to the Santa Rosa _Press-Democrat_ reporter covering the incident (as part of my information-sharing-is-good virtue ethics), but decided to refrain because I predicted that Anna would prefer I didn't.
[^defensive]: Criticism is "defensive" in the sense of trying to _prevent_ new beliefs from being added to our shared map; a critic of an idea "wins" when the idea is not accepted (such that the set of accepted beliefs remains at the _status quo ante_).
More significantly, in reaction to Yudkowsky's ["Meta-Honesty: Firming Up Honesty Around Its Edge Cases"](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases), I published ["Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think"](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly),[^not-lying-title] explaining why merely refraining from making false statements is an unproductively narrow sense of "honesty", because the ambiguity of natural language makes it easy to deceive people in practice without technically lying.
[^not-lying-title]: The ungainly title was "softened" from an earlier draft following feedback from the posse; I had originally written "... Surprisingly Useless".
On 14 December 2019, I wrote to Jessica and Jack Gallagher, another disaffected ex-MIRI researcher, asking how we should organize this. (Jessica and Jack had relevant testimony about working at MIRI, which would be of more central interest to "Ethan" than my story about how the "rationalists" had lost their way.) Michael also mentioned "Tabitha", a lawyer who had been in the MIRI orbit for a long time, as another person to talk to.
About a week later, I apologized, saying that I wanted to postpone setting up the meeting, partially because I was on a roll with my productive blogging spree, and partially for a psychological reason: I was feeling subjective pressure to appease Michael by doing the thing that he explicitly suggested because of my loyalty to him, but that would be wrong, because Michael's ideology said that people should follow their sense of opportunity rather than obeying orders. I might feel motivated to reach out to "Ethan" and "Tabitha" in January.
Michael said that this implied that my sense of opportunity was driven by politics, and that I believed that simple honesty couldn't work; he only wanted me to acknowledge that. I was not inclined to affirm that characterization; it seemed like any conversation with "Ethan" and "Tabitha" would be partially optimized to move money, which I thought was politics.
Jessica pointed out that "it moves money, so it's political" was erasing the non-zero-sum details of the situation. If people can make better decisions (including monetary ones) with more information, then informing them was pro-social. If there wasn't any better decisionmaking from information to be had, and all speech was just a matter of exerting social pressure in favor of one donation target over another, then that would be politics.
I agreed that my initial "it moves money so it's political" intuition was wrong. But I didn't think I knew how to inform people about giving decisions in an honest and timely way, because the arguments [written above the bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line) were an entire traumatic worldview shift. You couldn't just say "CfAR is fraudulent, don't give to them" without explaining things like ["bad faith is a disposition, not a feeling"](http://benjaminrosshoffman.com/bad-faith-behavior-not-feeling/) as prerequisites. I felt more comfortable trying to share the worldview update in January even if it meant the December decision would be wrong, because I didn't know how to affect the December decision in a way that didn't require someone to trust my judgment.
Michael wrote:
I appreciated the gesture of getting real data, but I was deeply unimpressed with Alexander's analysis for reasons that I found difficult to write up in a timely manner. Three and a half years later, I eventually got around to [polishing my draft and throwing it up as a standalone post](/2023/Nov/reply-to-scott-alexander-on-autogenderphilia/).
Briefly, based on eyeballing the survey data, Alexander proposes "if you identify as a gender, and you're attracted to that gender, it's a natural leap to be attracted to yourself being that gender" as a "very boring" theory, but on my worldview, a hypothesis that puts "gay people (cis and trans)" in the antecedent is not boring and takes on a big complexity penalty: I don't think the group of gay men _and_ lesbians _and_ straight males with female gender identities _and_ straight females with male gender identities have much in common with each other, except sociologically (being "queer"), and by being human.
(I do like the [hypernym](https://en.wikipedia.org/wiki/Hyponymy_and_hypernymy) _autogenderphilia_.)
Crucially, if innate gender identity isn't a feature of toddler psychology, _the child has no way to know anything is "wrong."_ If none of the grown-ups can say, "You're a boy because boys are the ones with penises" (because that's not what people are supposed to believe in the current year), how is the child supposed to figure that out independently? [Toddlers are not very sexually dimorphic](/2019/Jan/the-dialectic/), but large sex differences in play style and social behavior tend to emerge within a few years. (There were no cars in the environment of evolutionary adaptedness, and yet [the effect size of the sex difference in preference for toy vehicles is a massive _d_ ≈ 2.44](/papers/davis-hines-how_large_are_gender_differences_in_toy_preferences.pdf), about one and a half times the size of the sex difference in adult height.)
What happens when the kid develops a self-identity as "a girl", only to find out, potentially years later, that she noticeably doesn't fit in with the (cis) girls on the [many occasions that no one has explicitly spelled out in advance](/2019/Dec/more-schelling/) where people are using "gender" (perceived sex) to make a prediction or decision?
Some might protest, "But what's the harm? She can always change her mind later if she decides she's actually a boy." I don't doubt that if the child were to clearly and distinctly insist, "I'm definitely a boy," the nice smart liberal grown-ups would unhesitatingly accept that.
Suppose that, around the time of the social transition, the child reportedly responded to "hey kiddo, I love you" with, "I'm a girl and I'm a vegetarian." In the skeptic's view, both halves of that sentence were probably generated by the same cognitive algorithm—probably something like, practice language and be cute to caregivers, making use of themes from the local cultural environment (where grown-ups in Berkeley talk a lot about gender and animal welfare). If you're not going to change the kid's diet on the basis of the second part, you shouldn't social transition the kid on the basis of the first part.
It's not hard to imagine how differential treatment by grown-ups of gender-related utterances could unintentionally shape outcomes. This may be clearer if we imagine a non-gender-related case. Suppose the child's father's name is Kevin Smith, and that after a grown-up explains ["Sr."/"Jr." generational suffixes](https://en.wikipedia.org/wiki/Suffix_(name)#Generational_titles) after it [happened to come up in fiction](https://wreckitralph.fandom.com/wiki/Fix-It_Felix,_Jr._(character)), the child declares that his name is Kevin Smith, Jr. now. Caregivers are likely to treat this as just a cute thing that the kid said, quickly forgotten by all. But if caregivers feared causing psychological harm by denying a declared name change, one could imagine them taking the child's statement as a prompt to ask followup questions. ("Oh, would you like me to call you _Kevin_ or _Kev Jr._, or just _Junior_?") With enough followup, it seems entirely plausible that a name change to "Kevin Jr." would meet with the child's assent and "stick" socially. The initial suggestion would have come from the child, but most of the [optimization](https://www.lesswrong.com/posts/D7EcMhL26zFNbJ3ED/optimization)—the selection that this particular one of the child's many statements should be taken literally and reinforced as a social identity, while others are just treated as a cute thing the kid said—would have come from the adults.
Finally, there is the matter of the child's behavior and personality. For example, around the same time that the child's social transition was going down, the father reported the child being captivated by seeing a forklift at Costco. A few months later, another family friend remarked that maybe the child is very competitive, and that "she likes fighting so much because it's the main thing she knows of that you can _win_".
But if you do have the math, a moment of introspection will convince you that the analogy between category "boundaries" and national borders is shallow.
A two-dimensional political map tells you which areas of the Earth's surface are under the jurisdiction of which government. In contrast, category "boundaries" tell you which regions of very high-dimensional configuration space correspond to a word/concept, which is useful _because_ that structure is useful for making probabilistic inferences: you can use your observations of some aspects of an entity (some of the coordinates of a point in configuration space) to infer category-membership, and then use category membership to make predictions about aspects that you haven't yet observed.
But the trick only works to the extent that the category is a regular, non-squiggly region of configuration space: if you know that egg-shaped objects tend to be blue, and you see a black-and-white photo of an egg-shaped object, you can get close to picking out its color on a color wheel. But if egg-shaped objects tend to be blue _or_ green _or_ red _or_ gray, you wouldn't know where to point to on the color wheel.
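The point can be made concrete with a toy calculation (all the names and numbers here are invented for illustration, not taken from any real dataset): if a category occupies a compact region of configuration space, then membership alone yields a tight prediction of an unobserved coordinate; if the category is a squiggly union of unrelated regions, the "prediction" is nearly useless.

```python
# Toy sketch: category membership as a predictive tool.
# Entities are points (shape_roundness, hue_degrees) in a 2D configuration space.
from statistics import mean, stdev

# A compact category: members cluster tightly in hue.
compact_category = [(0.90, 210), (0.85, 215), (0.92, 205), (0.88, 212)]

# A "squiggly" category: same shapes, but hues scattered over the wheel.
squiggly_category = [(0.90, 210), (0.85, 120), (0.92, 0), (0.88, 300)]

def hue_prediction(members):
    """Predict an unobserved hue from category membership alone:
    the mean member hue, with standard deviation as the uncertainty."""
    hues = [hue for _shape, hue in members]
    return mean(hues), stdev(hues)

mu_c, sigma_c = hue_prediction(compact_category)   # small sigma: informative
mu_s, sigma_s = hue_prediction(squiggly_category)  # huge sigma: uninformative
```

For the compact category the standard deviation is a few degrees on the color wheel, so "it's egg-shaped, therefore roughly hue 210" is a real inference; for the squiggly one the spread covers most of the wheel, and membership tells you almost nothing about color.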
I decided on "Unnatural Categories Are Optimized for Deception" as the title for my advanced categorization thesis. Writing it up was a major undertaking. There were a lot of nuances to address and potential objections to preëmpt, and I felt that I had to cover everything. (A reasonable person who wanted to understand the main ideas wouldn't need so much detail, but I wasn't up against reasonable people who wanted to understand.)
In September 2020, Yudkowsky Tweeted [something about social media incentives prompting people to make nonsense arguments](https://twitter.com/ESYudkowsky/status/1304824253015945216), and something in me boiled over. The Tweet was fine in isolation, but I rankled at it given the absurdly disproportionate efforts I was undertaking to unwind his incentive-driven nonsense. I left [a pleading, snarky reply](https://twitter.com/zackmdavis/status/1304838486810193921) and [vented on my own timeline](https://twitter.com/zackmdavis/status/1304838346695348224) (with preview images from the draft of "Unnatural Categories Are Optimized for Deception"):
> Who would have thought getting @ESYudkowsky's robot cult to stop trying to trick me into cutting my dick off (independently of the empirical facts determining whether or not I should cut my dick off) would involve so much math?? OK, I guess the math part isn't surprising, but—[^trying-to-trick-me]
>
> [... redacted ...]
>
> But if behavior at equilibrium isn't deceptive, there's just _no such thing as deception_; I wrote about this on Less Wrong in ["Maybe Lying Can't Exist?!"](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist) (drawing on the academic literature about sender–receiver games). I don't think you actually want to bite that bullet?
>
> **In terms of information transfer, there is an isomorphism between saying "I reserve the right to lie 5% of the time about whether something is a member of category C" and adopting a new definition of C that misclassifies 5% of instances with respect to the old definition.**
>
> **It makes sense that you don't want to get involved in gender politics. That's why I wrote "... Boundaries?" using examples about dolphins and job titles, and why my forthcoming post has examples about bleggs and artificial meat.** This shouldn't be _expensive_ to clear up?! This should take like, five minutes? (I've spent twenty-one months of my life on this.) Just one little _ex cathedra_ comment on Less Wrong or _somewhere_ (**it doesn't have to be my post, if it's too long or I don't deserve credit or whatever**; I just think the right answer needs to be public) affirming that you haven't changed your mind about 37 Ways Words Can Be Wrong? Unless you _have_ changed your mind, of course?
>
> I can imagine someone observing this conversation objecting, "[...] why are you being so greedy? We all know the _real_ reason you want to clear up this philosophy thing in public is because it impinges on your gender agenda, but Eliezer _already_ threw you a bone with the ['there's probably more than one type of dysphoria' thing.](https://twitter.com/ESYudkowsky/status/1108277090577600512) That was already a huge political concession to you! That makes you _more_ than even; you should stop being greedy and leave Eliezer alone."
>
> But as [I explained in my reply](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/) criticizing why I think that argument is _wrong_, the whole mindset of public-arguments-as-political-favors is _crazy_. **The fact that we're having this backroom email conversation at all (instead of just being correct about the philosophy of language on Twitter) is _corrupt_!** I don't want to strike a deal in a political negotiation; I want _shared maps that reflect the territory_. I thought that's what this "rationalist community" thing was supposed to do? Is that not a thing anymore? If we can't do the shared-maps thing when there's any hint of political context (such that now you _can't_ clarify the categories thing, even as an abstract philosophy issue about bleggs, because someone would construe that as taking a side on whether trans people are Good or Bad), that seems really bad for our collective sanity?! (Where collective sanity is potentially useful for saving the world, but is at least a quality-of-life improver if we're just doomed to die in 15 years no matter what.)
>
> you are being the bad guy if you try to shut down that conversation by saying that "I can define the word 'woman' any way I want"
There it is! A clear _ex cathedra_ statement that gender categories are not an exception to the general rule that categories aren't arbitrary. (Only 1 year and 8 months after [asking for it](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#ex-cathedra-statement-ask).) I could quibble with some of Yudkowsky's exact writing choices, which I thought still bore the signature of political maneuvering, but it would be petty to dwell on quibbles when the core problem had been addressed.
I wrote to Michael, Ben, Jessica, Sarah, and "Riley", thanking them for their support. After successfully bullying Scott and Eliezer into clarifying, I was no longer at war with the robot cult and feeling a lot better (Subject: "thank-you note (the end of the Category War)").
I was charged by members of the "Vassarite" clique with the duty of taking care of a mentally-ill person at my house on 18 December 2020. (We did not trust the ordinary psychiatric system to act in patients' interests.) I apparently did a poor job, and ended up saying something callous on the care team group chat after a stressful night, which led to a chaotic day on the nineteenth, and an ugly falling-out between me and the group. In the interests of brevity and the privacy of the person we were trying to help, I think it's better that I don't give you a play-by-play. The details aren't particularly of public interest.
My poor performance during this incident [weighs on my conscience](/2020/Dec/liability/) particularly because I had previously been in the position of being crazy and benefiting from the help of my friends (including many of the same people involved in this incident) rather than getting sent back to psychiatric prison ("hospital", they call it a "hospital"). Of all people, I had a special debt to "pay it forward", and one might have hoped that I would also have special skills, that remembering being on the receiving end of a psychiatric tripsitting operation would help me know what to do on the giving end. Neither of those panned out.
Some might appeal to the proverb, "All's well that ends well", noting that the person in trouble ended up recovering, and that, while the stress contributed to me having a somewhat serious relapse of some of my own psychological problems on the night of the nineteenth and in the following weeks, I ended up recovering, too. I am instead inclined to dwell on [another proverb](https://www.alessonislearned.com/), "A lesson is learned but the damage is irreversible."