Ben had previously worked at GiveWell and had written a lot about problems with the Effective Altruism (EA) movement; in particular, he argued that EA-branded institutions were making [incoherent](http://benjaminrosshoffman.com/givewell-and-partial-funding/) [decisions](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/) under the influence of incentives to [distort](http://benjaminrosshoffman.com/humility-argument-honesty/) [information](http://benjaminrosshoffman.com/honesty-and-perjury/) [in order to](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/) [seek](http://benjaminrosshoffman.com/against-neglectedness/) [power](http://benjaminrosshoffman.com/against-responsibility/).
Jessica had previously worked at MIRI, where she was unnerved by what she saw as under-evidenced paranoia about information hazards and [short AI timelines](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam). (As Jack Gallagher, who was also at MIRI at the time, [later put it](https://www.greaterwrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards/comment/TcsXh44pB9xRziGgt), "A bunch of people we respected and worked with had decided the world was going to end, very soon, uncomfortably soon, and they were making it extremely difficult for us to check their work.")
To what extent were my gender and categories thing, and Ben's EA thing, and Jessica's MIRI thing, manifestations of the same underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit?
If there was a real problem, I didn't have a good grasp on it. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people and get a correction on that specific point. (Although as we had just discovered, even that might be too much to hope for.) But culture is the sum of lots and lots of little micro-actions by lots and lots of people. If your entire culture has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who are acting like they don't remember the old Way, or that they don't think anything has changed, or that they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!" Any ideologue or religious person would agree with _that_. It's not feasible to litigate every petty epistemic crime in something someone said, and if you tried, someone who thought the culture was basically on track could accuse you of cherry-picking. If "culture" is a real thing at all—and it certainly seems to be—we are condemned to grasp it unclearly, relying on the brain's pattern-matching faculties to sum over thousands of little micro-actions as a [_gestalt_](https://en.wiktionary.org/wiki/gestalt).
Ben called the _gestalt_ he saw the Blight, after the rogue superintelligence in Vernor Vinge's _A Fire Upon the Deep_. The problem wasn't that people were getting dumber; it was that they were increasingly behaving in a way that was better explained by their political incentives than by coherent beliefs about the world; they were using and construing facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts were admissible moves in the game.
I thought explaining the Blight to an ordinary grown-up was going to need either lots of specific examples that were more egregious than this (and more egregious than the examples in Sarah Constantin's ["EA Has a Lying Problem"](https://srconstantin.github.io/2017/01/17/ea-has-a-lying-problem.html) or Ben's ["Effective Altruism Is Self-Recommending"](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/)), or somehow convincing the ordinary grown-up why "just how the world works" isn't good enough, and why we needed one goddamned place in the entire goddamned world with unusually high standards.
The schism introduced new pressures on my social life. I told Michael that I still wanted to be friends with people on both sides of the factional schism. Michael said that we should unambiguously regard Yudkowsky and CfAR president (and my personal friend of ten years) Anna Salamon as criminals or enemy combatants who could claim no rights in regard to me or him.
I don't think I got the framing at this time. War metaphors sounded scary and mean: I didn't want to shoot my friends! But the point of the analogy (which Michael explained, but I wasn't ready to hear until I did a few more weeks of emotional processing) was specifically that soldiers on the other side of a war aren't necessarily morally blameworthy as individuals:[^soldiers] their actions are being directed by the Power they're embedded in.
> If the issue isn't worth your personally fixing by however much effort it takes, and it doesn't arise from outright bad faith, it's not worth refusing to contribute your efforts to a cause you deem worthwhile.
 I claim that I was meeting this standard: I _was_ willing to personally fix the philosophy-of-categorization issue no matter how much effort it took, and the issue _did_ arise from outright bad faith.
And as it happened, on 4 May 2019, Yudkowsky [retweeted Colin Wright on the "univariate fallacy"](https://twitter.com/ESYudkowsky/status/1124751630937681922)—the point that group differences [aren't a matter of any single variable](https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy-1)—which was thematically similar to the clarification I had been asking for. (Empirically, it made me feel less aggrieved.) Was I wrong to interpret this as [another "concession" to me](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#proton-concession)? (Again, notwithstanding that the whole mindset of extracting "concessions" was corrupt and not what our posse was trying to do.)
Separately, one evening in April, I visited the house where "Meredith" and her husband Mike and Kelsey Piper and some other people lived, which I'll call "Arcadia".[^named-houses] I said, essentially, "[Oh man oh jeez](https://www.youtube.com/watch?v=q_eMvgNrQQE), Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're the only ones backing me up on this incredibly basic philosophy thing and I don't feel like I have anywhere else to go." This culminated in a group conversation with the entire house, which I found unsettling. (Unfortunately, I didn't take notes and don't remember the details except that I had a sense of everyone else seeming to agree on things that I thought were clearly contrary to the spirit of the Sequences.)
And as it happened, on 7 May 2019, Kelsey wrote [a Facebook comment displaying evidence of understanding my thesis](/images/piper-spending_social_capital_on_talking_about_trans_issues.png).
These two datapoints led me to a psychological hypothesis: when people see someone wavering between their coalition and a rival coalition, they're intuitively motivated to offer a few concessions to keep the wavering person on their side. Kelsey could afford to [speak as if she didn't understand the thing about sex being a natural category](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#people-who-would-get-surgery-to-have-the-ideal-female-body) when it was just me freaking out alone, but visibly got it almost as soon as I could credibly threaten to _walk_ (defect to a coalition of people she dislikes). Maybe my "closing thoughts" email had a similar effect on Yudkowsky, assuming he otherwise wouldn't have spontaneously tweeted something about the univariate fallacy two weeks later? This probably wouldn't work if you repeated it, or tried to do it consciously?
### Exit Wounds (May 2019)
I asked my boss to temporarily assign me some easier tasks that I could make steady progress on. (We had a lot of LaTeX templating of insurance policy amendments that needed to get done.) If I was going to be psychologically impaired, it was better to be up-front about how I could best serve the company given that impairment, rather than hoping the boss wouldn't notice.
My intent of a break from the religious war didn't take. I met with Anna on the UC Berkeley campus and read her excerpts from Ben's and Jessica's emails. (She had not provided a comment on ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) despite my requests, including in the form of two paper postcards that I stayed up until 2 _a.m._ on 14 April 2019 writing; spamming people with hysterical and somewhat demanding postcards felt more distinctive than spamming people with hysterical and somewhat demanding emails.)
I complained that I had believed our own [marketing](https://www.lesswrong.com/posts/aFEsqd6ofwnkNqaXo/go-forth-and-create-the-art) [material](https://www.lesswrong.com/posts/jP583FwKepjiWbeoQ/epistle-to-the-new-york-less-wrongians) about the "rationalists" remaking the world by wielding a hidden Bayesian structure of Science and Reason that applies [outside the laboratory](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory). Was that all a lie? Were we not trying to do the thing anymore? Anna was dismissive: she thought that the idea I had gotten about "the thing" was never actually part of the original vision. She kept repeating that she had tried to warn me, and I didn't listen. (Back in the late 'aughts, she had often recommended Paul Graham's essay ["What You Can't Say"](http://paulgraham.com/say.html) to people, summarizing Graham's moral that you should figure out the things you can't say in your culture and then not say them, in order to avoid getting drawn into pointless conflicts.)
It was true that she had tried to warn me for years, and (not yet having gotten over [my teenage ideological fever dream](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism)), I hadn't known how to listen. But this seemed fundamentally unresponsive to how _I_ kept repeating that I only expected consensus on the basic philosophy of language and categorization (not my object-level special interest in sex and gender). Why was it so unrealistic to imagine that the smart people could [enforce standards](https://srconstantin.github.io/2018/12/24/contrite-strategies.html) in our own tiny little bubble?
My frustration bubbled out into follow-up emails:
> Can you please _acknowledge that I didn't just make this up?_ Happy to pay you $200 for a reply to this email within the next 72 hours
Anna said she didn't want to receive [cheerful price](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price) offers from me anymore; previously, she had regarded my occasionally throwing money at her to bid for her scarce attention[^money-attitudes] as good-faith libertarianism between consenting adults, but now she was afraid that if she accepted, it would be portrayed in some future Ben Hoffman essay as an instance of her _using_ me. She agreed that someone could have gotten the ideals I had gotten out of those posts, but there was also evidence from that time pointing the other way (_e.g._, ["Politics Is the Mind-Killer"](https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer)) and it shouldn't be surprising if people steered clear of controversy.

[^money-attitudes]: Anna was a very busy person who I assumed didn't always have time for me, and I wasn't earning-to-give [anymore](http://zackmdavis.net/blog/2016/12/philanthropy-scorecard-through-2016/) after my 2017 psych ward experience made me more skeptical about institutions (including EA charities) doing what they claimed. Now that I'm not currently dayjobbing, I wish I had been somewhat less casual about spending money during this period.
I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that whether I should cut my dick off would _become_ a political issue. That was new evidence about whether the original vision was wise! I wasn't particularly trying to do politics with my idiosyncratic special interest; I was trying to think seriously about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit an epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, and they couldn't correct the mistake even after it was pointed out, then the "rationalists" were worse than useless to me. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I were part of [a reference class that included AI researchers](/2017/Jan/from-what-ive-tasted-of-desire/)).
Fundamentally, I was skeptical that you _could_ do consistently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky had described in ["Entangled Truths, Contagious Lies"](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) and ["Dark Side Epistemology"](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology): the need to lie about lying and cover up cover-ups propagates recursively. Anna was unusually skillful at thinking things without saying them; I thought people facing similar speech restrictions generally just get worse at thinking (plausibly[^plausibly] including Yudkowsky), and the problem gets worse as the group effort scales. (It's less risky to recommend ["What You Can't Say"](http://www.paulgraham.com/say.html) to your housemates than to put it on your 501(c)(3) organization's canonical reading list.) You can't optimize your group's culture for not talking about atheism without also optimizing against understanding [Occam's razor](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor); you can't optimize for not questioning gender self-identity without also optimizing against understanding the [37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).
[^plausibly]: I was still deep enough in my hero worship that I wrote "plausibly" in an email at the time. Today, I would not consider the adverb necessary.
MIRI researcher Scott Garrabrant wrote a post about how ["Yes Requires the Possibility of No"](https://www.lesswrong.com/posts/G5TwJ9BGxcgh5DsmQ/yes-requires-the-possibility-of-no). Information-theoretically, a signal sent with probability one transmits no information: you can only learn something from hearing a "Yes" if you believed that the answer could have been "No". I saw an analogy to my philosophy-of-language thesis, and mentioned it in a comment: if you want to believe that _x_ belongs to category _C_, you might try redefining _C_ in order to make the question "Is _x_ a _C_?" come out "Yes", but you can only do so at the expense of making _C_ less useful. Meaningful category-membership (Yes) requires the possibility of non-membership (No).
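The information-theoretic point can be sketched numerically. (A minimal illustration of the standard surprisal formula, not code from Garrabrant's post; the helper name is my own.)

```python
import math

def self_information(p):
    """Bits conveyed by observing an event that had prior probability p."""
    return math.log2(1 / p)

# A "Yes" that was certain in advance conveys nothing:
certain_yes = self_information(1.0)    # 0.0 bits
# A "Yes" that had even odds of being "No" conveys one full bit:
uncertain_yes = self_information(0.5)  # 1.0 bits
```

The less probable a "No" was, the less you learn from hearing "Yes"—and in the limit where "No" was impossible, you learn exactly nothing.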
Someone [objected that](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/FxSZwECjhgYE7p2du) she found it "unpleasant that [I] always bring [my] hobbyhorse in, but in an 'abstract' way that doesn't allow discussing the actual object level question"; it made her feel "attacked in a way that allow[ed] for no legal recourse to defend [herself]." I [replied](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/32GPaijsSwX2NSFJi) that that was understandable, but that I found it unpleasant that our standard Bayesian philosophy of language somehow got politicized, such that my attempts to do correct epistemology were perceived as attacking people. Such a trainwreck ensued that the mods manually [moved the comments to their own post](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019). Based on the karma scores and what was said,[^yes-requires-slapfight-highlights] I count it as a victory.
[^yes-requires-slapfight-highlights]: I particularly appreciated Said Achmiz's [defense of disregarding community members' feelings](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/EsSdLMrFcCpSvr3pG), and [Ben's commentary on speech acts that lower the message length of proposals to attack some group](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/TXbgr7goFtSAZEvZb).
On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) as one of Scott Alexander's best essays. I argued that it would be better to cite almost literally any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: either Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, or I could call it a blatant lie because no rule of rationality says I shouldn't draw the category boundaries of "blatant lie" that way. Ruby Bloom, the new moderator who wrote the draft, [was persuaded](https://www.greaterwrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for/comment/oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory."
But "victories" weren't particularly comforting when I resented this becoming a political slapfight at all. I wrote to Anna and Steven Kaas (another old-timer who I was trying to "recruit" to my side of the civil war). In ["What You Can't Say"](http://www.paulgraham.com/say.html), Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what your real work is. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except in this one meta-level essay) was probably the right choice. But someone whose goal is to improve Society's collective ability to reason should probably be doing more fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improve our collective ability to reason" in a way that they don't hurt the mission of "make a lot of money writing software."
I said I didn't know if either of them had caught the "Yes Requires the Possibility" trainwreck, but wasn't it terrifying that the person who objected to my innocuous philosophy comment was a _MIRI research associate_? Not to demonize that commenter, because [I was just as bad (if not worse) in 2008](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#hair-trigger-antisexism). The difference was that in 2008, we had a culture that could beat it out of me.
Steven objected that tractability and side effects matter, not just effect on the mission considered in isolation. For example, the Earth's gravitational field directly impedes NASA's mission, and doesn't hurt Paul Graham, but both NASA and Paul Graham should spend the same amount of effort (_viz._, zero) trying to reduce the Earth's gravity.
I agreed that tractability needed to be addressed, but the situation felt analogous to being in [a coal mine in which my favorite of our canaries had just died](https://en.wikipedia.org/wiki/Sentinel_species). Caliphate officials (Eliezer, Scott, Anna) and loyalists (Steven) were patronizingly consoling me: sorry, I know you were really attached to that canary, but it's just a bird; it's not critical to the coal-mining mission. I agreed that I was unreasonably attached to that particular bird, but that's not why I expected _them_ to care. The problem was what the dead canary was evidence of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The [causal graph](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models) is the fork "canary death ← mine gas → danger" rather than the direct link "canary death → danger".) Ben and Michael and Jessica claimed to have spotted their own dead canaries. I felt like the old-timer Rationality Elders should have been able to get on the same page about the canary-count issue?
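The fork structure can be made concrete with a toy joint distribution. (The probabilities here are made-up illustrative numbers, not anything from the original correspondence.) Even though the canary's death doesn't _cause_ the danger, observing it sharply raises the probability of danger, because both share the common cause.

```python
# Toy fork model: gas -> canary dies, gas -> danger (illustrative numbers only)
p_gas = 0.1
p_die = {True: 0.9, False: 0.01}    # P(canary dies | gas present?)
p_danger = {True: 0.8, False: 0.0}  # P(danger | gas present?)

# Enumerate over the common cause to get the marginals:
p_dies = p_gas * p_die[True] + (1 - p_gas) * p_die[False]
p_danger_marginal = p_gas * p_danger[True] + (1 - p_gas) * p_danger[False]

# Joint P(canary dies AND danger): the two effects co-occur only via gas.
p_dies_and_danger = (p_gas * p_die[True] * p_danger[True]
                     + (1 - p_gas) * p_die[False] * p_danger[False])
p_danger_given_dies = p_dies_and_danger / p_dies  # ≈ 0.73, vs. 0.08 prior
```

With these numbers, seeing the dead canary moves the probability of danger from 8% to about 73%—which is why dismissing the canary as "just a bird" misses the point of keeping canaries.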
Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/) and [some probability theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [later turned out to be deeply relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break.
In June 2019, I made [a linkpost on _Less Wrong_](https://www.lesswrong.com/posts/5nH5Qtax9ae8CQjZ9/tal-yarkoni-no-it-s-not-the-incentives-it-s-you) to Tal Yarkoni's ["No, It's Not The Incentives—It's you"](https://www.talyarkoni.org/blog/2018/10/02/no-its-not-the-incentives-its-you/), about how professional scientists should stop using career incentives as an excuse for doing poor science. It generated a lot of discussion.
Ben said that trying to discuss with the _Less Wrong_ mod team would be a good intermediate step, after we clarified to ourselves what was going on; it might be "good practice in the same way that the Eliezer initiative was good practice." The premise should be, "If this is within the Overton window for _Less Wrong_ moderators, there's a serious confusion on the conditions required for discourse"—scapegoating individuals wasn't part of it. He was less optimistic about harm reduction; participating on the site was implicitly endorsing it by submitting to the rule of the karma and curation systems.
"Riley" expressed sadness about how the discussion on "The Incentives" demonstrated that the community they loved—including dear friends—was in a bad way. Michael (in a separate private discussion) had said he was glad to hear about the belief-update. "Riley" said that Michael saying that also made them sad, because it seemed discordant to be happy about sad news. Michael wrote:
> I['m] sorry it made you sad. From my perspective, the question is no[t] "can we still be friends with such people", but "how can we still be friends with such people" and I am pretty certain that understanding their perspective [is] an important part of the answer. If clarity seems like death to them and like life to us, and we don't know this, IMHO that's an unpromising basis for friendship.
(My previous interaction with Ruby had been my challenge to "... Not Man for the Categories" appearing on the _Less Wrong_ FAQ. Maybe he couldn't let me win again so quickly?)
I emailed the posse about the thread, on the grounds that gauging the psychology of the mod team was relevant to our upcoming [Voice _vs._ Exit](https://en.wikipedia.org/wiki/Exit,_Voice,_and_Loyalty) choices. Meanwhile on _Less Wrong_, Ruby kept doubling down:
> [I]f the goal is everyone being less wrong, I think some means of communicating are going to be more effective than others. I, at least, am a social monkey. If I am bluntly told I am wrong (even if I agree, even in private—but especially in public), I will feel attacked (if only at the S1 level), threatened (socially), and become defensive. It makes it hard to update and it makes it easy to dislike the one who called me out. [...]
>
"Wow, he's really overtly arguing that people should lie to him to protect his feelings," Ben commented via email. I would later complain to Anna that Ruby's profile said he was one of two people to have volunteered for CfAR on three continents. If this was the level of performance we could expect from veteran CfAR participants, what was CfAR _for_?
[I replied to Ruby that](https://www.greaterwrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality/comment/v3zh3KhKNTdMXWkJH) you could just directly respond to your interlocutor's arguments. Whether you respect them as a thinker is off-topic. "You said X, but this is wrong because of Y" isn't a personal attack! I thought it was ironic that this happened on a post that was explicitly about causal _vs._ social reality; it's possible that I wouldn't have been so rigid about this if it weren't for that prompt.
(On reviewing the present post prior to publication, Ruby writes that he regrets his behavior during this exchange.)
Jessica ended up writing a post, ["Self-Consciousness Wants Everything to Be About Itself"](https://www.lesswrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself), arguing that tone arguments are mainly about people silencing discussion of actual problems in order to protect their feelings. She used as a central example a case study of a college official crying and saying that she "felt attacked" in response to complaints about her office being insufficiently supportive of a racial community.
Jessica was surprised by how well it worked, judging by [Ruby mentioning silencing in a subsequent comment to me](https://www.greaterwrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality/comment/wfzxj4GGRtZGMG9ni) (plausibly influenced by Jessica's post) and by [an exchange between Ray and Ruby that she thought was "surprisingly okay"](https://www.greaterwrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself/comment/EW3Mom9qfoggfBicf).
From this, Jessica derived the moral that when people are doing something that seems obviously terrible and in bad faith, it can help to publicly explain why the abstract thing is bad, without accusing anyone. This made sense because people didn't want to be held to standards that other people weren't being held to: a call-out directed at oneself personally could be selective enforcement, but a call-out of the abstract pattern invited changing one's behavior if the new equilibrium looked better.
Michael said that part of the reason this worked was because it represented a clear threat of scapegoating without actually scapegoating and without surrendering the option to do so later; it was significant that Jessica's choice of example positioned her on the side of the powerful social-justice coalition.
------
Jessica published ["The AI Timelines Scam"](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), arguing that the recent prominence of "short" (_e.g._, 2030) timelines to transformative AI was better explained by political factors than by technical arguments: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own project.
(Remember, this was 2019. After seeing what GPT-3, [DALL-E](https://openai.com/research/dall-e), [PaLM](https://arxiv.org/abs/2204.02311), _&c._ could do during the "long May 2020", it now looks to me that the short-timelines people had better intuitions than Jessica gave them credit for.)
I replied that the concept of [_mens rea_](https://www.law.cornell.edu/wex/mens_rea) did seem necessary for maintaining good incentives, at least in some contexts. The law needs to distinguish between accidentally hitting a pedestrian in one's car ("manslaughter") and premeditated killing ("first-degree murder"), because traffic accidents are significantly less disincentivizable than offing one's enemies. (Anyone who drives at all is taking on some nonzero risk of committing vehicular manslaughter.) The manslaughter example was simpler than misinformation-that-moves-resources,[^manslaughter-disanalogy] and it might not be _easy_ for the court to determine "intent", but I didn't see what would reverse the weak principle that intent sometimes matters.
[^manslaughter-disanalogy]: For one important disanalogy, perps don't gain from committing manslaughter.
Ben replied that what mattered in the determination of manslaughter _vs._ murder was whether there was long-horizon optimization power toward the outcome of someone's death, not what sentiments the killer rehearsed in their working memory.
On a phone call later, Michael made an analogy between EA and Catholicism. The Pope was fraudulent, because the legitimacy of the Pope's position (and his claims to power and resources) rested on the pretense that he had a direct relationship with God, which wasn't true, and the Pope had to know on some level that it wasn't true. (I agreed that this usage of "fraud" made sense to me.) In Michael's view, Ben's charges against GiveWell were similar: GiveWell's legitimacy rested on the pretense that they were making decisions based on numbers, and they [had to know at some level](https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/) that they weren't doing that.
------
Ruby wrote a document about ways in which one's speech could harm people, which was discussed in the comments of a draft _Less Wrong_ post by some of our posse members and some of the _Less Wrong_ mods.[^hidden-draft]

[^hidden-draft]: The draft was hidden, but the API apparently didn't filter out comments on hidden posts, and the thread was visible on the third-party [GreaterWrong](https://www.greaterwrong.com/) site; I [filed a bug](https://github.com/ForumMagnum/ForumMagnum/issues/2161).
Ben wrote:
Jessica pointed this out as a step towards discussing the real problem (Subject: "progress towards discussing the real thing??"). She elaborated in the secret thread: now that the "EA" scene was adjacent to real-world money and power, people were incentivized to protect their reputations (and beliefs related to their reputations) in anti-epistemic ways, in a way that they wouldn't if the scene were still just a philosophy club. This was catalyzing a shift of norms from "that which can be destroyed by the truth, should be" towards protecting feelings—where "protecting feelings" was actually about protecting power. The fact that the scene was allocating billions of dollars made it _more_ important for public discussions to reach the truth, compared to philosophy club—but it also increased the likelihood of obfuscatory behavior that philosophy-club norms (like "assume good faith") didn't account for. We might need to extend philosophy-club norms to take into account the possibility of adversarial action: there's a reason that courts of law don't assume good faith. We didn't want to disproportionately punish people for getting caught up in obfuscatory patterns; that would just increase the incentive to obfuscate. But we did need some way to reveal what was going on.
In email, Jessica acknowledged that Ray had a point that it was confusing to use court-inspired language if we didn't intend to blame and punish people. Michael said that court language was our way to communicate "You don't have the option of non-engagement with the complaints that are being made." (Courts can _summon_ people; you can't ignore a court summons the way you can ignore ordinary critics.)
Michael said that we should also develop skill in using social-justicey blame language, as was used against us, harder, while we still thought of ourselves as [trying to correct people's mistakes rather than being in a conflict](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) against the Blight. "Riley" said that this was a terrifying you-have-become-the-abyss suggestion; Ben thought it was obviously a good idea.
I was horrified by the extent to which _Less Wrong_ moderators (!) seemed to be explicitly defending "protect feelings" norms. Previously, I had mostly been seeing the present struggle through the lens of my idiosyncratic [Something to Protect](/2019/Jul/the-source-of-our-power/) as a simple matter of Bay Area political correctness. I was happy to have Michael, Ben, and Jessica as allies, but I hadn't been seeing the Blight as a unified problem. Now I was seeing _something_.
An in-person meeting was arranged for 23 July 2019 at the _Less Wrong_ office, with Ben, Jessica, me, and most of the _Less Wrong_ team (Ray, Ruby, Oliver Habryka, Vaniver, Jim Babcock). I don't have notes and don't really remember what was discussed in enough detail to faithfully recount it.[^memory] I ended up crying at one point and left the room for a while.
Empirically, no! But I had to resolve the contradiction between narrative and reality somehow, and if my choices were "People are [mistakenly](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) failing to live up to the narrative" and "[Everybody knows](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) the narrative is a lie; it would be crazy to expect people to live up to it", the former had been more appealing.
It was the same thing here. Kelsey said that it was predictable that Yudkowsky wouldn't make a public statement, even one as basic as "category boundaries should be drawn for epistemic and not instrumental reasons," because his experience of public statements was that they'd be taken out of context and used against MIRI by the likes of /r/SneerClub. This wasn't an update at all. (Everyone at "Arcadia" had agreed, in the house discussion in April.) Vassar's insistence that Eliezer be expected to do something that he obviously was never going to do had caused me to be confused and surprised by reality.[^statement]
[^statement]: Oddly, Kelsey seemed to think the issue was that my allies and I were pressuring Yudkowsky to make a public statement, which he supposedly never does. From our perspective, the issue was that he _had_ made a statement and it was wrong.
Kelsey seemed to be taking it as obvious that Eliezer Yudkowsky's public behavior was optimized to respond to the possibility of political attacks from people who hate him anyway, and not the actuality of thousands of words of careful arguments appealing to his own writings from ten years ago. Very well. Maybe it _was_ obvious. But if so, _I had no reason to care what Eliezer Yudkowsky said_, because not provoking SneerClub isn't truth-tracking, and careful arguments are. This was a huge surprise to me, even if Kelsey knew better.
What Kelsey saw as "Zack is losing his ability to model other people and I'm worried about him," I thought Ben and Jessica would see as "Zack is angry about living in [simulacrum level 3](http://benjaminrosshoffman.com/excerpts-from-a-larger-discussion-about-simulacra/) and we're worried about _everyone else_."
I did think that Kelsey was mistaken about how much causality to attribute to Michael's influence, rather than to me already being socially retarded. From my perspective, validation from Michael was merely the catalyst that excited me from confused-and-sad to confused-and-socially-aggressive-about-it. The latter phase revealed a lot of information, and not just to me. Now I was ready to be less confused—after I was done grieving.
Later, talking in person at "Arcadia", Kelsey told me that the REACH was delaying the release of its report about Michael because someone whose identity she could not disclose had threatened to sue. As far as my interest in defending Michael went, I counted this as short-term good news (because the report wasn't being published for now) but longer-term bad news (because the report must be a hit piece if Michael's mysterious ally was trying to hush it).
When I mentioned this to Michael on Signal on 3 August 2019, he replied:
I wanted to finish the memoir-post mourning the "rationalists", but I still felt psychologically constrained. So instead, I mostly turned to a combination of writing [bitter](https://www.greaterwrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare/comment/Nhv9KPte7d5jbtLBv) and [insulting](https://www.greaterwrong.com/posts/tkuknrjYCbaDoZEh5/could-we-solve-this-email-mess-if-we-all-moved-to-paid/comment/ZkreTspP599RBKsi7) [comments](https://www.greaterwrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review-posts-need-at-least-2-nominations/comment/d4RrEizzH85BdCPhE) whenever I saw someone praise the "rationalists" collectively, and—more philosophy blogging!
In August 2019's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries): sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).)
In September 2019's ["Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting), I presented a toy mathematical model of how censorship distorts group beliefs. I was surprised by how well-received it was (high karma, Curated within a few days, later included in the Best-of-2019 collection), especially given that it was explicitly about politics (albeit at a meta level, of course). Ben and Jessica had discouraged me from bothering when I sent them a draft. (Jessica said that it was obvious even to ten-year-olds that partisan politics distorts impressions by filtering evidence. "[D]o you think we could get a ten-year-old to explain it to Eliezer Yudkowsky?" I asked.)
In October 2019's ["Algorithms of Deception!"](https://www.lesswrong.com/posts/fmA2GJwZzYtkrAKYJ/algorithms-of-deception), I exhibited some toy Python code modeling different kinds of deception. If a function faithfully passes its observations as input to another function, the second function can construct a well-calibrated probability distribution. But if the first function outright fabricates evidence, or selectively omits some evidence, or gerrymanders the categories by which it interprets its observations as evidence, the second function computes a worse probability distribution.
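For illustration, here's a minimal sketch in the same spirit (my reconstruction of the kind of model described, not the original post's code): an honest reporting function preserves calibration, while one that selectively omits evidence skews the downstream estimate.

```python
import random

def reporter_honest(observations):
    """Faithfully passes along every observation."""
    return list(observations)

def reporter_selective(observations):
    """Selectively omits evidence: only passes along observations
    that support the 'heads' hypothesis."""
    return [o for o in observations if o == "H"]

def estimate_heads_rate(reports):
    """Crude frequency estimate of the coin's heads-rate from reports."""
    if not reports:
        return 0.5  # fall back to the prior with no evidence
    return sum(1 for r in reports if r == "H") / len(reports)

random.seed(0)
# A fair coin: the true heads-rate is 0.5.
observations = ["H" if random.random() < 0.5 else "T" for _ in range(1000)]

honest_estimate = estimate_heads_rate(reporter_honest(observations))
skewed_estimate = estimate_heads_rate(reporter_selective(observations))

# The honest channel yields an estimate near the true rate of 0.5;
# the selective channel pushes the estimate all the way to 1.0.
assert abs(honest_estimate - 0.5) < 0.1
assert skewed_estimate == 1.0
```

The second function's algorithm is the same in both cases; the damage is done entirely by what the first function chooses to pass along.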
Also in October 2019, in ["Maybe Lying Doesn't Exist"](https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist), I replied to Scott Alexander's ["Against Lie Inflation"](https://slatestarcodex.com/2019/07/16/against-lie-inflation/), which was itself a generalized rebuke of Jessica's "The AI Timelines Scam". Scott thought Jessica was wrong to use language like "lie", "scam", _&c._ to describe someone being (purportedly) motivatedly wrong, but not necessarily consciously lying.
Suppose there are five true heresies, but anyone who's on the record as believing more than one gets burned as a witch. Then it's [impossible to have a unified rationalist community](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting), because people who want to talk about one heresy can't let themselves be seen in the company of people who believe another. That's why Scott Alexander couldn't get the philosophy of categorization right in full generality, even though his writings revealed an implicit understanding of the correct way,[^implicit-understanding] and he and I had a common enemy in the social-justice egregore. He couldn't afford to. He'd already spent his Overton budget [on anti-feminism](https://slatestarcodex.com/2015/01/01/untitled/).
[^implicit-understanding]: As I had [explained to him earlier](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#noncentral-fallacy), Alexander's famous [post on the noncentral fallacy](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world) condemned the same shenanigans he praised in the context of gender identity: Alexander's examples of the noncentral fallacy had been about edge-cases of a negative-valence category being inappropriately framed as typical (abortion is murder, taxation is theft), but "trans women are women" was the same thing, but with a positive-valence category.
In ["Does the Glasgow Coma Scale exist? Do Comas?"](https://slatestarcodex.com/2014/08/11/does-the-glasgow-coma-scale-exist-do-comas/) (published just three months before "... Not Man for the Categories"), Alexander defends the usefulness of "comas" and "intelligence" in terms of their predictive usefulness. (The post uses the terms "predict", "prediction", "predictive power", _&c._ 16 times.) He doesn't say that the Glasgow Coma Scale is justified because it makes people happy for comas to be defined that way, because that would be absurd.
Alexander (and Yudkowsky and Anna and the rest of the Caliphate) seemed to accept this as an inevitable background fact of existence, like the weather. But I saw a Schelling point off in the distance where us witches stick together for Free Speech,[^kolmogorov-common-interests-contrast] and it was tempting to try to jump there. (It would probably be better if there were a way to organize just the good witches, and exclude all the Actually Bad witches, but the [Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) on witch Badness made that hard to organize without falling back to the one-heresy-per-thinker equilibrium.)
[^kolmogorov-common-interests-contrast]: The last of the original Sequences had included a post, ["Rationality: Common Interest of Many Causes"](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes), which argued that different projects should not regard themselves "as competing for a limited supply of rationalists with a limited capacity for support; but, rather, creating more rationalists and increasing their capacity for support." It was striking that the "Kolmogorov Option"-era Caliphate took the opposite policy: throwing politically unpopular projects (like autogynephilia- or human-biodiversity-realism) under the bus to protect its own status.
Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" _facts_.) I agreed that conflating political positions with facts would be bad. I wasn't interested in defending the "alt-right" (whatever that means) broadly. But I had learned stuff from reading far-right authors [(most notably Mencius Moldbug)](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#unqualified-reservations) and from talking with "Thomas". I was starting to appreciate [what Michael had said about "Less precise is more violent" back in April](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#less-precise-is-more-violent) when I was talking about criticizing "rationalists".
Jessica asked if my opinion would change depending on whether Yudkowsky thought neoreaction was intellectually worth engaging with. (Yudkowsky [had said years ago](https://www.greaterwrong.com/posts/6qPextf9KyWLFJ53j/why-is-mencius-moldbug-so-popular-on-less-wrong-answer-he-s/comment/TcLhiMk8BTp4vN3Zs) that Moldbug was low quality.)
-I did believe that Yudkowsky believed that neoreaction was not worth engaging with. I would never fault anyone for saying "I vehemently disagree with what little I've read and/or heard of this author." I wasn't accusing him of being insincere.
+I would never fault anyone for saying "I vehemently disagree with what little I've read and/or heard of this author." I wasn't accusing Yudkowsky of being insincere.
-What I did think was that the need to keep up appearances of not being a right wing Bad Guy was a serious distortion of people's beliefs, because there are at least a few questions of fact where believing the correct answer can, in today's political environment, be used to paint one as a right-wing Bad Guy. I would have hoped for Yudkowsky to notice that this is a rationality problem and to not actively make the problem worse. I was counting "I do not welcome support from those quarters" as making the problem worse insofar as it would seem to imply that if I thought I'd learned valuable things from Moldbug, that made me less welcome in Yudkowsky's fiefdom.
+What I did think was that the need to keep up appearances of not being a right wing Bad Guy was a serious distortion of people's beliefs, because there are at least a few questions of fact where believing the correct answer can, in the political environment of the current year, be used to paint one as a right-wing Bad Guy. I would have hoped for Yudkowsky to [notice that this is a rationality problem](/2020/Aug/yarvin-on-less-wrong/) and to not actively make the problem worse. I was counting "I do not welcome support from those quarters" as making the problem worse insofar as it would seem to imply that if I thought I'd learned valuable things from Moldbug, that made me less welcome in Yudkowsky's fiefdom.
Yudkowsky certainly wouldn't endorse "Even learning things from these people makes you unwelcome" _as stated_, but "I do not welcome support from those quarters" still seemed like a pointlessly partisan silencing/shunning attempt, when one could just as easily say, "I'm not a neoreactionary, and if some people who read me are, that's _obviously not my fault_."
Jessica asked if Yudkowsky denouncing neoreaction and the alt-right would still seem harmful if he were also to acknowledge, _e.g._, racial IQ differences.
-<a id="tragedy-of-recursive-silencing"></a>I agreed that it would be helpful, but realistically, I didn't see why Yudkowsky should want to poke the race-differences hornet's nest. This was the tragedy of recursive silencing: if you can't afford to engage with heterodox ideas, either you become an [evidence-filtering clever arguer](https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence), or you're not allowed to talk about anything except math. (Not even the relationship between math and human natural language, as we had found out recently.)
+<a id="tragedy-of-recursive-silencing"></a>I agreed that that would be better, but realistically, I didn't see why Yudkowsky should want to poke that hornet's nest. This was the tragedy of recursive silencing: if you can't afford to engage with heterodox ideas, either you become an [evidence-filtering clever arguer](https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence), or you're not allowed to talk about anything except math. (Not even the relationship between math and human natural language, as we had found out recently.)
It was as if there was a "Say Everything" attractor and a "Say Nothing" attractor, and my incentives were pushing me towards the "Say Everything" attractor—but that was only because I had Something to Protect in the forbidden zone and I was a decent programmer (who could therefore expect to be employable somewhere, just as [James Damore eventually found another job](https://twitter.com/JamesADamore/status/1034623633174478849)). Anyone in less extreme circumstances would find themselves pushed toward the "Say Nothing" attractor.
> Also to be clear: I try not to dismiss ideas out of hand due to fear of public unpopularity. However I found Scott Alexander's takedown of neoreaction convincing and thus I shrugged and didn't bother to investigate further.
-My criticism regarding "negotiating with terrorists" did not apply to the 2013 disavowal. _More Right_ was brand encroachment on Anissimov's part that Yudkowsky had a legitimate interest in policing, and the "I try not to dismiss ideas out of hand" disclaimer importantly avoided legitimizing [McCarthyist persecution](https://www.unqualified-reservations.org/2013/09/technology-communism-and-brown-scare/).
+My criticism regarding negotiating with terrorists did not apply to the 2013 disavowal. _More Right_ was brand encroachment on Anissimov's part that Yudkowsky had a legitimate interest in policing, and the "I try not to dismiss ideas out of hand" disclaimer importantly avoided legitimizing [McCarthyist persecution](https://www.unqualified-reservations.org/2013/09/technology-communism-and-brown-scare/).
The question was, what had specifically happened in the last six years to shift Yudkowsky's opinion on neoreaction from (paraphrased) "Scott says it's wrong, so I stopped reading" to (verbatim) "actively hostile"? Note especially the inversion from (both paraphrased) "I don't support neoreaction" (fine, of course) to "I don't even want _them_ supporting _me_" ([which was bizarre](https://twitter.com/zackmdavis/status/1164329446314135552); humans with very different views on politics nevertheless have a common interest in not being transformed into paperclips).
[^pleonasm]: The pleonasm here ("to me" being redundant with "I thought") is especially galling coming from someone who's usually a good writer!
-It might seem like a little thing of no significance—requiring ["I" statements](https://en.wikipedia.org/wiki/I-message) is commonplace in therapy groups and corporate sensitivity training—but this little thing coming from Eliezer Yudkowsky setting guidelines for an explicitly "rationalist" space made a pattern click. If everyone is forced to only make claims about their map ("_I_ think", "_I_ feel"), and not make claims about the territory (which could be construed to call other people's maps into question and thereby threaten them, because [disagreement is disrespect](https://www.overcomingbias.com/p/disagreement-ishtml)), that's great for reducing social conflict, but not for the kind of collective information processing that accomplishes cognitive work,[^i-statements] like good literary criticism. A rationalist space needs to be able to talk about the territory.
+It might seem like a little thing of no significance—requiring ["I" statements](https://en.wikipedia.org/wiki/I-message) is commonplace in therapy groups and corporate sensitivity training—but this little thing coming from Eliezer Yudkowsky setting guidelines for an explicitly "rationalist" space made a pattern [click](https://www.lesswrong.com/posts/R3ATEWWmBhMhbY2AL/that-magical-click). If everyone is forced to only make claims about their map ("_I_ think", "_I_ feel") and not make claims about the territory (which could be construed to call other people's maps into question and thereby threaten them, because [disagreement is disrespect](https://www.overcomingbias.com/p/disagreement-ishtml)), that's great for reducing social conflict but not for the kind of collective information processing that accomplishes cognitive work,[^i-statements] like good literary criticism. A rationalist space needs to be able to talk about the territory.
[^i-statements]: At best, "I" statements make sense in a context where everyone's speech is considered part of the "official record". Wrapping controversial claims in "I think" removes the need for opponents to immediately object for fear that the claim will be accepted onto the shared map.
"Broadcast criticism is adversely selected for critic errors," Yudkowsky wrote in the post on reducing negativity, correctly pointing out that if a work's true level of mistakenness is _M_, the _i_-th commenter's estimate of mistakenness has an error term of _E<sub>i</sub>_, and commenters leave a negative comment when their estimate _M_ + _E<sub>i</sub>_ is greater than their threshold for commenting _T<sub>i</sub>_, then the comments that get posted will have been selected for erroneous criticism (high _E<sub>i</sub>_) and commenter chattiness (low _T<sub>i</sub>_).
-I can imagine some young person who liked _Harry Potter and the Methods_ being intimidated by the math notation, and uncritically accepting this wisdom from the great Eliezer Yudkowsky as a reason to be less critical, specifically. But a somewhat less young person who isn't intimidated by math should notice that this is just [regression to the mean](https://en.wikipedia.org/wiki/Regression_toward_the_mean). The same argument applies to praise!
+I can imagine some young person who liked [_Harry Potter and the Methods_](https://hpmor.com/) being intimidated by the math notation and indiscriminately accepting this wisdom from the great Eliezer Yudkowsky as a reason to be less critical, specifically. But a somewhat less young person who isn't intimidated by math should notice that this is just [regression to the mean](https://en.wikipedia.org/wiki/Regression_toward_the_mean). The same argument applies to praise!
What I would hope for from a rationality teacher and a rationality community would be efforts to instill the general skill of modeling things like regression to the mean and selection effects, as part of the general project of having a discourse that does collective information-processing.
-And from the way Yudkowsky writes these days, it looks like he's ... not interested in collective information-processing? Or that he doesn't actually believe that's a real thing? "Credibly helpful unsolicited criticism should be delivered in private," he writes! I agree that the positive purpose of public criticism isn't solely to help the author. (If it were, there would be no reason for anyone but the author to read it.) But readers _do_ benefit from insightful critical commentary. (If they didn't, why would they read the comments section?) When I read a story, and am interested in reading the comments _about_ a story, it's because I'm interested in the thoughts of other readers, who might have picked up subtleties I missed. I don't want other people to self-censor comments on any plot holes or [Fridge Logic](https://tvtropes.org/pmwiki/pmwiki.php/Main/FridgeLogic) they noticed for fear of dampening someone else's enjoyment or hurting the author's feelings.
+And from the way Yudkowsky writes these days, it looks like he's ... not interested in collective information-processing? Or that he doesn't actually believe that's a real thing? "Credibly helpful unsolicited criticism should be delivered in private," he writes! I agree that the positive purpose of public criticism isn't solely to help the author. (If it were, there would be no reason for anyone but the author to read it.) But readers _do_ benefit from insightful critical commentary. (If they didn't, why would they read the comments section?) When I read a story and am interested in reading the comments _about_ a story, it's because I'm interested in the thoughts of other readers, who might have picked up subtleties I missed. I don't want other people to self-censor comments on any plot holes or [Fridge Logic](https://tvtropes.org/pmwiki/pmwiki.php/Main/FridgeLogic) they noticed for fear of dampening someone else's enjoyment or hurting the author's feelings.
Yudkowsky claims that criticism should be given in private because then the target "may find it much more credible that you meant only to help them, and weren't trying to gain status by pushing them down in public." I'll buy this as a reason why credibly _altruistic_ unsolicited criticism should be delivered in private.[^altruistic-criticism] Indeed, meaning _only_ to help the target just doesn't seem like a plausible critic motivation in most cases. But the fact that critics typically have non-altruistic motives doesn't mean criticism isn't helpful. In order to incentivize good criticism, you _want_ people to be rewarded with status for making good criticisms. You'd have to be some sort of communist to disagree with this![^communism-analogy]
Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is because probabilistic reasoning is broadly useful: epistemology can be derived from instrumental concerns. He agreed that severe wireheading issues potentially arise if you allow consequentialist concerns to affect your epistemics.
-But the alternative view had its own problems. If your AI consists of a consequentialist module that optimizes for utility in the world, and an epistemic module that optimizes for the accuracy of its beliefs, that's _two_ agents, not one: how could that be reflectively coherent? You could, perhaps, bite the bullet here, for fear that consequentialism doesn't tile and that wireheading was inevitable. On this view, Abram explained, "Agency is an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises." Whether this view was ultimately tenable or not, this did show that trying to forbid appeals-to-consequences entirely led to strange places.
+But the alternative view had its own problems. If your AI consists of a consequentialist module that optimizes for utility in the world, and an epistemic module that optimizes for the accuracy of its beliefs, that's _two_ agents, not one: how could that be reflectively coherent? You could, perhaps, bite the bullet here, for fear that consequentialism doesn't propagate itself and that wireheading was inevitable. On this view, Abram explained, "Agency is an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises." Whether this view was ultimately tenable or not, this did show that trying to forbid appeals-to-consequences entirely led to strange places.
-I didn't immediately have an answer for Abram, but I was grateful for the engagement. (Abram was clearly addressing the real philosophical issues, and not just trying to mess with me in the sense that almost everyone else in Berkeley was trying to mess with me.)
+I didn't immediately have an answer for Abram, but I was grateful for the engagement. (Abram was clearly addressing the real philosophical issues, and not just trying to mess with me in the way that almost everyone else in Berkeley was trying to mess with me.)
### Writer's Block (November 2019)
The reason it _should_ have been safe to write was because it's good to explain things. It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to tell the true story about why I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, _&c_."
-So why couldn't I write? Was it that I didn't know how to make "This is not a social attack" credible? Maybe because ... it wasn't true?? I was afraid that telling a story about our leader being intellectually dishonest was the nuclear option. If you're slowly but surely gaining territory in a conventional war, suddenly escalating to nukes seems pointlessly destructive. This metaphor was horribly non-normative ([arguing is not a punishment](https://srconstantin.github.io/2018/12/15/argue-politics-with-your-best-friends.html); carefully telling a true story _about_ an argument is not a nuke), but I didn't know how to make it stably go away.
+So why couldn't I write? Was it that I didn't know how to make "This is not a social attack" credible? Maybe because ... it wasn't true?? I was afraid that telling a story about our leader being intellectually dishonest was the nuclear option. If you're slowly but surely gaining territory in a conventional war, suddenly escalating to nukes would be pointlessly destructive. This metaphor was horribly non-normative ([arguing is not a punishment](https://srconstantin.github.io/2018/12/15/argue-politics-with-your-best-friends.html); carefully telling a true story _about_ an argument is not a nuke), but I didn't know how to make it stably go away.
A more motivationally-stable compromise would be to split off whatever generalizable insights that would have been part of the story into their own posts. ["Heads I Win, Tails?—Never Heard of Her"](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) had been a huge success as far as I was concerned, and I could do more of that kind of thing, analyzing the social stuff without making it personal, even if, secretly ("secretly"), it was personal.
-Ben replied that it didn't seem like it was clear to me that I was a victim of systemic abuse, and that I was trying to figure out whether I was being fair to my abuser. He thought if I could internalize that, I would be able to forgive myself a lot of messiness, which would reduce the perceived complexity of the problem.
+Ben replied that it didn't seem like it was clear to me that I was a victim of systemic abuse, and that I was trying to figure out whether I was being fair to my abusers. He thought if I could internalize that, I would be able to forgive myself a lot of messiness, which would make the problem less daunting.
-I said I would bite that bullet: yes! Yes, I was trying to figure out whether I was being fair to my abusers, and it was an important question to get right! "Other people's lack of standards harmed me, therefore I don't need to hold myself to standards in my response because I have [extenuating circumstances](https://www.lesswrong.com/posts/XYrcTJFJoYKX2DxNL/extenuating-circumstances)" would be a lame excuse.
+I said I would bite that bullet: Yes, I was trying to figure out whether I was being fair to my abusers, and it was an important question to get right! "Other people's lack of standards harmed me, therefore I don't need to hold myself to standards in my response because I have [extenuating circumstances](https://www.lesswrong.com/posts/XYrcTJFJoYKX2DxNL/extenuating-circumstances)" would be a lame excuse.
This seemed correlated with the recurring stalemated disagreement within our posse, where Michael/Ben/Jessica would say, "Fraud, if the word ever meant anything", and while I agreed that they were pointing to an important pattern of false representations optimized to move resources, I was still sympathetic to the Caliphate-defender's perspective that this usage of "fraud" was [motte-and-baileying](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/) between different senses of the word. (Most people would say that the things we were alleging MIRI and CfAR had done wrong were qualitatively different from the things Enron and Bernie Madoff had done wrong.[^ftx]) I wanted to do more work to formulate a more precise theory of the psychology of deception to describe exactly how things were messed up in a way that wouldn't be susceptible to the motte-and-bailey charge.
On 12 and 13 November 2019, Ziz [published](https://archive.ph/GQOeg) [several](https://archive.ph/6HsvS) [blog](https://archive.ph/jChxP) [posts](https://archive.ph/TPei9) laying out [her](/2019/Oct/self-identity-is-a-schelling-point/) grievances against MIRI and CfAR. On the fifteenth, Ziz and three collaborators staged a protest at the CfAR reunion being held at a retreat center in the North Bay near Camp Meeker. A call to the police falsely alleged that the protesters had a gun, [resulting in a](http://web.archive.org/web/20230316210946/https://www.pressdemocrat.com/article/news/deputies-working-to-identify-suspects-in-camp-meeker-incident/) [dramatic police reaction](http://web.archive.org/web/20201112041007/https://www.pressdemocrat.com/article/news/authorities-id-four-arrested-in-westminster-woods-protest/) (SWAT team called, highway closure, children's group a mile away being evacuated—the works).
-I was tempted to email links to the blog posts to the Santa Rosa _Press-Democrat_ reporter covering the incident (as part of my information-sharing-is-good virtue ethics), but decided to refrain because I predicted that Anna would prefer I didn't.
+I was tempted to email links to Ziz's blog posts to the Santa Rosa _Press-Democrat_ reporter covering the incident (as part of my information-sharing-is-good virtue ethics), but decided to refrain because I predicted that Anna would prefer I didn't.
The main relevance of this incident to my Whole Dumb Story is that Ziz's memoir–manifesto posts included [a 5,500-word section about me](https://archive.ph/jChxP#selection-1325.0-1325.4). Ziz portrays me as a slave to social reality, throwing trans women under the bus to appease the forces of cissexism. I don't think that's what's going on with me, but I can see why the theory was appealing.
--------
-On 12 December 2019 I had an interesting interaction with [Somni](https://somnilogical.tumblr.com/), one of the "Meeker Four"—presumably out on bail at this time?—on Discord.
+On 12 December 2019 I had an interesting exchange with [Somni](https://somnilogical.tumblr.com/), one of the "Meeker Four"—presumably out on bail at this time?—on Discord.
I told her it was surprising that she spent so much time complaining about CfAR, Anna Salamon, Kelsey Piper, _&c._, but _I_ seemed to get along fine with her—because naïvely, one would think that my views were so much worse. Was I getting a pity pass because she thought false consciousness was causing me to act against my own transfem class interests? Or what?
In order to be absolutely clear about my terrible views, I said that I was privately modeling a lot of transmisogyny complaints as something like—a certain neurotype-cluster of non-dominant male is latching onto locally ascendant social-justice ideology in which claims to victimhood can be leveraged into claims to power. Traditionally, men are moral agents, but not patients; women are moral patients, but not agents. If weird non-dominant men aren't respected if identified as such (because low-ranking males aren't valuable allies, and don't have the intrinsic moral patiency of women), but _can_ get victimhood/moral-patiency points for identifying as oppressed transfems, that creates an incentive gradient for them to do so. No one was allowed to notice this except me, because everybody [who's anybody](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) prefers to stay on the good side of social-justice ideology unless they have Something to Protect that requires defying it.
-Somni said we got along because I was being victimized by the same forces of gaslighting and wasn't lying about my agenda. Maybe she _should_ be complaining about me?—but I seemed to be following a somewhat earnest epistemic process, whereas Kelsey, Scott, and Anna were not. If I were to start going, "Here's my rationality org; rule #1: no transfems (except me); rule #2, no telling people about rule #1", then she would talk about it.
+Somni said we got along because I was being victimized by the same forces of gaslighting as her and wasn't lying about my agenda. Maybe she _should_ be complaining about me?—but I seemed to be following a somewhat earnest epistemic process, whereas Kelsey, Scott, and Anna were not. If I were to start going, "Here's my rationality org; rule #1: no transfems (except me); rule #2, no telling people about rule #1", then she would talk about it.
I would later remark to Anna that Somni and Ziz saw themselves as being oppressed by people's hypocritical and manipulative social perceptions and behavior. Merely using the appropriate language ("Somni ... she", _&c._) protected her against threats from the Political Correctness police, but it actually didn't protect against threats from the Zizians. The mere fact that I wasn't optimizing for PR (lying about my agenda, as Somni said) was what made me not a direct enemy (although still a collaborator) in their eyes.
I had a pretty productive blogging spree in December 2019. In addition to a number of [more](/2019/Dec/political-science-epigrams/) [minor](/2019/Dec/the-strategy-of-stigmatization/) [posts](/2019/Dec/i-want-to-be-the-one/) [on](/2019/Dec/promises-i-can-keep/) [this](/2019/Dec/comp/) [blog](/2019/Dec/more-schelling/) [and](https://www.lesswrong.com/posts/XbXJZjwinkoQXu4db/funk-tunul-s-legacy-or-the-legend-of-the-extortion-war) [on](https://www.lesswrong.com/posts/y4bkJTtG3s5d6v36k/stupidity-and-dishonesty-explain-each-other-away) _[Less](https://www.lesswrong.com/posts/tCwresAuSvk867rzH/speaking-truth-to-power-is-a-schelling-point) [Wrong](https://www.lesswrong.com/posts/jrLkMFd88b4FRMwC6/don-t-double-crux-with-suicide-rock)_, I also got out some more significant posts bearing on my agenda.
-On this blog, in ["Reply to Ozymandias on Fully Consensual Gender"](/2019/Dec/reply-to-ozymandias-on-fully-consensual-gender/), I finally got out at least a partial reply to [Ozy's June 2018 reply](https://thingofthings.wordpress.com/2018/06/18/man-should-allocate-some-more-categories/) to ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), affirming the relevance of an analogy Ozy had made between the socially-constructed natures of money and social gender, while denying that the analogy supported gender by self-identification. (I had been [working on a more exhaustive reply](/2018/Dec/untitled-metablogging-26-december-2018/#reply-to-ozy), but hadn't managed to finish whittling it into a shape that I was totally happy with.)
+On this blog, in ["Reply to Ozymandias on Fully Consensual Gender"](/2019/Dec/reply-to-ozymandias-on-fully-consensual-gender/), I finally got out at least a partial reply to [Ozy Brennan's June 2018 reply](https://thingofthings.wordpress.com/2018/06/18/man-should-allocate-some-more-categories/) to ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/), affirming the relevance of an analogy Ozy had made between the socially-constructed natures of money and social gender, while denying that the analogy supported gender by self-identification. (I had been [working on a more exhaustive reply](/2018/Dec/untitled-metablogging-26-december-2018/#reply-to-ozy), but hadn't managed to finish whittling it into a shape that I was totally happy with.)
I also polished and pulled the trigger on ["On the Argumentative Form 'Super-Proton Things Tend to Come In Varieties'"](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/), my reply to Yudkowsky's implicit political concession to me back in March. I had been reluctant to post it based on an intuition of, "My childhood hero was trying to _do me a favor_; it would be a betrayal to reject the gift." The post itself explained why that intuition was crazy, but _that_ just brought up more anxieties about whether the explanation constituted leaking information from private conversations—but I had chosen my words carefully such that it wasn't. ("Even if Yudkowsky doesn't know you exist [...] he's _effectively_ doing your cause a favor" was something I could have plausibly written in the possible world where the antecedent was true.) Jessica said the post seemed good.
-On _Less Wrong_, the mods had just announced [a new end-of-year Review event](https://www.lesswrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review), in which the best post from the year before would be reviewed and voted on, to see which had stood the test of time and deserved to be part of our canon of cumulative knowledge. (That is, this Review period starting in late 2019 would cover posts published in _2018_.)
+On _Less Wrong_, the mods had just announced [a new end-of-year Review event](https://www.lesswrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review), in which the best posts from the year before would be reviewed and voted on, to see which had stood the test of time and deserved to be part of our canon of cumulative knowledge. (That is, this Review period starting in late 2019 would cover posts published in _2018_.)
This provided me with [an affordance](https://www.greaterwrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review-posts-need-at-least-2-nominations/comment/d4RrEizzH85BdCPhE) to write some posts critiquing posts that had been nominated for the Best-of-2018 collection that I didn't think deserved such glory. In response to ["Decoupling _vs._ Contextualizing Norms"](https://www.lesswrong.com/posts/7cAsBPGh98pGyrhz9/decoupling-vs-contextualising-norms) (which had been [cited in a way that I thought obfuscatory during the "Yes Implies the Possibility of No" trainwreck](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/wejvnw6QnWrvbjgns)), I wrote ["Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary"](https://www.lesswrong.com/posts/GSz8SrKFfW7fJK2wN/relevance-norms-or-gricean-implicature-queers-the-decoupling), appealing to our [academically standard theory of how context affects meaning](https://plato.stanford.edu/entries/implicature/) to explain why "decoupling _vs._ contextualizing norms" is a false dichotomy.
[^not-lying-title]: The ungainly title was softened from an earlier draft following feedback from the posse; I had originally written "... Surprisingly Useless".
-I thought this one cut to the heart of the shocking behavior that we had seen from Yudkowsky lately. The "hill of meaning in defense of validity" affair had been driven by Yudkowsky's obsession with not technically lying, on two levels: he had proclaimed that asking for new pronouns "Is. Not. Lying." (as if _that_ were the matter that anyone cared about—as if conservatives and gender-critical feminists should just pack up and go home after it had been demonstrated that trans people aren't _lying_), and he had seen no interest in clarifying his position on the philosophy of language, because he wasn't lying when he said that preferred pronouns weren't lies (as if _that_ were the matter my posse cared about—as if I should keep honoring him as my caliph after it had been demonstrated that he hadn't _lied_). But his Sequences had [articulated a higher standard](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument) than merely not-lying. If he didn't remember, I could at least hope to remind everyone else.
+I thought that one cut to the heart of the shocking behavior that we had seen from Yudkowsky lately. The "hill of meaning in defense of validity" affair had been driven by Yudkowsky's obsession with not technically lying, on two levels: he had proclaimed that asking for new pronouns "Is. Not. Lying." (as if _that_ were the matter that anyone cared about—as if conservatives and gender-critical feminists should just pack up and go home after it had been demonstrated that trans people aren't _lying_), and he had seen no interest in clarifying his position on the philosophy of language, because he wasn't lying when he said that preferred pronouns weren't lies (as if _that_ were the matter my posse cared about—as if I should keep honoring him as my caliph after it had been demonstrated that he hadn't _lied_). But his Sequences had [articulated a higher standard](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument) than merely not-lying. If he didn't remember, I could at least hope to remind everyone else.
I also wrote a little post, ["Free Speech and Triskaidekaphobic Calculators"](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to), arguing that it should be easier to have a rationality/alignment community that just does systematically correct reasoning than a politically savvy community that does systematically correct reasoning except when that would taint AI safety with political drama, analogous to how it's easier to build a calculator that just does correct arithmetic than a calculator that does correct arithmetic except that it never displays the result 13. In order to build a "[triskaidekaphobic](https://en.wikipedia.org/wiki/Triskaidekaphobia) calculator", you would need to "solve arithmetic" anyway, and the resulting product would be limited not only in its ability to correctly compute `6 + 7` but also in the infinite family of calculations that include 13 as an intermediate result: if you can't count on `(6 + 7) + 1` being the same as `6 + (7 + 1)`, you lose the associativity of addition.
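The calculator analogy can be made concrete with a toy sketch (the function name and error behavior here are my own illustration, not from the linked post): a "triskaidekaphobic" adder that refuses to produce 13 still has to implement correct addition internally, and the forbidden value poisons every computation that passes through it.

```python
def add(a, b):
    """Correct addition, except for the 'triskaidekaphobic' constraint."""
    result = a + b
    if result == 13:
        raise ValueError("unspeakable result")  # the politically forbidden output
    return result

# Ordinary arithmetic is associative:
assert (6 + 7) + 1 == 6 + (7 + 1) == 14

# The constrained calculator breaks associativity: one grouping of the
# same sum fails because it passes through the forbidden intermediate.
try:
    add(add(6, 7), 1)  # fails: 6 + 7 = 13
except ValueError:
    pass
assert add(6, add(7, 1)) == 14  # succeeds: 7 + 1 = 8, then 6 + 8 = 14
```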
### A Newtonmas Party (December 2019)
On 20 December 2019, Scott Alexander messaged me on Discord—that I shouldn't answer if it would be unpleasant, but that he was thinking of asking about autogynephilia on the next _Slate Star Codex_ survey, and wanted to know if I had any suggestions about question design, or if I could suggest any "intelligent and friendly opponents" to consult. After reassuring him that he shouldn't worry about answering being unpleasant ("I am actively at war with the socio-psychological forces that make people erroneously think that talking is painful!"), I referred him to my friend [Tailcalled](https://surveyanon.wordpress.com/), who had a lot of experience conducting surveys and ran a "Hobbyist Sexologists" Discord server, which seemed likely to have some friendly opponents.
The next day (I assume while I still happened to be on his mind), Scott also [commented on](https://www.greaterwrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist/comment/LJp2PYh3XvmoCgS6E) "Maybe Lying Doesn't Exist", my post from back in October replying to his "Against Lie Inflation."
I was frustrated with his reply, which I felt was not taking into account points that I had already covered in detail. A few days later, on the twenty-fourth, I [succumbed to](https://www.greaterwrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist/comment/xEan6oCQFDzWKApt7) [the temptation](https://www.greaterwrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist/comment/wFRtLj2e7epEjhWDH) [to blow up at him](https://www.greaterwrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist/comment/8DKi7eAuMt7PBYcwF) in the comments.
> oh, I guess we're Jewish
> that attenuates the "is a hugely inappropriately socially-aggressive blog comment going to ruin someone's Christmas" fear somewhat
Scott messaged back at 11:08 the next morning, Christmas Day. He explained that the thought process behind his comment was that he still wasn't sure where we disagreed and didn't know how to proceed except to dump his understanding of the philosophy (which would include things I already knew) and hope that I could point to the step I didn't like. He didn't know how to convince me of his sincerity and rebut my accusations of him motivatedly playing dumb (which he was inclined to attribute to the malign influence of Michael Vassar's gang).
I explained that the reason for those accusations was that I _knew_ he knew about strategic equivocation, because he taught everyone else about it (as in his famous posts about [the motte-and-bailey doctrine](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/) and [the noncentral fallacy](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world)). And so when he acted like he didn't get it when I pointed out that this also applied to "trans women are women", that just seemed _implausible_.
I was skeptical of the claim that no one was "really" being kept ignorant. If you're sufficiently clever and careful and you remember how language worked when Airstrip One was still Britain, then you can still think, internally, and express yourself as best you can in Newspeak. But a culture in which Newspeak is mandatory, and all of Oceania's best philosophers have clever arguments for why Newspeak doesn't distort people's beliefs, doesn't seem like a culture that could solve AI alignment.
I linked to Zvi Mowshowitz's post about how [the claim that "everybody knows" something](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) gets used to silence people trying to point out the thing: in this case, basically, "'Everybody knows' our kind of trans women are sampled from (part of) the male multivariate trait distribution rather than the female multivariate trait distribution, why are you being a jerk and pointing this out?" But I didn't think that everyone knew.[^survey-whether-everyone-knows] I thought the people who sort-of knew were being intimidated into doublethinking around it.
[^survey-whether-everyone-knows]: On this point, it may be instructive to note that a 2023 survey [found that only 60% of the UK public knew that "trans women" were born male](https://www.telegraph.co.uk/news/2023/08/06/third-of-britons-dont-know-trans-women-born-male/).
At this point, it was almost 2 _p.m._ (the paragraphs above summarizing a larger volume of typing), and Scott mentioned that he wanted to go to the Event Horizon Christmas party, and asked if I wanted to come and continue the discussion there. I assented, and thanked him for his time; it would be really exciting if we could avoid a rationalist civil war.
When I arrived at the party, people were doing a reading of [the "Hero Licensing" dialogue epilogue](https://www.lesswrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing) to [_Inadequate Equilibria_](https://equilibriabook.com/toc/), with Yudkowsky himself playing the Mysterious Stranger. At some point, Scott and I retreated upstairs to continue our discussion. By the end of it, I was feeling more assured of Scott's sincerity, if not his competence. Scott said he would edit in a disclaimer note at the end of "... Not Man for the Categories".
It would have been interesting if I also got the chance to talk to Yudkowsky for a few minutes, but if I did, I wouldn't be allowed to recount any details of that here due to the privacy rules I'm following.
The rest of the party was nice. People were reading funny GPT-2 quotes from their phones. At one point, conversation happened to zag in a way that let me show off the probability fact I had learned during Math and Wellness Month. A MIRI researcher sympathetically told me that it would be sad if I had to leave the Bay Area, which I thought was nice. There was nothing about the immediate conversational context to suggest that I might have to leave the Bay, but I guess by this point, my existence had become a context.
[^no-scare-quotes]: Enough to not even scare-quote the term here.
It was around this time that someone told me that I wasn't adequately taking into account that Yudkowsky was "playing on a different chessboard" than me. (A public figure focused on reducing existential risk from artificial general intelligence is going to sense different trade-offs around Kolmogorov complicity strategies than an ordinary programmer or mere worm focused on _things that don't matter_.) No doubt. But at the same time, I thought Yudkowsky wasn't adequately taking into account the extent to which some of his longtime supporters (like Michael or Jessica) were, or had been, counting on him to uphold certain standards of discourse (rather than chess).
Another effect of my feeling better after the party was that my motivation to keep working on my memoir of the Category War vanished—as if I were still putting weight on a [zero-sum frame](https://unstableontology.com/2019/09/10/truth-telling-is-aggression-in-zero-sum-frames/) in which the memoir was a nuke that I only wanted to use as an absolute last resort.
On 10 February 2020, Scott Alexander published ["Autogenderphilia Is Common and Not Especially Related to Transgender"](https://slatestarcodex.com/2020/02/10/autogenderphilia-is-common-and-not-especially-related-to-transgender/), an analysis of the results of the autogynephilia/autoandrophilia questions on the recent _Slate Star Codex_ survey. Based on eyeballing the survey data, Alexander proposed "if you identify as a gender, and you're attracted to that gender, it's a natural leap to be attracted to yourself being that gender" as a "very boring" theory.
I appreciated the endeavor of getting real data, but I was unimpressed with Alexander's analysis for reasons that I found difficult to write up in a timely manner; I've only just recently gotten around to [polishing my draft and throwing it up as a standalone post](/2023/Dec/reply-to-scott-alexander-on-autogenderphilia/). Briefly, I can see how it looks like a natural leap if you're verbally reasoning about "gender", but on my worldview, a hypothesis that puts "gay people (cis and trans)" in the antecedent is not boring and takes on a big complexity penalty, because that group is heterogeneous with respect to the underlying mechanisms of sexuality. I already don't have much use for "if you are a sex, and you're attracted to that sex" as a category of analytical interest, because I think gay men and lesbians are different things that need to be studied separately. Given that, "if you identify as a gender, and you're attracted to that gender" (with respect to "gender", not sex) comes off even worse: it's grouping together lesbians, and gay men, and heterosexual males with a female gender identity, and heterosexual females with a male gender identity. What causal mechanism could that correspond to?
(I do like the [hypernym](https://en.wikipedia.org/wiki/Hyponymy_and_hypernymy) _autogenderphilia_.)
### A Private Document About a Disturbing Hypothesis (early 2020)
There's another extremely important part of the story that would fit around here chronologically, but I again find myself constrained by privacy norms: everyone's common sense of decency (this time, even including my own) screams that it's not my story to tell.
Adherence to norms is fundamentally fraught for the same reason AI alignment is. In [rich domains](https://arbital.com/p/rich_domain/), attempts to regulate behavior with explicit constraints face a lot of adversarial pressure from optimizers bumping up against the constraint and finding the [nearest unblocked strategies](https://arbital.greaterwrong.com/p/nearest_unblocked) that circumvent it. The intent of privacy norms is to conceal information. But [_information_ in Shannon's sense](https://en.wikipedia.org/wiki/Information_theory) is about what states of the world can be inferred given the states of communication signals; it's much more expansive than the denotative meaning of a text.
[^narcissistic-delusions]: Reasonable trans people aren't the ones driving [the central tendency of the trans rights movement](/2019/Aug/the-social-construction-of-reality-and-the-sheer-goddamned-pointlessness-of-reason/). When analyzing a wave of medical malpractice on children, I think I'm being literal in attributing causal significance to a political motivation to affirm the narcissistic delusions of (some) guys like me, even though not all guys like me are delusional, and many guys like me are doing fine maintaining a non-guy social identity without spuriously dragging children into it.
That much was obvious to anyone who's had their Blanchardian enlightenment, and wouldn't have been worth the effort of writing a special private Document about. The disturbing hypothesis that occurred to me in early 2020 was that, in the culture of the current year, affirmation of a cross-sex identity might happen to kids who weren't HSTS-taxon at all.
Very small children who are just learning what words mean say a lot of things that aren't true (I'm a grown-up; I'm a cat; I'm a dragon), and grownups tend to play along in the moment as a fantasy game, but they don't _coordinate to make that the permanent new social reality_.
For another thing, from the skeptical family friend's perspective, it's striking how the family and other grown-ups in the child's life seem to treat the child's statements about gender starkly differently than the child's statements about everything else.
Imagine that, around the time of the social transition, the child responded to "Hey kiddo, I love you" with, "I'm a girl and I'm a vegetarian." In the skeptic's view, both halves of that sentence were probably generated by the same cognitive algorithm—something like, "practice language and be cute to caregivers, making use of themes from the local cultural environment" (of nice smart liberal grown-ups who talk a lot about gender and animal welfare). In the skeptic's view, if you're not going to change the kid's diet on the basis of the second part, you shouldn't social transition the kid on the basis of the first part.
Perhaps even more striking is the way that the grown-ups seem to interpret the child's conflicting or ambiguous statements about gender. Imagine that, around the time social transition was being considered, a parent asked the child whether the child would prefer to be addressed as "my son" or "my daughter."
Suppose the child replied, "My son. Or you can call me she. Everyone should call me she or her or my son."
The grown-ups seem to mostly interpret exchanges like this as indicating that while the child is trans, she's confused about the gender of the words "son" and "daughter". They don't seem to pay much attention to the competing hypothesis that the child knows he's his parents' "son", but is confused about the implications of she/her pronouns.
It's not hard to imagine how differential treatment by grown-ups of gender-related utterances could unintentionally shape outcomes. This may be clearer if we imagine a non-gender case. Suppose the child's father's name is John Smith, and that after a grown-up explains ["Sr."/"Jr." generational suffixes](https://en.wikipedia.org/wiki/Suffix_(name)#Generational_titles) after it happened to come up in fiction, the child declares that his name is John Smith, Jr. now. Caregivers are likely to treat this as just a cute thing that the kid said, quickly forgotten by all. But if caregivers feared causing psychological harm by denying a declared name change, one could imagine them taking the child's statement as a prompt to ask followup questions. ("Oh, would you like me to call you _John_ or _John Jr._, or just _Junior_?") With enough followup, it seems plausible that a name change to "John Jr." would meet with the child's assent and "stick" socially. The initial suggestion would have come from the child, but most of the [optimization](https://www.lesswrong.com/posts/D7EcMhL26zFNbJ3ED/optimization)—the selection that this particular statement should be taken literally and reinforced as a social identity, while others are just treated as a cute but not overly meaningful thing the kid said—would have come from the adults.
Finally, there is the matter of the child's behavior and personality. Suppose that, around the same time that the child's social transition was going down, a parent reported the child being captivated by seeing a forklift at Costco. A few months later, another family friend remarked that maybe the child is very competitive, and that "she likes fighting so much because it's the main thing she knows of that you can _win_."
I think people who are familiar with the relevant scientific literature or come from an older generation would look at observations like these and say, Well, yes, he's a boy; boys like vehicles (_d_ ≈ 2.44!) and boys like fighting. Some of them might suggest that these observations should be counterindicators for transition—that the cross-gender verbal self-reports are less decision-relevant than the fact of a male child behaving in male-typical ways. But nice smart liberal grown-ups in the current year don't think that way.
One might imagine that the [inferential distance](https://www.lesswrong.com/posts/HLqWn5LASfhhArZ7w/expecting-short-inferential-distances) between nice smart liberal grown-ups and people from an older generation (or a skeptical family friend) might be crossed by talking about it, but it turns out that talking doesn't help much when people have radically different priors and interpret the same evidence differently.
(When recounting this conversation, the parent adds that rainbows hadn't come up before, and that the child was looking at a rainbow-patterned item at the time of answering.)
It would seem that the interpretation of this kind of evidence depends on one's prior convictions. If you think that transition is a radical intervention that might pass a cost–benefit analysis for treating rare cases of intractable sex dysphoria, answers like "because girls like specific things like rainbows" are disqualifying. (A fourteen-year-old who could read an informed-consent form would be able to give a more compelling explanation than that, but a three-year-old just isn't ready to make this kind of decision.) Whereas if you think that some children have a gender that doesn't match their assigned sex at birth, you might expect them to express that affinity at age three, without yet having the cognitive or verbal abilities to explain it. Teasing apart where these two views make different predictions seems like it should be possible, but might be beside the point, if the real crux is over [what categories are made for](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/). (Is sex an objective fact that sometimes merits social recognition, or is it better to live in a Society where people are free to choose the gender that suits them?)
Anyway, that's just a hypothesis that occurred to me in early 2020, about something that _could_ happen in the culture of the current year, hypothetically, as far as I know. I'm not a parent and I'm not an expert on child development. And even if the "Clever Hans" etiological pathway I conjectured is real, the extent to which it might apply to any particular case is complex; you could imagine a kid who _was_ "actually trans" whose social transition merely happened earlier than it otherwise would have due to these dynamics.
### Philosophy Blogging Interlude 3! (mid-2020)
I continued my philosophy of language work, looking into the academic literature on formal models of communication and deception. I wrote a [couple](https://www.lesswrong.com/posts/4hLcbXaqudM9wSeor/philosophy-in-the-darkest-timeline-basics-of-the-evolution) [posts](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist) encapsulating what I learned from that—and I continued work on my "advanced" philosophy of categorization thesis, the sequel to ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries)
The disclaimer note that Scott Alexander had appended to "... Not Man for the Categories" after our Christmas 2019 discussion had said:
I decided on "Unnatural Categories Are Optimized for Deception" as the title for my advanced categorization thesis. Writing it up was a major undertaking. There were a lot of nuances to address and potential objections to preëmpt, and I felt that I had to cover everything. (A reasonable person who wanted to understand the main ideas wouldn't need so much detail, but I wasn't up against reasonable people who wanted to understand.)
In September 2020, Yudkowsky Tweeted [something about social media incentives prompting people to make nonsense arguments](https://twitter.com/ESYudkowsky/status/1304824253015945216), and something in me boiled over. The Tweets were fine in isolation, but I rankled at them given the absurdly disproportionate efforts I was undertaking to unwind his incentive-driven nonsense. I left [a snarky, pleading reply](/images/davis-snarky_pleading_reply.png) and [vented on my own timeline](https://twitter.com/zackmdavis/status/1304838346695348224) (with preview images from the draft of "Unnatural Categories Are Optimized for Deception"):
> Who would have thought getting @ESYudkowsky's robot cult to stop trying to trick me into cutting my dick off (independently of the empirical facts determining whether or not I should cut my dick off) would involve so much math?? OK, I guess the math part isn't surprising, but—[^trying-to-trick-me]
>
> [...] See, I thought you were playing on the chessboard of _being correct about rationality_. Such that, if you accidentally mislead people about your own philosophy of language, you could just ... issue a clarification? I and Michael and Ben and Sarah and ["Riley"] _and Jessica_ wrote to you about this and explained the problem in _painstaking_ detail, **and you stonewalled us.** Why? **Why is this so hard?!**
>
> [...]
>
> No. The thing that's been driving me nuts for twenty-one months is that <strong><em><span style="color: #F00000;">I expected Eliezer Yudkowsky to tell the truth</span></em></strong>. I remain,
>
>
> [...] The sinful and corrupted part wasn't the _initial_ Tweets; the sinful and corrupted part is this **bullshit stonewalling** when your Twitter followers and me and Michael and Ben and Sarah and ["Riley"] and Jessica tried to point out the problem. I've _never_ been arguing against your private universe [...]; the thing I'm arguing against in ["Where to Draw the Boundaries?"](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) (and **my [unfinished draft sequel](https://github.com/zackmdavis/Category_War/blob/cefa98c3abe/unnatural_categories_are_optimized_for_deception.md)**, although that's more focused on what Scott wrote) is the **_actual text_ you _actually published_, not your private universe.**
>
> [...] you could just **publicly clarify your position on the philosophy of language** the way an intellectually-honest person would do if they wanted their followers to have correct beliefs about the philosophy of language?!
>
> You wrote:
>
> [...]
>
> That's kind of like defining Solomonoff induction, and then saying, "Having said this, we've built AGI." No, you haven't said all the facts! Configuration space is _very high-dimensional_; we don't have _access_ to the individual points. Trying to specify the individual points ("say all the facts") would be like what you wrote about in ["Empty Labels"](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels)—"not just that I can vary the label, but that I can get along just fine without any label at all." Since that's not possible, we need to group points in the space together so that we can use observations from the coordinates that we _have_ observed to make probabilistic inferences about the coordinates we haven't. But there are _mathematical laws_ governing how well different groupings perform, and those laws _are_ a matter of Truth, not a mere policy debate.
>
> [...]
>
> But if behavior at equilibrium isn't deceptive, there's just _no such thing as deception_; I wrote about this on Less Wrong in ["Maybe Lying Can't Exist?!"](https://www.lesswrong.com/posts/YptSN8riyXJjJ8Qp8/maybe-lying-can-t-exist) (drawing on the academic literature about sender–receiver games). I don't think you actually want to bite that bullet?
>
Abram was also right that it would be weird if reflective coherence was somehow impossible: the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'its own' code." In that light, it made sense to regard "have accurate beliefs" as merely a convergent instrumental subgoal, rather than what rationality is about—as sacrilegious as that felt to type.
And yet, somehow, "have accurate beliefs" seemed more fundamental than other convergent instrumental subgoals like "seek power and resources". Could this be made precise? As a stab in the dark, was it possible that the [theorems on the ubiquity of power-seeking](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-robustly-instrumental-in-mdps) might generalize to a similar conclusion about "accuracy-seeking"? If it didn't, the reason why it didn't might explain why accuracy seemed more fundamental.
------