MIRI researcher Scott Garrabrant wrote a post about how ["Yes Requires the Possibility of No"](https://www.lesswrong.com/posts/G5TwJ9BGxcgh5DsmQ/yes-requires-the-possibility-of-no). Information-theoretically, a signal sent with probability one transmits no information: you can only learn something from hearing a "Yes" if you believed that the answer could have been "No". I saw an analogy to my philosophy-of-language thesis, and mentioned it in a comment: if you want to believe that _x_ belongs to category _C_, you might try redefining _C_ in order to make the question "Is _x_ a _C_?" come out "Yes", but you can only do so at the expense of making _C_ less useful. Meaningful category-membership (Yes) requires the possibility of non-membership (No).
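(To make the information-theoretic point concrete, here's a quick illustrative calculation of my own, not from Garrabrant's post: the information conveyed by an answer is its surprisal, −log₂ of its prior probability, which is zero when the answer was certain in advance.)

```python
import math

def surprisal_bits(p: float) -> float:
    """Bits of information conveyed by observing an event of probability p."""
    return -math.log2(p)

print(surprisal_bits(1.0))  # 0.0 bits: a "Yes" that was certain in advance tells you nothing
print(surprisal_bits(0.5))  # 1.0 bit: a "Yes" that could have been "No" carries a full bit
```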
Someone [objected that](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/FxSZwECjhgYE7p2du) she found it "unpleasant that [I] always bring [my] hobbyhorse in, but in an 'abstract' way that doesn't allow discussing the actual object level question"; it made her feel "attacked in a way that allow[ed] for no legal recourse to defend [herself]." (I thought I remembered meeting a man with the same last name at the 2016 Summer Solstice event in Berkeley; maybe it was her brother.) I [replied](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/32GPaijsSwX2NSFJi) that that was understandable, but that I hoped it was also understandable that I found it unpleasant that our standard Bayesian philosophy of language somehow got politicized, such that my attempts to do correct epistemology were perceived as attacking people.
Such a trainwreck ensued that the mods manually [moved the comments to their own post](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019). Based on the karma scores and what was said,[^yes-requires-slapfight-highlights] I count it as a victory.
[^yes-requires-slapfight-highlights]: I particularly appreciated Said Achmiz's [defense of disregarding community members' feelings](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/EsSdLMrFcCpSvr3pG), and [Ben's commentary on speech acts that lower the message length of proposals to attack some group](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/TXbgr7goFtSAZEvZb).
On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) as one of Scott Alexander's best essays. I argued that it would be better to cite almost literally any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: either Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, or I could call it a blatant lie because no rule of rationality says I shouldn't draw the category boundaries of "blatant lie" that way. Ruby Bloom, the new moderator who wrote the draft, [was persuaded](https://www.greaterwrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for/comment/oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory."
But "victories" weren't particularly comforting when I resented this becoming a political slapfight at all. I wrote to Anna and Steven Kaas (another old-timer who I was trying to "recruit" to my side of the civil war). In ["What You Can't Say"](http://www.paulgraham.com/say.html), Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what your real work s. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except in this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason should probably be doing more fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improve our collective ability to reason" in a way that they don't hurt the mission of "make a lot of money writing software."
------
I got into a scuffle with Ruby Bloom on his post on ["Causal Reality _vs_. Social Reality"](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality). I wrote [what I thought was a substantive critique](https://www.greaterwrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality/comment/X8u8ozpvhwcK4GskA), but Ruby [complained that](https://www.greaterwrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality/comment/7b2pWiCL33cqhTabg) my tone was too combative, and asked for more charity and collaborative truth-seeking[^collaborative-truth-seeking] in any future comments.
[^collaborative-truth-seeking]: [No one ever seems to be able to explain to me what this phrase means.](https://www.lesswrong.com/posts/uvqd3YiBcrPxXzxQM/what-does-the-word-collaborative-mean-in-the-phrase)
(My previous interaction with Ruby had been my challenge to "... Not Man for the Categories" appearing on the _Less Wrong_ FAQ. Maybe he couldn't let me win again so quickly?)
I emailed the posse about the thread, on the grounds that gauging the psychology of the mod team was relevant to upcoming [Voice _vs._ Exit](https://en.wikipedia.org/wiki/Exit,_Voice,_and_Loyalty) choices. Meanwhile on _Less Wrong_, Ruby kept doubling down:
> [I]f the goal is everyone being less wrong, I think some means of communicating are going to be more effective than others. I, at least, am a social monkey. If I am bluntly told I am wrong (even if I agree, even in private—but especially in public), I will feel attacked (if only at the S1 level), threatened (socially), and become defensive. It makes it hard to update and it makes it easy to dislike the one who called me out. [...]
"Wow, he's really overtly arguing that people should lie to him to protect his feelings," Ben commented via email. I would later complain to Anna that Ruby's profile said he was one of two people to have volunteered for CfAR on three continents. If this was the level of performance we could expect from veteran CfAR participants, what was CfAR _for_?
[I replied to Ruby that](https://www.greaterwrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality/comment/v3zh3KhKNTdMXWkJH) you could just directly respond to your interlocutor's arguments. Whether you respect them as a thinker is off-topic. "You said X, but this is wrong because of Y" isn't a personal attack!
Jessica said that there's no point in getting mad at [MOPs](http://benjaminrosshoffman.com/construction-beacons/). I said I was a _little_ bit mad, because I specialized in discourse strategies that were susceptible to getting trolled like this. I thought it was ironic that this happened on a post that was explicitly about causal _vs._ social reality; it's possible that I wouldn't have been such a hardass about "whether or not I respect you is off-topic" if it weren't for that prompt.
Jessica ended up writing a post, ["Self-Consciousness Wants Everything to Be About Itself"](https://www.lesswrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself), arguing that tone arguments are mainly about people silencing discussion of actual problems in order to protect their feelings. She used as a central example a case study of a college official crying and saying that she "felt attacked" in response to complaints about her office being insufficiently supportive of a racial community.
Jessica was surprised by how well it worked, judging by [Ruby mentioning silencing in a subsequent apology to me](https://www.greaterwrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality/comment/wfzxj4GGRtZGMG9ni) (plausibly influenced by Jessica's post) and by [an exchange between Ray and Ruby that she thought was "surprisingly okay"](https://www.greaterwrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself/comment/EW3Mom9qfoggfBicf).
From this, Jessica derived the moral that when people are doing something that seems obviously terrible and in bad faith, it can help to publicly explain why the abstract thing is bad, without accusing anyone. This made sense because people didn't want to be held to standards that other people weren't being held to: a call-out directed at oneself personally could be selective enforcement, but a call-out of the abstract pattern invited changing one's behavior if the new equilibrium looked better.
I still sympathized with the pushback from Caliphate supporters against using "scam"/"fraud"/"lie"/_&c._ language to include motivated [elephant-in-the-brain](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain)-like distortions. I conceded that this was a boring semantic argument, but I feared that until we invented better linguistic technology, the boring semantic argument was going to continue sucking up discussion bandwidth with others.
-"Am I being too tone-policey here?" I asked the coordination group. "Is it better if I explicitly disclaim, 'This is marketing advice; I'm not claiming to be making a substantive argument'?" (Subject: "Re: reception of 'The AI Timelines Scam' is better than expected!")
+"Am I being too tone-policey here?" I asked the posse. "Is it better if I explicitly disclaim, 'This is marketing advice; I'm not claiming to be making a substantive argument'?" (Subject: "Re: reception of 'The AI Timelines Scam' is better than expected!")
Ben replied, "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you?" He argued that investigations of financial fraud focus on false promises about money, rather than the psychological minutiæ of the perp's motives.
Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" _facts_.) I agreed that conflating political positions with facts would be bad. I wasn't interested in defending the "alt-right" (whatever that means) broadly. But I had learned stuff from reading far-right authors [(most notably Mencius Moldbug)](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#unqualified-reservations) and from talking with "Thomas". I was starting to appreciate [what Michael had said about "Less precise is more violent" back in April](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#less-precise-is-more-violent) when I was talking about criticizing "rationalists".
Jessica asked if my opinion would change depending on whether Yudkowsky thought neoreaction was intellectually worth engaging with. (Yudkowsky [had said years ago](https://www.greaterwrong.com/posts/6qPextf9KyWLFJ53j/why-is-mencius-moldbug-so-popular-on-less-wrong-answer-he-s/comment/TcLhiMk8BTp4vN3Zs) that Moldbug was low quality.)
I did believe that Yudkowsky believed that neoreaction was not worth engaging with. I would never fault anyone for saying "I vehemently disagree with what little I've read and/or heard of this author." I wasn't accusing him of being insincere.
I said I would bite that bullet: yes! Yes, I was trying to figure out whether I was being fair to my abusers, and it was an important question to get right! "Other people's lack of standards harmed me, therefore I don't need to hold myself to standards in my response because I have [extenuating circumstances](https://www.lesswrong.com/posts/XYrcTJFJoYKX2DxNL/extenuating-circumstances)" would be a lame excuse.
This seemed correlated with the recurring stalemated disagreement within our posse, where Michael/Ben/Jessica would say, "Fraud, if the word ever meant anything", and while I agreed that they were pointing to an important pattern of false representations optimized to move resources, I was still sympathetic to the Caliphate-defender's perspective that this usage of "fraud" was [motte-and-baileying](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/) between different senses of the word. (Most people would say that the things we were alleging MIRI and CfAR had done wrong were qualitatively different from the things Enron and Bernie Madoff had done wrong.[^ftx]) I wanted to do more work to formulate a more precise theory of the psychology of deception to describe exactly how things were messed up in a way that wouldn't be susceptible to the motte-and-bailey charge.
[^ftx]: Three years later, the FTX cryptocurrency exchange founded by effective altruists as an earning-to-give scheme [turned out to be an enormous fraud](https://en.wikipedia.org/wiki/Bankruptcy_of_FTX) à la Enron and Madoff. I'm inclined to give the posse some amount of epistemic credit for this: the collapse of FTX seems less surprising on Ben and Michael's view of [the influence-seeking tendencies that characterize EA](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), even if an ordinary grown-up would say that the crimes of Sam Bankman-Fried as an individual have no bearing on the EA movement as a whole.
On _Less Wrong_, the mods had just announced [a new end-of-year Review event](https://www.lesswrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review), in which the best posts from the year before would be reviewed and voted on, to see which had stood the test of time and deserved to be part of our canon of cumulative knowledge. (That is, this Review period starting in late 2019 would cover posts published in _2018_.)
This provided me with [an affordance](https://www.greaterwrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review-posts-need-at-least-2-nominations/comment/d4RrEizzH85BdCPhE) to write some posts critiquing posts that had been nominated for the Best-of-2018 collection that I didn't think deserved such glory. In response to ["Decoupling _vs._ Contextualizing Norms"](https://www.lesswrong.com/posts/7cAsBPGh98pGyrhz9/decoupling-vs-contextualising-norms) (which had been [cited in a way that I thought obfuscatory during the "Yes Requires the Possibility of No" trainwreck](https://www.greaterwrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019/comment/wejvnw6QnWrvbjgns)), I wrote ["Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary"](https://www.lesswrong.com/posts/GSz8SrKFfW7fJK2wN/relevance-norms-or-gricean-implicature-queers-the-decoupling), appealing to our [academically standard theory of how context affects meaning](https://plato.stanford.edu/entries/implicature/) to explain why "decoupling _vs._ contextualizing norms" is a false dichotomy.
More significantly, in reaction to Yudkowsky's ["Meta-Honesty: Firming Up Honesty Around Its Edge Cases"](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases), I published ["Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think"](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly),[^not-lying-title] explaining why I thought "Meta-Honesty" was relying on an unproductively narrow sense of "honesty", because the ambiguity of natural language makes it easy to deceive people without technically lying.
On 20 December 2019, Scott Alexander messaged me on Discord—that I shouldn't answer if it would be unpleasant, but that he was thinking about asking about autogynephilia on the next _Slate Star Codex_ survey, and wanted to know if I had any suggestions about question design, or if I could suggest any "intelligent and friendly opponents" to consult. After reassuring him that he shouldn't worry about answering being unpleasant ("I am actively at war with the socio-psychological forces that make people erroneously think that talking is painful!"), I referred him to my friend [Tailcalled](https://surveyanon.wordpress.com/), who had a lot of experience conducting surveys and ran a "Hobbyist Sexologists" Discord server, which seemed likely to have some friendly opponents.
The next day (I assume while I happened to be on his mind), Scott also [commented on](https://www.greaterwrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist/comment/LJp2PYh3XvmoCgS6E) "Maybe Lying Doesn't Exist", my post from back in October replying to his "Against Lie Inflation."
I was frustrated with his reply, which I felt was not taking into account points that I had already covered in detail. A few days later, on the twenty-fourth, I [succumbed to](https://www.greaterwrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist/comment/xEan6oCQFDzWKApt7) [the temptation](https://www.greaterwrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist/comment/wFRtLj2e7epEjhWDH) [to blow up at him](https://www.greaterwrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist/comment/8DKi7eAuMt7PBYcwF) in the comments.
After commenting, I noticed what day it was and added a few more messages to our Discord chat—
I decided on "Unnatural Categories Are Optimized for Deception" as the title for my advanced categorization thesis. Writing it up was a major undertaking. There were a lot of nuances to address and potential objections to preëmpt, and I felt that I had to cover everything. (A reasonable person who wanted to understand the main ideas wouldn't need so much detail, but I wasn't up against reasonable people who wanted to understand.)
In September 2020, Yudkowsky Tweeted [something about social media incentives prompting people to make nonsense arguments](https://twitter.com/ESYudkowsky/status/1304824253015945216), and something in me boiled over. The Tweet was fine in isolation, but I rankled at it given the absurdly disproportionate efforts I was undertaking to unwind his incentive-driven nonsense. I left [a snarky, pleading reply](/images/davis-snarky_pleading_reply.png) and [vented on my own timeline](https://twitter.com/zackmdavis/status/1304838346695348224) (with preview images from the draft of "Unnatural Categories Are Optimized for Deception"):
> Who would have thought getting @ESYudkowsky's robot cult to stop trying to trick me into cutting my dick off (independently of the empirical facts determining whether or not I should cut my dick off) would involve so much math?? OK, I guess the math part isn't surprising, but—[^trying-to-trick-me]
There's a dramatic episode that would fit here chronologically if this were an autobiography (which existed to tell my life story), but since this is a topic-focused memoir (which exists because my life happens to contain this Whole Dumb Story which bears on matters of broader interest, even if my life would not otherwise be interesting), I don't want to spend more wordcount than is needed to briefly describe the essentials.
I was charged by members of the extended Michael Vassar–adjacent social circle with the duty of taking care of a mentally-ill person at my house on 18 December 2020. (We did not trust the ordinary psychiatric system to act in patients' interests.) I apparently did a poor job, and ended up saying something callous on the care team group chat after a stressful night, which led to a chaotic day on the nineteenth, and an ugly falling-out between me and the group. The details aren't particularly of public interest.
My poor performance during this incident [weighs on my conscience](/2020/Dec/liability/) particularly because I had [previously](/2017/Mar/fresh-princess/) [been](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/) in the position of being crazy and benefiting from the help of my friends (including many of the same people involved in this incident) rather than getting sent back to psychiatric prison ("hospital", they call it a "hospital"). Of all people, I had a special debt to "pay it forward", and one might have hoped that I would also have special skills, that having been on the receiving end of a non-institutional psychiatric tripsitting operation would help me know what to do on the giving end. Neither of those panned out.
Some might appeal to the proverb "All's well that ends well", noting that the person in trouble ended up recovering, and that, while the stress of the incident contributed to a somewhat serious relapse of my own psychological problems on the night of the nineteenth and in the following weeks, I ended up recovering, too. But recovering normal functionality after a traumatic episode doesn't imply a lack of other lasting consequences (to the psyche, to trusting relationships, _&c._). I am therefore inclined to dwell on [another proverb](https://www.alessonislearned.com/), "A lesson is learned but the damage is irreversible."
------
And really, that should have been the end of the story. At the cost of two years of my life, we finally got a clarification from Yudkowsky that you can't define the word _woman_ any way you like. This suggested poor cognitive returns on investment from interacting with the "rationalist" community—if it took that much effort to correct a problem I had noticed myself, I couldn't expect them to help me with problems I couldn't detect—but I didn't think I was entitled to more. If I hadn't been further provoked, I wouldn't have occasion to continue waging the robot-cult religious civil war.
It turned out that I would have occasion to continue waging the robot-cult religious civil war. (To be continued.)