From a96a63d065f9b453ba385fe3ac24420da0c4ca26 Mon Sep 17 00:00:00 2001 From: "Zack M. Davis" Date: Mon, 13 Nov 2023 15:29:37 -0800 Subject: [PATCH] memoir: applying pt. 3 pro edits --- .../if-clarity-seems-like-death-to-them.md | 150 +++++++++--------- 1 file changed, 76 insertions(+), 74 deletions(-) diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md index 6f14589..b91b021 100644 --- a/content/drafts/if-clarity-seems-like-death-to-them.md +++ b/content/drafts/if-clarity-seems-like-death-to-them.md @@ -5,9 +5,9 @@ Category: commentary Tags: autogynephilia, bullet-biting, cathartic, Eliezer Yudkowsky, Scott Alexander, epistemic horror, my robot cult, personal, sex differences, two-type taxonomy, whale metaphors Status: draft -> "—but if one hundred thousand normies can turn up, to show their support for the rationalist community, why can't you?" +> "—but if one hundred thousand [normies] can turn up, to show their support for the [rationalist] community, why can't you?" > -> I said wearily, "Because every time I hear the word _community_, I know I'm being manipulated. If there is such a thing as _the rationalist community_, I'm certainly not a part of it. As it happens, I don't want to spend my life watching _rationalist and effective altruist_ television channels, using _rationalist and effective altruist_ news systems ... or going to _rationalist and effective altruist_ street parades. It's all so ... proprietary. You'd think there was a multinational corporation who had the franchise rights on truth and goodness. And if you don't _market the product_ their way, you're some kind of second-class, inferior, bootleg, unauthorized nerd." +> I said wearily, "Because every time I hear the word _community_, I know I'm being manipulated. If there is such a thing as _the [rationalist] community_, I'm certainly not a part of it. As it happens, I don't want to spend my life watching [_rationalist and effective altruist_] television channels, using [_rationalist and effective altruist_] news systems ... or going to [_rationalist and effective altruist_] street parades. It's all so ... proprietary. You'd think there was a multinational corporation who had the franchise rights on [truth and goodness]. And if you don't _market the product_ their way, you're some kind of second-class, inferior, bootleg, unauthorized [nerd]." > > —"Cocoon" by Greg Egan (paraphrased)[^egan-paraphrasing] @@ -17,33 +17,35 @@ Recapping our Whole Dumb Story so far: in a previous post, ["Sexual Dimorphism i —none of which gooey private psychological minutiæ would be in the public interest to blog about _except that_, as I explained in a subsequent post, ["Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer"](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/), around 2016, everyone in the community that formed around the Sequences suddenly decided that guys like me might actually be women in some unspecified metaphysical sense, and the cognitive dissonance from having to rebut all this nonsense coming from everyone I used to trust drove me [temporarily](/2017/Mar/fresh-princess/) [insane](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/) from stress and sleep deprivation ... 
-—which would have been the end of the story, _except that_, as I explained in a subsequent–subsequent post, ["A Hill of Validity in Defense of Meaning"](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/), in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in a way that looked optimized to suggest that people who disputed that men could be women in some unspecified metaphysical sense were philosophically confused. Anyone else being wrong on the internet like that wouldn't have seemed like a big deal, but Scott Alexander had written that [rationalism is the belief that Eliezer Yudkowsky is the rightful caliph](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/). After extensive attempts by me and allies to get him to clarify amounted to nothing, we felt justified in concluding that Yudkowsky and his Caliphate of so-called "rationalists" was corrupt.
+—which would have been the end of the story, except that, as I explained in a subsequent–subsequent post, ["A Hill of Validity in Defense of Meaning"](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/), in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in a way that suggested that people were philosophically confused if they disputed that men could be women in some unspecified metaphysical sense.
+
+Anyone else being wrong on the internet like that wouldn't have seemed like a big deal, but Scott Alexander had [semi-jokingly](http://www.catb.org/jargon/html/H/ha-ha-only-serious.html) written that [rationalism is the belief that Eliezer Yudkowsky is the rightful caliph](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/). After extensive attempts by me and allies to get clarification from Yudkowsky amounted to nothing, we felt justified in concluding that he and his Caliphate of so-called "rationalists" were corrupt.

Anyway, given that the "rationalists" were fake and that we needed something better, there remained the question of what to do about that, and how to relate to the old thing.

-I had been hyperfocused on prosecuting my Category War, but the reason Michael Vassar and Ben Hoffman and Jessica Taylor[^posse-boundary] were willing to help me out on that was not because they particularly cared about the gender and categories example, but because it seemed like a manifestation of a more general problem of epistemic rot in "the community".
+I had been hyperfocused on prosecuting my Category War, but the reason Michael Vassar and Ben Hoffman and Jessica Taylor[^posse-boundary] were willing to help me out was not because they particularly cared about the gender and categories example but because it seemed like a manifestation of a more general problem of epistemic rot in "the community."

-[^posse-boundary]: Sarah Constantin and "Riley" had also been involved in reaching out to Yudkowsky, and were included in many subsequent discussions, but seemed like more marginal members of the group that was forming.
+[^posse-boundary]: Although Sarah Constantin and "Riley" had also been involved in reaching out to Yudkowsky and were included in many subsequent discussions, they seemed like more marginal members of the group that was forming.
Ben had previously worked at GiveWell and had written a lot about problems with the effective altruism (EA) movement; in particular, he argued that EA-branded institutions were making [incoherent](http://benjaminrosshoffman.com/givewell-and-partial-funding/) [decisions](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/) under the influence of incentives to [distort](http://benjaminrosshoffman.com/humility-argument-honesty/) [information](http://benjaminrosshoffman.com/honesty-and-perjury/) [in order to](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/) [seek](http://benjaminrosshoffman.com/against-neglectedness/) [power](http://benjaminrosshoffman.com/against-responsibility/). Jessica had previously worked at MIRI, where she was unnerved by what she saw as under-evidenced paranoia about information hazards and [short AI timelines](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam). (As Jack Gallagher, who was also at MIRI at the time, [put it](https://www.greaterwrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards/comment/TcsXh44pB9xRziGgt), "A bunch of people we respected and worked with had decided the world was going to end, very soon, uncomfortably soon, and they were making it extremely difficult for us to check their work.") -To what extent were my gender and categories thing, and Ben's EA thing, and Jessica's MIRI thing, manifestations of "the same" underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit? +To what extent were my gender and categories thing, and Ben's EA thing, and Jessica's MIRI thing, manifestations of the same underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit? -If there was a real problem, I didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But culture is the sum of lots and lots of little micro-actions by lots and lots of people. If your entire culture has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or that they don't think anything has changed, or that they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!" Any ideologue or religious person would agree with _that_. It's not feasible to litigate every petty epistemic crime in something someone said, and if you tried, someone who thought the culture was basically on track could accuse you of cherry-picking. If "culture" is a real thing at all—and it certainly seems to be—we are condemned to grasp it unclearly, relying on the brain's pattern-matching faculties to sum over thousands of little micro-actions as a [_gestalt_](https://en.wiktionary.org/wiki/gestalt), rather than having the kind of robust, precise representation a well-designed AI could compute plans with. +If there was a real problem, I didn't have a good grasp on it. 
Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people and get a correction on that specific point. (Although as we had just discovered, that might be too much to hope for.) But culture is the sum of lots and lots of little micro-actions by lots and lots of people. If your entire culture has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who are acting like they don't remember the old Way, like they don't think anything has changed, or like they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!" Any ideologue or religious person would agree with _that_. It's not feasible to litigate every petty epistemic crime in something someone said, and if you tried, someone who thought the culture was basically on track could accuse you of cherry-picking. If "culture" is a real thing at all—and it certainly seems to be—we are condemned to grasp it unclearly, relying on the brain's pattern-matching faculties to sum over thousands of little micro-actions as a [_gestalt_](https://en.wiktionary.org/wiki/gestalt).

-Ben called the _gestalt_ he saw the Blight, after the rogue superintelligence in Vernor Vinge's _A Fire Upon the Deep_. The problem wasn't that people were getting dumber; it was that they increasingly behaving in a way that was better explained by their political incentives rather than as decisions based on coherent beliefs about the world—using and construing claims about facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts were admissible moves in the game.
+Ben called the _gestalt_ he saw the Blight, after the rogue superintelligence in Vernor Vinge's _A Fire Upon the Deep_. The problem wasn't that people were getting dumber; it was that they were increasingly behaving in a way that was better explained by their political incentives than by coherent beliefs about the world. They were using and construing facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts were admissible moves in the game.

-When I asked him for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of [MIRI executive director Nate Soares posting that he was "excited to see OpenAI joining the space"](https://intelligence.org/2015/12/11/openai-and-other-news/), despite the fact that [_no one_ who had been following the AI risk discourse](https://slatestarcodex.com/2015/12/17/should-ai-be-open/) [thought that OpenAI as originally announced was a good idea](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/). Nate [had privately clarified that](https://twitter.com/jessi_cata/status/1462454555925434375) the word "excited" wasn't necessarily meant positively, and in this case meant something more like "terrified."
+When I asked Ben for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of [MIRI executive director Nate Soares posting that he was "excited to see OpenAI joining the space"](https://intelligence.org/2015/12/11/openai-and-other-news/), despite the fact that [_no one_ who had been following the AI risk discourse](https://slatestarcodex.com/2015/12/17/should-ai-be-open/) [thought that OpenAI as originally announced was a good idea](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/).
Nate [had privately clarified that](https://twitter.com/jessi_cata/status/1462454555925434375) the word "excited" wasn't necessarily meant positively—and in this case meant something more like "terrified." -This seemed to me like the sort of thing where a particularly principled (naïve?) person might say, "That's _lying for political reasons!_ That's _contrary to the moral law!_" and most ordinary grown-ups would say, "Why are you so upset about this? That sort of strategic phrasing in press releases is just how the world works, and things could not possibly be otherwise." +This seemed to me like the sort of thing where a particularly principled (naïve?) person might say, "That's _lying for political reasons!_ That's _contrary to the moral law!_" and most ordinary grown-ups would say, "Why are you so upset about this? That sort of strategic phrasing in press releases is just how the world works." -I thought explaining the Blight to an ordinary grown-up was going to need either lots of specific examples that were way more egregious than this (and more egregious than the examples in Sarah Constantin's ["EA Has a Lying Problem"](https://srconstantin.github.io/2017/01/17/ea-has-a-lying-problem.html) or Ben's ["Effective Altruism Is Self-Recommending"](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/)), or somehow convincing the ordinary grown-up why "just how the world works" isn't good enough, and why we needed one goddamned place in the entire goddamned world (perhaps a private place) with _unusually high standards_. +I thought explaining the Blight to an ordinary grown-up was going to need either lots of specific examples that were way more egregious than this (and more egregious than the examples in Sarah Constantin's ["EA Has a Lying Problem"](https://srconstantin.github.io/2017/01/17/ea-has-a-lying-problem.html) or Ben's ["Effective Altruism Is Self-Recommending"](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/)), or somehow convincing the ordinary grown-up why "just how the world works" isn't good enough, and why we needed one goddamned place in the entire goddamned world with unusually high standards. -The schism introduced new pressures on my social life. On 20 April 2019, I told Michael that I still wanted to be friends with people on both sides of the factional schism, even though I was on this side. Michael said that we should unambiguously regard Yudkowsky and CfAR president (and my personal friend of ten years) Anna Salamon as criminals or enemy combatants, who could claim no rights in regards to me or him. +The schism introduced new pressures on my social life. On 20 April 2019, I told Michael that I still wanted to be friends with people on both sides of the factional schism. Michael said that we should unambiguously regard Yudkowsky and CfAR president (and my personal friend of ten years) Anna Salamon as criminals or enemy combatants who could claim no rights in regard to me or him. -I don't think I "got" the framing at this time. War metaphors sounded scary and mean: I didn't want to shoot my friends! But the point of the analogy (which Michael explained, but I wasn't ready to hear until I did a few more weeks of emotional processing) was specifically that soliders on the other side of a war _aren't_ necessarily morally blameworthy as individuals:[^soldiers] their actions are being directed by the Power they're embedded in. +I don't think I got the framing at this time. 
War metaphors sounded scary and mean: I didn't want to shoot my friends! But the point of the analogy (which Michael explained, but I wasn't ready to hear until I did a few more weeks of emotional processing) was specifically that soldiers on the other side of a war aren't necessarily morally blameworthy as individuals:[^soldiers] their actions are being directed by the Power they're embedded in. [^soldiers]: At least, not blameworthy in the same way as someone who committed the same violence as an individual. @@ -67,13 +69,13 @@ I may have subconsciously pulled off an interesting political maneuver. In my fi I claim that I was meeting this standard: I _was_ willing to personally fix the philosophy-of-categorization issue no matter how long it took, and the issue _did_ arise from outright bad faith. -And as it happened, on 4 May 2019, Yudkowsky [re-Tweeted Colin Wright on the "univariate fallacy"](https://twitter.com/ESYudkowsky/status/1124751630937681922)—the point that group differences aren't a matter of any single variable—which was _sort of_ like the clarification I had been asking for. (Empirically, it made me feel less aggrieved.) Was I wrong to interpret this as [another "concession" to me](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#proton-concession)? (Again, notwithstanding that the whole mindset of extracting "concessions" was corrupt and not what our posse was trying to do.) +And as it happened, on 4 May 2019, Yudkowsky [retweeted Colin Wright on the "univariate fallacy"](https://twitter.com/ESYudkowsky/status/1124751630937681922)—the point that group differences aren't a matter of any single variable—which was sort of like the clarification I had been asking for. (Empirically, it made me feel less aggrieved.) Was I wrong to interpret this as [another "concession" to me](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#proton-concession)? (Again, notwithstanding that the whole mindset of extracting "concessions" was corrupt and not what our posse was trying to do.) -Separately, one evening in April, I visited the house where "Meredith" and her husband Mike and Kelsey Piper and some other people lived, which I'll call "Arcadia".[^named-houses] I said, essentially, [Oh man oh jeez](https://www.youtube.com/watch?v=q_eMvgNrQQE), Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're _the only ones backing me up_ on this _incredibly basic philosophy thing_ and I don't feel like I have anywhere else to _go_. This culminated in a group conversation with the entire house, which I found unsettling. (Unfortunately, I didn't take notes and don't remember the details except that I had a sense of everyone else seeming to agree on things that I thought were clearly contrary to the spirit of the Sequences.) 
+Separately, one evening in April, I visited the house where "Meredith" and her husband Mike and Kelsey Piper and some other people lived, which I'll call "Arcadia".[^named-houses] I said, essentially, "[Oh man oh jeez](https://www.youtube.com/watch?v=q_eMvgNrQQE), Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're the only ones backing me up on this incredibly basic philosophy thing and I don't feel like I have anywhere else to go." This culminated in a group conversation with the entire house, which I found unsettling. (Unfortunately, I didn't take notes and don't remember the details except that I had a sense of everyone else seeming to agree on things that I thought were clearly contrary to the spirit of the Sequences.) [^named-houses]: It was common practice in our subculture to name group houses. My apartment was "We'll Name It Later." -In an adorable twist, Mike and "Meredith"'s two-year-old son was reportedly saying the next day that Kelsey doesn't like his daddy, which was confusing until it was figured out he had heard Kelsey talking about why she doesn't like Michael _Vassar_.[^mike-pseudonym] +The two-year-old son of Mike and "Meredith" was reportedly saying the next day that Kelsey doesn't like his daddy, which was confusing until it was figured out he had heard Kelsey talking about why she doesn't like Michael _Vassar_.[^mike-pseudonym] [^mike-pseudonym]: I'm not giving Mike a pseudonym because his name is needed for this adorable anecdote to make sense, and I'm not otherwise saying sensitive things about him. @@ -83,21 +85,21 @@ These two datapoints led me to a psychological hypothesis: when people see someo ---- -I started drafting a "why I've been upset for five months and have lost faith in the so-called 'rationalist' community" memoir-post. Ben said that the target audience to aim for was sympathetic-but-naïve people like I had been a few years ago, who hadn't yet had the experiences I had. This way, they wouldn't have to freak out to the point of [being imprisoned](/2017/Mar/fresh-princess/) and demand help from community leaders and not get it; they could just learn from me. +I started drafting a "why I've been upset for five months and have lost faith in the so-called 'rationalist' community" memoir-post. Ben said that the target audience to aim for was sympathetic but naïve people like I had been a few years ago, who hadn't yet had the experiences I'd had. This way, they wouldn't have to freak out to the point of [being imprisoned](/2017/Mar/fresh-princess/) and demand help from community leaders and not get it; they could just learn from me. -I didn't know how to continue it. I was too psychologically constrained; I didn't know how to tell the Whole Dumb Story without (as I perceived it) escalating personal conflicts or leaking info from private conversations. +I didn't know how to continue it. I was too psychologically constrained; I didn't know how to tell the Whole Dumb Story without escalating personal conflicts or leaking info from private conversations. -I decided to take a break from the religious civil war [and from this blog](/2019/May/hiatus/), and [declared May 2019 as Math and Wellness Month](http://zackmdavis.net/blog/2019/05/may-is-math-and-wellness-month/). 
+I decided to take a break from the religious civil war [and from this blog](/2019/May/hiatus/). I [declared May 2019 as Math and Wellness Month](http://zackmdavis.net/blog/2019/05/may-is-math-and-wellness-month/).

-My dayjob performance had been suffering terribly for months. The psychology of the workplace is ... subtle. There's a phenomenon where some people are vastly more productive than others and everyone knows it, but no one is cruel enough [to make it _common_ knowledge](https://slatestarcodex.com/2015/10/15/it-was-you-who-made-my-blue-eyes-blue/), which is awkward for people who simultaneously benefit from the culture of common-knowledge-prevention allowing them to collect the status and money rents of being a $150K/yr software engineer without actually [performing at that level](http://zackmdavis.net/blog/2013/12/fortune/), while also having [read enough Ayn Rand as a teenager](/2017/Sep/neither-as-plea-nor-as-despair/) to be ideologically opposed to subsisting on unjustly-acquired rents rather than value creation. The "everyone knows I feel guilty about underperforming, so they don't punish me because I'm already doing enough internalized domination to punish myself" dynamic would be unsustainable if it were to evolve into a loop of "feeling guilt in exchange for not doing work" rather than the intended "feeling guilt in order to successfully incentivize work". I didn't think the company would fire me, but I was worried that they _should_.
+My dayjob performance had been suffering for months. The psychology of the workplace is ... subtle. There's a phenomenon where some people are vastly more productive than others and everyone knows it, but no one is cruel enough [to make it common knowledge](https://slatestarcodex.com/2015/10/15/it-was-you-who-made-my-blue-eyes-blue/). This is awkward for people who benefit from the culture of common-knowledge-prevention allowing them to collect the status and money [rents](https://en.wikipedia.org/wiki/Economic_rent) of being a $150K/year software engineer without actually [performing at that level](http://zackmdavis.net/blog/2013/12/fortune/), but who also [read enough Ayn Rand as a teenager](/2017/Sep/neither-as-plea-nor-as-despair/) to be ideologically opposed to subsisting on unjustly-acquired rents rather than value creation. I didn't think the company would fire me, but I was worried that they _should_.

-I asked my boss to temporarily temporarily assign me some easier tasks that I could make steady progress on even while being psychologically impaired from a religious war. (We had a lot of LaTeX templating of insurance policy amendments that needed to get done.) If I was going to be psychologically impaired _anyway_, it was better to be upfront about how I could best serve the company given that impairment, rather than hoping that the boss wouldn't notice.
+I asked my boss to temporarily assign me some easier tasks that I could make steady progress on. (We had a lot of LaTeX templating of insurance policy amendments that needed to get done.) If I was going to be psychologically impaired, it was better to be up-front about how I could best serve the company given that impairment, rather than hoping the boss wouldn't notice.

-My "intent" to take a break from the religious war didn't take. I met with Anna on the UC Berkeley campus, and read her excerpts from some of Ben's and Jessica's emails. (She had not acquiesced to my request for a comment on "...
Boundaries?", including in the form of two paper postcards that I stayed up until 2 _a.m._ on 14 April 2019 writing; I had figured that spamming people with hysterical and somewhat demanding physical postcards was more polite than my usual habit of spamming people with hysterical and somewhat demanding emails.) While we (my posse) were aghast at Yudkowsky's behavior, she was aghast at ours: reaching out to try to have a conversation with Yudkowsky, and then concluding that he was a fraud because we weren't satisfied with the outcome was like hiding soldiers in an ambulance, introducing a threat against him in context where he had a right to be safe. +My intent of a break from the religious war didn't take. I met with Anna on the UC Berkeley campus and read her excerpts from Ben's and Jessica's emails. (She had not provided a comment on "... Boundaries?" despite my requests, including in the form of two paper postcards that I stayed up until 2 _a.m._ on 14 April 2019 writing; spamming people with hysterical and somewhat demanding postcards felt more distinctive than my usual habit of spamming people with hysterical and somewhat demanding emails.) While we (my posse) were aghast at Yudkowsky's behavior, Anna was aghast at ours: reaching out to try to have a conversation with Yudkowsky, then concluding that he was a fraud because we weren't satisfied with the outcome was like hiding soldiers in an ambulance, introducing a threat into a context where he had a right to be safe. -I complained that I had _actually believed_ our own [marketing](https://www.lesswrong.com/posts/aFEsqd6ofwnkNqaXo/go-forth-and-create-the-art) [material](https://www.lesswrong.com/posts/jP583FwKepjiWbeoQ/epistle-to-the-new-york-less-wrongians) about the "rationalists" remaking the world by wielding a hidden Bayesian structure of Science and Reason that applies [outside the laboratory](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory). Was that all a lie? Were we not trying to do the thing anymore? Anna was dismissive: she thought that the idea I had gotten about what "the thing" was, was never actually part of the original vision. She kept repeating that she had tried to warn me that public reason didn't work, and I didn't listen. (Back in the late 'aughts, she had often recommended Paul Graham's essay ["What You Can't Say"](http://paulgraham.com/say.html) to people, summarizing Graham's moral that you should figure out the things you can't say in your culture, and then don't say them, in order to avoid getting drawn into pointless conflicts.) +I complained that I had believed our own [marketing](https://www.lesswrong.com/posts/aFEsqd6ofwnkNqaXo/go-forth-and-create-the-art) [material](https://www.lesswrong.com/posts/jP583FwKepjiWbeoQ/epistle-to-the-new-york-less-wrongians) about the "rationalists" remaking the world by wielding a hidden Bayesian structure of Science and Reason that applies [outside the laboratory](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory). Was that all a lie? Were we not trying to do the thing anymore? Anna was dismissive: she thought that the idea I had gotten about "the thing" was never actually part of the original vision. She kept repeating that she had tried to warn me that public reason didn't work, and I didn't listen. 
(Back in the late 'aughts, she had often recommended Paul Graham's essay ["What You Can't Say"](http://paulgraham.com/say.html) to people, summarizing Graham's moral that you should figure out the things you can't say in your culture and then not say them, in order to avoid getting drawn into pointless conflicts.) -It was true that she had tried to warn me for years, and (not yet having gotten over [my teenage ideological fever dream](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism)), I hadn't known how to listen. But this seemed fundamentally unresponsive to how _I_ kept repeating that I only expected consensus on the basic philosophy-of-language stuff (not my object-level special interest). Why was it so unrealistic to imagine that the actually-smart people could [enforce standards](https://srconstantin.github.io/2018/12/24/contrite-strategies.html) in our own tiny little bubble of the world? +It was true that she had tried to warn me for years, and (not yet having gotten over [my teenage ideological fever dream](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism)), I hadn't known how to listen. But this seemed fundamentally unresponsive to how _I_ kept repeating that I only expected consensus on the basic philosophy-of-language stuff (not my object-level special interest). Why was it so unrealistic to imagine that the smart people could [enforce standards](https://srconstantin.github.io/2018/12/24/contrite-strategies.html) in our own tiny little bubble of the world? My frustration bubbled out into follow-up emails: @@ -107,35 +109,35 @@ I added: > Can you please _acknowledge that I didn't just make this up?_ Happy to pay you $200 for a reply to this email within the next 72 hours -Anna said she didn't want to receive [cheerful price](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price) offers from me anymore; previously, she had regarded my occasional custom of recklessly throwing money at friends to get what I wanted as good-faith libertarianism between consenting adults, but now she was afraid that if she accepted, it would be portrayed in some future Ben Hoffman essay as an instance of her _using_ me. She agreed that someone could have gotten the ideals I had gotten out of those posts, but there was also evidence from that time pointing the other way (_e.g._, ["Politics Is the Mind-Killer"](https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer)), that it shouldn't be surprising if people steered clear of controversy. +Anna said she didn't want to receive [cheerful price](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price) offers from me anymore; previously, she had regarded my occasional custom of recklessly throwing money at friends to get what I wanted as good-faith libertarianism between consenting adults, but now she was afraid that if she accepted, it would be portrayed in some future Ben Hoffman essay as an instance of her _using_ me. She agreed that someone could have gotten the ideals I had gotten out of those posts, but there was also evidence from that time pointing the other way (_e.g._, ["Politics Is the Mind-Killer"](https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer)) and it shouldn't be surprising if people steered clear of controversy. 
-I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that whether or not I should cut my dick off would _become_ a political issue. That was new evidence about whether the original vision was wise! I wasn't particularly trying to do politics with my idiosyncratic special interest; I was trying to think seriously about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit a trivial epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, but couldn't correct the mistake even after it was pointed out, then the "rationalists" were _worse than useless_ to me. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I was part of [a reference class that included AI researchers](/2017/Jan/from-what-ive-tasted-of-desire/)). +I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that whether I should cut my dick off would _become_ a political issue. That was new evidence about whether the original vision was wise! I wasn't particularly trying to do politics with my idiosyncratic special interest; I was trying to think seriously about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit a trivial epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, and they couldn't correct the mistake even after it was pointed out, then the "rationalists" were worse than useless to me. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I were part of [a reference class that included AI researchers](/2017/Jan/from-what-ive-tasted-of-desire/)). -Fundamentally, I was skeptical that you _could_ do consistently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky had described in ["Entangled Truths, Contagious Lies"](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) and ["Dark Side Epistemology"](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology): the need to lie about lying and cover up cover-ups propagates recursively. Anna in particular was unusually skillful at thinking things without saying them; I thought most people facing similar speech restrictions just get worse at thinking (plausibly[^plausibly] including Yudkowsky), and the problem gets worse as the group effort scales. (It's less risky to recommend ["What You Can't Say"](http://www.paulgraham.com/say.html) to your housemates than to put it on your 501(c)(3) organization's canonical reading list.) You can't optimize your group's culture for not-talking-about-atheism without also optimizing against understanding [Occam's razor](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor); you can't optimize for not questioning gender self-identity without also optimizing against understanding the [37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong). 
+Fundamentally, I was skeptical that you _could_ do consistently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky had described in ["Entangled Truths, Contagious Lies"](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) and ["Dark Side Epistemology"](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology): the need to lie about lying and cover up cover-ups propagates recursively. Anna was unusually skillful at thinking things without saying them; I thought people facing similar speech restrictions generally just get worse at thinking (plausibly[^plausibly] including Yudkowsky), and the problem gets worse as the group effort scales. (It's less risky to recommend ["What You Can't Say"](http://www.paulgraham.com/say.html) to your housemates than to put it on your 501(c)(3) organization's canonical reading list.) You can't optimize your group's culture for not-talking-about-atheism without also optimizing against understanding [Occam's razor](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor); you can't optimize for not questioning gender self-identity without also optimizing against understanding the [37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong). -[^plausibly]: I was still deep enough in my hero-worship that I wrote "plausibly". Today, I would not consider the adverb necessary. +[^plausibly]: I was still deep enough in my hero worship that I wrote "plausibly" in an email at the time. Today, I would not consider the adverb necessary. -Despite Math and Wellness Month and my "intent" to take a break from the religious civil war, I kept reading _Less Wrong_ during May 2019, and ended up scoring a couple of victories in the civil war (at some cost to Wellness). +Despite Math and Wellness Month and my intent to take a break from the religious civil war, I kept reading _Less Wrong_ during May 2019, and ended up scoring a couple of victories in the civil war (at some cost to Wellness). -MIRI researcher Scott Garrabrant wrote a post about how ["Yes Requires the Possibility of No"](https://www.lesswrong.com/posts/G5TwJ9BGxcgh5DsmQ/yes-requires-the-possibility-of-no). Information-theoretically, a signal sent with probability one transmits no information: you can only learn something from hearing a "Yes" if believed that the answer could have been "No". I saw an analogy to my philosophy-of-language thesis, and mentioned it in a comment: if you want to believe that _x_ belongs to category _C_, you might try redefining _C_ in order to make the question "Is _x_ a _C_?" come out "Yes", but you can only do so at the expense of making _C_ less useful. Meaningful category-membership (Yes) requires the possibility of non-membership (No). +MIRI researcher Scott Garrabrant wrote a post about how ["Yes Requires the Possibility of No"](https://www.lesswrong.com/posts/G5TwJ9BGxcgh5DsmQ/yes-requires-the-possibility-of-no). Information-theoretically, a signal sent with probability one transmits no information: you can only learn something from hearing a "Yes" if you believed that the answer could have been "No". I saw an analogy to my philosophy-of-language thesis, and mentioned it in a comment: if you want to believe that _x_ belongs to category _C_, you might try redefining _C_ in order to make the question "Is _x_ a _C_?" come out "Yes", but you can only do so at the expense of making _C_ less useful. 
Meaningful category-membership (Yes) requires the possibility of non-membership (No). Someone [objected that](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=FxSZwECjhgYE7p2du) she found it "unpleasant that [I] always bring [my] hobbyhorse in, but in an 'abstract' way that doesn't allow discussing the actual object level question"; it made her feel "attacked in a way that allow[ed] for no legal recourse to defend [herself]." (I thought I remembered meeting a man with the same last name at the 2016 Summer Solstice event in Berkeley; maybe it was her brother.) I [replied](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=32GPaijsSwX2NSFJi) that that was understandable, but that I hoped it was also understandable that I found it unpleasant that our standard Bayesian philosophy of language somehow got politicized, such that my attempts to do correct epistemology were perceived as attacking people. -The ensuring trainwreck got so bad that the mods manually [moved the comments to their own post](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019). Based on the karma scores and what was said,[^yes-requires-slapfight-highlights] I count it as a "victory" for me. +Such a trainwreck ensued that the mods manually [moved the comments to their own post](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019). Based on the karma scores and what was said,[^yes-requires-slapfight-highlights] I count it as a victory. [^yes-requires-slapfight-highlights]: I particularly appreciated Said Achmiz's [defense of disregarding community members' feelings](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=EsSdLMrFcCpSvr3pG), and [Ben's commentary on speech acts that lower the message length of proposals to attack some group](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=TXbgr7goFtSAZEvZb). -On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) as one of Scott Alexander's best essays. I argued that it would be better to cite almost literally any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: either Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, or one had no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way. Ruby Bloom, the new moderator who wrote the draft, [was persuaded](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for?commentId=oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory." 
+On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) as one of Scott Alexander's best essays. I argued that it would be better to cite almost literally any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: either Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, or I could call it a blatant lie because no rule of rationality says I shouldn't draw the category boundaries of "blatant lie" that way. Ruby Bloom, the new moderator who wrote the draft, [was persuaded](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for?commentId=oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory."

-But winning "victories" wasn't particularly comforting when I resented this becoming a political slapfight at all. I wrote to Anna and Steven Kaas (another old-timer who I was trying to "recruit" onto my side of the civil war). In ["What You Can't Say"](http://www.paulgraham.com/say.html), Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what one's real work was. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except for this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason, should probably be doing more fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improving our collective ability to reason" in a way that they don't hurt the mission of "make a lot of money writing software."
+But "victories" weren't particularly comforting when I resented this becoming a political slapfight at all. I wrote to Anna and Steven Kaas (another old-timer who I was trying to "recruit" to my side of the civil war). In ["What You Can't Say"](http://www.paulgraham.com/say.html), Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what your real work is. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except in this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason should probably be doing more fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improve our collective ability to reason" in a way that they don't hurt the mission of "make a lot of money writing software."
-I said, I didn't know if either of them had caught the "Yes Requires the Possibility" trainwreck, but wasn't it terrifying that the person who objected to my innocuous philosophy comment was a goddamned _MIRI research associate_? Not to demonize that commenter, because [I was just as bad (if not worse) in 2008](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#hair-trigger-antisexism). The difference was that in 2008, we had a culture that could beat it out of me. +I said I didn't know if either of them had caught the "Yes Requires the Possibility" trainwreck, but wasn't it terrifying that the person who objected to my innocuous philosophy comment was a goddamned _MIRI research associate_? Not to demonize that commenter, because [I was just as bad (if not worse) in 2008](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#hair-trigger-antisexism). The difference was that in 2008, we had a culture that could beat it out of me. Steven objected that tractability and side effects matter, not just effect on the mission considered in isolation. For example, the Earth's gravitational field directly impedes NASA's mission, and doesn't hurt Paul Graham, but both NASA and Paul Graham should spend the same amount of effort trying to reduce the Earth's gravity (_viz._, zero). -I agreed that tractability needs to be addressed, but the situation felt analogous to being in a coal mine in which my favorite one of our canaries had just died. Caliphate officials (Yudkowsky, Alexander, Anna) and loyalists (Steven) were patronizingly consoling me: sorry, I know you were really attached to that canary, but it's just a bird; it's not critical to the coal-mining mission. I agreed that I was unreasonably attached to that particular bird, which was the direct cause of why I-in-particular was freaking out, but that's not why I expected _them_ to care. The problem was what the dead canary was evidence of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claimed to have spotted their own dead canaries. I felt like the old-timer Rationality Elders should have been able to get on the same page about the canary-count issue? +I agreed that tractability needs to be addressed, but the situation felt analogous to being in [a coal mine in which my favorite of our canaries had just died](https://en.wikipedia.org/wiki/Sentinel_species). Caliphate officials (Yudkowsky, Alexander, Anna) and loyalists (Steven) were patronizingly consoling me: sorry, I know you were really attached to that canary, but it's just a bird. It's not critical to the coal-mining mission. I agreed that I was unreasonably attached to that particular bird, but that's not why I expected _them_ to care. The problem was what the dead canary was evidence of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claimed to have spotted their own dead canaries. I felt like the old-timer Rationality Elders should have been able to get on the same page about the canary-count issue? 
-Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/), and [some probability/information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [later turned out to deeply relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break.
+Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/) and [some probability/information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [later turned out to be deeply relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break.

In June 2019, I made [a linkpost on _Less Wrong_](https://www.lesswrong.com/posts/5nH5Qtax9ae8CQjZ9/tal-yarkoni-no-it-s-not-the-incentives-it-s-you) to Tal Yarkoni's ["No, It's Not The Incentives—It's you"](https://www.talyarkoni.org/blog/2018/10/02/no-its-not-the-incentives-its-you/), about how professional scientists should stop using career incentives as an excuse for doing poor science. It generated a lot of discussion.

@@ -145,13 +147,13 @@ In an email (Subject: "LessWrong.com is dead to me"), Jessica identified _Less W

> 
> Trying to get LessWrong.com to adopt high-integrity norms is going to fail, hard, without a _lot_ of conflict. (Enforcing high-integrity norms is like violence; if it doesn't work, you're not doing enough of it). People who think being exposed as fraudulent (or having their friends exposed as fraudulent) is a terrible outcome, are going to actively resist high-integrity discussion norms.

-Posting on _Less Wrong_ made sense as harm-reduction, but the only way to get people to stick up for truth would be to convert them to a whole new worldview, which would require a lot of in-person discussions. She bought up the idea of starting a new forum to replace _Less Wrong_.
+Posting on _Less Wrong_ made sense as harm-reduction, but the only way to get people to stick up for truth would be to convert them to a whole new worldview, which would require a lot of in-person discussions. She brought up the idea of starting a new forum to replace _Less Wrong_.

-Ben said that trying to discuss with the _Less Wrong_ mod team would be a good intermediate step, after we clarified to ourselves what was going on; it might be "good practice in the same way that the Eliezer initiative was good practice." The premise should be, "If this is within the Overton window for _Less Wrong_ moderators, there's a serious confusion on the conditions required for discourse", not on scapegoating individuals. He was less optimistic about harm-reduction; participating on the site was implicitly endorsing it by submitting the rule of the karma and curation systems.
+Ben said that trying to have a discussion with the _Less Wrong_ mod team would be a good intermediate step, after we clarified to ourselves what was going on; it might be "good practice in the same way that the Eliezer initiative was good practice." The premise should be, "If this is within the Overton window for _Less Wrong_ moderators, there's a serious confusion on the conditions required for discourse"—scapegoating individuals wasn't part of it.
He was less optimistic about harm reduction; participating on the site was implicitly endorsing it by submitting to the rule of the karma and curation systems. -"Riley" expressed sadness about how the discussion on "The Incentives" demonstrated that the community they loved—including dear friends—was in a very bad way. Michael (in a separate private discussion) had said he was glad to hear about the belief-update. "Riley" said that Michael saying that also made them sad, because it seemed discordant to be happy about sad news. Michael wrote (in the thread): +"Riley" expressed sadness about how the discussion on "The Incentives" demonstrated that the community they loved—including dear friends—was in a bad way. Michael (in a separate private discussion) had said he was glad to hear about the belief-update. "Riley" said that Michael saying that also made them sad, because it seemed discordant to be happy about sad news. Michael wrote (in the thread): -> I['m] sorry it made you sad. From my perspective, the question is no[t] "can we still be friends with such people", but "how can we still be friends with such people" and I am pretty certain that understanding their perspective if an important part of the answer. If clarity seems like death to them and like life to us, and we don't know this, IMHO that's an unpromising basis for friendship. +> I['m] sorry it made you sad. From my perspective, the question is no[t] "can we still be friends with such people", but "how can we still be friends with such people" and I am pretty certain that understanding their perspective [is] an important part of the answer. If clarity seems like death to them and like life to us, and we don't know this, IMHO that's an unpromising basis for friendship. ------ @@ -171,17 +173,17 @@ I emailed the coordination group about the thread, on the grounds that gauging t "Wow, he's really overtly arguing that people should lie to him to protect his feelings," Ben commented via email. I would later complain to Anna (Subject: "uh, guys???", 20 July 2019) that Ruby's profile said he was one of two people to have volunteered for CfAR on three continents. If this was the level of performance we could expect from veteran CfAR participants, what was CfAR _for_? -[I replied to Ruby that](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=v3zh3KhKNTdMXWkJH) you could just directly respond to your interlocutor's arguments. Whether or not you respect them as a thinker is _off-topic_. "You said X, but this is wrong because of Y" isn't a personal attack! +[I replied to Ruby that](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=v3zh3KhKNTdMXWkJH) you could just directly respond to your interlocutor's arguments. Whether you respect them as a thinker is off-topic. "You said X, but this is wrong because of Y" isn't a personal attack! -Jessica said that there's no point in getting mad at [MOPs](http://benjaminrosshoffman.com/construction-beacons/). I said I was a _little_ bit mad, because I specialized in discourse strategies that were susceptible to getting trolled like this. I thought it was ironic that this happened on a post that was _explicitly_ about causal _vs._ social reality; it's possible that I wouldn't be inclined to be such a hardass about "whether or not I respect you is off-topic" if it weren't for that prompt. +Jessica said that there's no point in getting mad at [MOPs](http://benjaminrosshoffman.com/construction-beacons/). 
I said I was a _little_ bit mad, because I specialized in discourse strategies that were susceptible to getting trolled like this. I thought it was ironic that this happened on a post that was _explicitly_ about causal _vs._ social reality; it's possible that I wouldn't have been such a hardass about "whether or not I respect you is off-topic" if it weren't for that prompt.

-Jessica ended up writing a post, ["Self-Consciousness Wants Everything to Be About Itself"](https://www.lesswrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself), arguing that tone arguments are mainly about people silencing discussion of actual problems in order to protect their feelings, using as a central example a case study of a college official crying and saying that she "felt attacked" in response to complaints about her office being insufficiently supportive of a racial community.

+Jessica ended up writing a post, ["Self-Consciousness Wants Everything to Be About Itself"](https://www.lesswrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself), arguing that tone arguments are mainly about people silencing discussion of actual problems in order to protect their feelings. She used as a central example a case study of a college official crying and saying that she "felt attacked" in response to complaints about her office being insufficiently supportive of a racial community.

-Jessica was surprised by how well it worked, judging by [Ruby mentioning silencing in a subsequent apology to me](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=wfzxj4GGRtZGMG9ni) (plausibly influenced by Jessica's post), and [an exchange between Ray Arnold (also a mod) and Ruby that she thought was "surprisingly okay"](https://www.lesswrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself?commentId=EW3Mom9qfoggfBicf).

+Jessica was surprised by how well it worked, judging by [Ruby mentioning silencing in a subsequent apology to me](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=wfzxj4GGRtZGMG9ni) (plausibly influenced by Jessica's post) and by [an exchange between Ray and Ruby that she thought was "surprisingly okay"](https://www.lesswrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself?commentId=EW3Mom9qfoggfBicf).

-From this, Jessica derived the moral that when people are doing something that seems obviously terrible and in bad faith, it could help to publicly explain why the abstract thing is bad, without accusing anyone. This made sense because people didn't want to be held to standards that other people aren't being held to: a call-out directed at oneself personally could be selective enforcement, but a call-out of the abstract pattern invited changing one's behavior if the new equilibrium looked better.

+From this, Jessica derived the moral that when people are doing something that seems obviously terrible and in bad faith, it can help to publicly explain why the abstract thing is bad, without accusing anyone. This made sense because people didn't want to be held to standards that other people weren't being held to: a call-out directed at oneself personally could be selective enforcement, but a call-out of the abstract pattern invited changing one's behavior if the new equilibrium looked better.
-Michael said that part of the reason this worked was because it represented a clear threat to scapegoat, while also _not_ scapegoating, and not surrendering the option to do so later; it was significant that Jessica's choice of example positioned her on the side of the powerful social-justice coalition.

+Michael said that part of the reason this worked was that it represented a clear threat of scapegoating without actually scapegoating and without surrendering the option to do so later; it was significant that Jessica's choice of example positioned her on the side of the powerful social-justice coalition.

------

On 4 July 2019, Scott Alexander published ["Some Clarifications on Rationalist Blogging"](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/).

-------

-Also in early July 2019, Jessica published ["The AI Timelines Scam"](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), arguing that the recent prominence of "short" (_e.g._, 2030) AI timelines was better explained by political factors, rather than any technical arguments: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own project.

+Also in early July 2019, Jessica published ["The AI Timelines Scam"](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), arguing that the recent prominence of "short" (_e.g._, 2030) AI timelines was better explained by political factors than by technical arguments: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own project.

-(Remember, this was 2019. After seeing what GPT-3, [DALL-E](https://openai.com/research/dall-e), [PaLM](https://arxiv.org/abs/2204.02311), _&c._ could do during the "long May 2020", it's now looks to me that the short-timelines people had better intuitions than Jessica gave them credit for.)

+(Remember, this was 2019. After seeing what GPT-3, [DALL-E](https://openai.com/research/dall-e), [PaLM](https://arxiv.org/abs/2204.02311), _&c._ could do during the "long May 2020", it now looks to me like the short-timelines people had better intuitions than Jessica gave them credit for.)

-I still sympathized with the pushback from Caliphate supporters against using "scam"/"fraud"/"lie"/_&c._ language to include motivated [elephant-in-the-brain](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain)-like distortions. I conceded that this was a boring semantic argument, but I feared that until we invented better linguistic technology, the boring semantic argument was going to continue sucking up discussion bandwidth with others when it didn't need to.

+I still sympathized with the pushback from Caliphate supporters against using "scam"/"fraud"/"lie"/_&c._ language to include motivated [elephant-in-the-brain](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain)-like distortions. I conceded that this was a boring semantic argument, but I feared that until we invented better linguistic technology, the boring semantic argument was going to continue sucking up discussion bandwidth with others.

"Am I being too tone-policey here?" I asked the coordination group. "Is it better if I explicitly disclaim, 'This is marketing advice; I'm not claiming to be making a substantive argument'?"
(Subject: "Re: reception of 'The AI Timelines Scam' is better than expected!") -Ben replied, "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you?" Investigations of financial fraud focused on promises about money being places being false because the money was not in fact in those places, rather than the psychological minutiæ of the perp's exact motives. +Ben replied, "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you?" He argued that investigations of financial fraud focus on false promises about money, rather than the psychological minutiæ of the perp's motives. -I replied that the concept of [_mens rea_](https://www.law.cornell.edu/wex/mens_rea) did seem necessary for maintaining good incentives, at least in some contexts. The law needs to distinguish between accidentally hitting a pedestrian in one's car ("manslaughter") from premeditated killing ("first-degree murder"), because traffic accidents are significantly less disincentivizable than offing one's enemies. (Anyone who drives at all is taking on some nonzero risk of committing vehicular manslaughter.) The manslaughter case was simpler than misinformation-that-moves-resources,[^manslaughter-disanalogy] and it might not be _easy_ for the court to determine "intent", but I didn't see what would reverse the weak principle that intent matters, sometimes. +I replied that the concept of [_mens rea_](https://www.law.cornell.edu/wex/mens_rea) did seem necessary for maintaining good incentives, at least in some contexts. The law needs to distinguish between accidentally hitting a pedestrian in one's car ("manslaughter") and premeditated killing ("first-degree murder"), because traffic accidents are significantly less disincentivizable than offing one's enemies. (Anyone who drives at all is taking on some nonzero risk of committing vehicular manslaughter.) The manslaughter example was simpler than misinformation-that-moves-resources,[^manslaughter-disanalogy] and it might not be _easy_ for the court to determine "intent", but I didn't see what would reverse the weak principle that intent sometimes matters. [^manslaughter-disanalogy]: For one extremely important disanalogy, perps don't _gain_ from committing manslaughter. -Ben replied that what mattered in the determination of manslaughter _vs._ murder was whether there was long-horizon optimization power towards the outcome of someone's death; it wasn't about what sentiments the killer rehearsed in their working memory. +Ben replied that what mattered in the determination of manslaughter _vs._ murder was whether there was long-horizon optimization power toward the outcome of someone's death, not what sentiments the killer rehearsed in their working memory. -On a phone call, Michael made an analogy between EA and Catholicism. The Pope was fraudulent, because the legitimacy of the Pope's position (and his claims to power and resources) rested on the pretense that he had a direct relationship with God, which wasn't true, and the Pope had to know on some level that it wasn't true. (I agreed that this usage of _fraud_ made sense to me.) 
In Michael's view, Ben's charges against GiveWell were similar: GiveWell's legitimacy rested on the pretense that they were making decisions based on numbers, and they [had to know at some level](https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/) that they weren't doing that.

+Michael made an analogy between EA and Catholicism on a later phone call. The Pope was fraudulent, because the legitimacy of the Pope's position (and his claims to power and resources) rested on the pretense that he had a direct relationship with God, which wasn't true, and the Pope had to know on some level that it wasn't true. (I agreed that this usage of "fraud" made sense to me.) In Michael's view, Ben's charges against GiveWell were similar: GiveWell's legitimacy rested on the pretense that they were making decisions based on numbers, and they [had to know at some level](https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/) that they weren't doing that.

------

@@ -229,27 +231,27 @@ Ray Arnold replied:

Even with the qualifier, I still think this deserves a "(!!)".

-Jessica pointed this out as a step towards discussing the real problem (Subject: "progress towards discussing the real thing??"). She elaborated in the secret thread: now that the "EA" scene was adjacent to real-world money and power, people were incentivized to protect their reputations (and beliefs related to their reputations) in anti-epistemic ways, in a way that they wouldn't if the scene was still just a philosophy club, catalyzing a shift of norms from "that which can be destroyed by the truth, should be" towards protecting feelings—where "protecting feelings" was actually about protecting power. The fact that the scene was allocating billions of dollars made it strictly _more_ important for public discussions to reach the truth, compared to philosophy club—but it also increased the likelihood of obfuscatory behavior which philosophy-club norms (like "assume good faith") didn't account for. We might need to extend philosophy-club norms to take into account the possibility of adversarial action: there's a reason that courts of law don't assume good faith. We didn't want to disproportionately punish people for getting caught up in obfuscatory patterns—that would just increase the incentive to obfuscate—but we did need some way to reveal what was going on.

+Jessica pointed this out as a step towards discussing the real problem (Subject: "progress towards discussing the real thing??"). She elaborated in the secret thread: now that the "EA" scene was adjacent to real-world money and power, people were incentivized to protect their reputations (and beliefs related to their reputations) in anti-epistemic ways, in a way that they wouldn't if the scene were still just a philosophy club. This was catalyzing a shift of norms from "that which can be destroyed by the truth, should be" towards protecting feelings—where "protecting feelings" was actually about protecting power. The fact that the scene was allocating billions of dollars made it _more_ important for public discussions to reach the truth, compared to philosophy club—but it also increased the likelihood of obfuscatory behavior that philosophy-club norms (like "assume good faith") didn't account for. We might need to extend philosophy-club norms to take into account the possibility of adversarial action: there's a reason that courts of law don't assume good faith.
We didn't want to disproportionately punish people for getting caught up in obfuscatory patterns; that would just increase the incentive to obfuscate. But we did need some way to reveal what was going on.

-In email, Jessica acknowledged that Ray had a point that it was confusing to use court-inspired language if we didn't intend to blame and punish people. Michael said that court language was our way to communicate, "You don't have the option of non-engagement with the complaints that are being made." (Courts can _summon_ people; you can't ignore a court summons the way you can ignore ordinary critics.)

+In email, Jessica acknowledged that Ray had a point: it was confusing to use court-inspired language if we didn't intend to blame and punish people. Michael said that court language was our way to communicate "You don't have the option of non-engagement with the complaints that are being made." (Courts can _summon_ people; you can't ignore a court summons the way you can ignore ordinary critics.)

-Michael said that we should also develop skill in using social-justicey blame language, as was used against us, harder, while we were still acting under mistake-theoretic assumptions. "Riley" said that this was a terrifying you-have-become-the-abyss suggestion; Ben thought it was obviously a good idea.

+Michael said that we should also develop skill in using social-justicey blame language (as it had been used against us, but harder) while we still thought of ourselves as [trying to correct people's mistakes rather than being in a conflict](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) against the Blight. "Riley" said that this was a terrifying you-have-become-the-abyss suggestion; Ben thought it was obviously a good idea.

-I was pretty horrified by the extent to which _Less Wrong_ moderators (!!) seemed to be explicitly defending "protect feelings" norms. Previously, I had mostly been seeing the present struggle through the lens of my idiosyncratic Something to Protect, as a simple matter of Bay Area political correctness; I was happy to have Michael/Ben/Jessica as allies, but I wasn't _seeing_ the Blight as a unified problem. Now ... I was seeing _something_.

+I was horrified by the extent to which _Less Wrong_ moderators (!) seemed to be explicitly defending "protect feelings" norms. Previously, I had mostly been seeing the present struggle through the lens of my idiosyncratic Something to Protect as a simple matter of Bay Area political correctness. I was happy to have Michael/Ben/Jessica as allies, but I hadn't been seeing the Blight as a unified problem. Now I was seeing _something_.

-An in-person meeting was arranged on 23 July 2019 at the _Less Wrong_ office, with Ben, Jessica, me, and most of the _Less Wrong_ team (Ray, Ruby, Oliver Habryka, Vaniver, Jim Babcock). I don't have notes and don't really remember what was discussed in enough detail to faithfully recount it.[^memory] I ended up crying at one point and left the room for a while.

+An in-person meeting was arranged for 23 July 2019 at the _Less Wrong_ office, with Ben, Jessica, me, and most of the _Less Wrong_ team (Ray, Ruby, Oliver Habryka, Vaniver, Jim Babcock). I don't have notes and don't really remember what was discussed in enough detail to faithfully recount it.[^memory] I ended up crying at one point and left the room for a while.

-[^memory]: An advantage of mostly living on the internet is that I have logs of the important things.
I'm only able to tell this Whole Dumb Story with as much fidelity as I am, because for most of it, I can go back and read the emails and chatlogs from the time. Now that [audio transcription has fallen to AI](https://openai.com/blog/whisper/), maybe I should be recording more real-life conversations? In the case of this meeting, supposedly one of the _Less Wrong_ guys was recording, but no one had it when I asked in October 2022. +[^memory]: An advantage of mostly living on the internet is that I have logs of the important things. I'm only able to tell this Whole Dumb Story with this much fidelity because for most of it, I can go back and read the emails and chatlogs from the time. Now that [audio transcription has fallen to AI](https://openai.com/blog/whisper/), maybe I should be recording more real-life conversations? In the case of this meeting, supposedly one of the _Less Wrong_ guys was recording, but no one had it when I asked in October 2022. -The next day, I asked Ben and Jessica for their takeaways via email (Subject: "peace talks outcome?"). Jessica said that I was a "helpful emotionally expressive and articulate victim", that there seemed to be a consensus that it would be better if people like me could be warned somehow that _Less Wrong_ wasn't doing the general sanity-maximization thing anymore. (Because community leaders were willing to sacrifice, for example, ability to discuss non-AI heresies, in order to focus on sanity about AI in particular while maintaining enough mainstream acceptability and power.) +The next day, I asked Ben and Jessica for their takeaways via email (Subject: "peace talks outcome?"). Jessica said that I was a "helpful emotionally expressive and articulate victim" and that there seemed to be a consensus that people like me should be warned somehow that _Less Wrong_ wasn't doing fully general sanity-maximization anymore. (Because community leaders were willing to sacrifice, for example, ability to discuss non-AI heresies in order to focus on sanity about AI in particular while maintaining enough mainstream acceptability and power.) I said that for me and my selfish perspective, the main outcome was finally shattering my "rationalist" social identity. I needed to exhaust all possible avenues of appeal before it became real to me. The morning after was the first for which "rationalists ... them" felt more natural than "rationalists ... us". ------- -Michael's reputation in "the community", already not what it once was, continued to be debased even further. +Michael's reputation in the community, already not what it once was, continued to be debased even further. -The local community center, the Berkeley REACH,[^reach-acronym-expansion] was conducting an investigation as to whether to exclude Michael (which was mostly moot, as he didn't live in the Bay Area anyway). When I heard that the committee conducting the investigation was "very close to releasing a statement", I wrote to them: +The local community center, the Berkeley REACH,[^reach-acronym-expansion] was conducting an investigation as to whether to exclude Michael (which was mostly moot, as he didn't live in the Bay Area). When I heard that the committee conducting the investigation was "very close to releasing a statement", I wrote to them: [^reach-acronym-expansion]: Rationality and Effective Altruism Community Hub @@ -263,27 +265,27 @@ I replied: > Allow me to rephrase my question about charges. 
What are the reasons that the safety of the space and the community require you to write a report about Michael? To be clear, a community that excludes Michael on inadequate evidence is one where _I_ feel unsafe.

-We arranged a call, during which I angrily testified that Michael was no threat to the safety of the space and the community—which would have been a bad idea if it were the cops, but in this context, I figured my political advocacy couldn't hurt.

+We arranged a call, during which I angrily testified that Michael was no threat to the safety of the space and the community. This would have been a bad idea if it were the cops, but in this context, I figured my political advocacy couldn't hurt.

-Concurrently, I got into an argument with Kelsey Piper about Michael, after she had written on Discord that her "impression of _Vassar_'s threatening schism is that it's fundamentally about Vassar threatening to stir shit up until people stop socially excluding him for his bad behavior". I didn't think that was what the schism was about (Subject: "Michael Vassar and the theory of optimal gossip").

+Concurrently, I got into an argument with Kelsey Piper about Michael after she wrote on Discord that her "impression of _Vassar_'s threatening schism is that it's fundamentally about Vassar threatening to stir shit up until people stop socially excluding him for his bad behavior." I didn't think that was what the schism was about (Subject: "Michael Vassar and the theory of optimal gossip").

-In the course of litigating Michael's motivations (the details of which are not interesting enough to summarize here), Kelsey mentioned that she thought Michael had done immense harm to me: that my models of the world and ability to reason were worse than they were a year ago. I thanked her for the concern, and asked if she could be more specific.

+In the course of litigating Michael's motivations (the details of which are not interesting enough to summarize here), Kelsey mentioned that she thought Michael had done immense harm to me—that my models of the world and ability to reason were worse than they were a year ago. I thanked her for the concern, and asked if she could be more specific.

-She said she was referring to my ability to predict consensus and what other people believe. I expected arguments to be convincing to other people which the other people found, not just not convincing, but so obviously not convincing that it was confusing I bothered raising them. I believed things to be in obvious violation of widespread agreement, when everyone else thought it wasn't. My shocked indignation at other people's behavior indicated a poor model of social reality.

+She said she was referring to my ability to predict consensus and what other people believe. I expected people to be convinced by arguments that they found not only unconvincing, but so unconvincing they didn't see why I would bother. I believed things to be in obvious violation of widespread agreement when everyone else thought they were not. My shocked indignation at other people's behavior indicated a poor model of social reality.

-I considered this an insightful observation about a way in which I'm socially retarded.
I had had [similar](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/) [problems](http://zackmdavis.net/blog/2012/07/trying-to-buy-a-lamp/) [with](http://zackmdavis.net/blog/2012/12/draft-of-a-letter-to-a-former-teacher-which-i-did-not-send-because-doing-so-would-be-a-bad-idea/) [school](http://zackmdavis.net/blog/2013/03/strategy-overhaul/). We're told that the purpose of school is education (to the extent that most people think of _school_ and _education_ as synonyms), but the consensus behavior is "sit in lectures and trade assignments for grades." Faced with what I saw as a contradiction between the consensus narrative and the consensus behavior, I would assume that the narrative was the "correct" version, and so I spent a lot of time trying to start conversations about math with everyone and then getting outraged and indignant when they'd say, "What class is this for?" Math isn't for classes; it's the other way around, right? +I considered this an insightful observation about a way in which I'm socially retarded. I had had [similar](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/) [problems](http://zackmdavis.net/blog/2012/07/trying-to-buy-a-lamp/) [with](http://zackmdavis.net/blog/2012/12/draft-of-a-letter-to-a-former-teacher-which-i-did-not-send-because-doing-so-would-be-a-bad-idea/) [school](http://zackmdavis.net/blog/2013/03/strategy-overhaul/). We're told that the purpose of school is education (to the extent that most people think of _school_ and _education_ as synonyms), but the consensus behavior is "sit in lectures and trade assignments for grades." Faced with what I saw as a contradiction between the consensus narrative and the consensus behavior, I would assume that the narrative was the "correct" version, and so I spent a lot of time trying to start conversations about math with everyone and then getting indignant when they'd say, "What class is this for?" Math isn't for classes; it's the other way around, right? -Empirically, not right! But I had to resolve the contradiction between narrative and reality somehow, and if my choices were "People are [mistakenly](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) failing to live up to the narrative" and "[Everybody knows](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) the narrative is a lie; it would be crazy to expect people to live up to it", the former had been more appealing. +Empirically, no! But I had to resolve the contradiction between narrative and reality somehow, and if my choices were "People are [mistakenly](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) failing to live up to the narrative" and "[Everybody knows](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) the narrative is a lie; it would be crazy to expect people to live up to it", the former had been more appealing. -It was the same thing here. Kelsey said that it was completely predictable that Yudkowsky wouldn't make a public statement, even one as uncontroversial as "category boundaries should be drawn for epistemic and not instrumental reasons", because his experience of public statements was that they'd be taken out of context and used against MIRI by the likes of /r/SneerClub. This wasn't an update at all. (Everyone at "Arcadia" had agreed, in the house discussion in April.) Vassar's insistence that Eliezer be expected to do something that he obviously was never going to do had caused me to be confused and surprised by reality.[^statement] +It was the same thing here. 
Kelsey said that it was predictable that Yudkowsky wouldn't make a public statement, even one as uncontroversial as "category boundaries should be drawn for epistemic and not instrumental reasons," because his experience of public statements was that they'd be taken out of context and used against MIRI by the likes of /r/SneerClub. This wasn't an update at all. (Everyone at "Arcadia" had agreed, in the house discussion in April.) Vassar's insistence that Eliezer be expected to do something that he obviously was never going to do had caused me to be confused and surprised by reality.[^statement] -[^statement]: I thought it was odd that Kelsey seemed to think the issue was that me and my allies were pressuring Yudkowsky to make a public statement, which he supposedly never does. From our perspective, the issue was that he _had_ made a statement, and it was wrong. +[^statement]: Oddly, Kelsey seemed to think the issue was that my allies and I were pressuring Yudkowsky to make a public statement, which he supposedly never does. From our perspective, the issue was that he _had_ made a statement and it was wrong. -Kelsey seemed to be taking it as obvious that Eliezer Yudkowsky's public behavior was optimized to respond to the possibility of political attacks by people who hate him anyway, and not optimized to respond to the actuality of thousands of words of careful arguments appealing to his own writings from ten years ago. Very well. Maybe it _was_ obvious. But that being the case, I had no reason to care what Eliezer Yudkowsky said, because not-provoking-SneerClub isn't truth-tracking, and careful arguments are. This was a huge surprise _to me_, even if Kelsey knew better. +Kelsey seemed to be taking it as obvious that Eliezer Yudkowsky's public behavior was optimized to respond to the possibility of political attacks by people who hate him anyway, and not the actuality of thousands of words of careful arguments appealing to his own writings from ten years ago. Very well. Maybe it _was_ obvious. But if so, I had no reason to care what Eliezer Yudkowsky said; not provoking SneerClub isn't truth-tracking, and careful arguments are. This was a huge surprise to me, even if Kelsey knew better. -What Kelsey saw as "Zack is losing his ability to model other people and I'm worried about him", I thought Ben and Jessica would see as "Zack is angry about living in [simulacrum level 3](http://benjaminrosshoffman.com/excerpts-from-a-larger-discussion-about-simulacra/) and we're worried about _everyone else_." +What Kelsey saw as "Zack is losing his ability to model other people and I'm worried about him," I thought Ben and Jessica would see as "Zack is angry about living in [simulacrum level 3](http://benjaminrosshoffman.com/excerpts-from-a-larger-discussion-about-simulacra/) and we're worried about _everyone else_." -I did think that Kelsey was mistaken about how much causality to attribute to Michael's influence, rather than me already being socially retarded. From my perspective, validation from Michael was merely the catalyst that excited me from confused-and-sad to confused-and-socially-aggressive-about-it. The social-aggression phase revealed a lot of information—not just to me. Now I was ready to be less confused—after I was done grieving. +I did think that Kelsey was mistaken about how much causality to attribute to Michael's influence, rather than to me already being socially retarded. 
From my perspective, validation from Michael was merely the catalyst that excited me from confused-and-sad to confused-and-socially-aggressive-about-it. The latter phase revealed a lot of information, and not just to me. Now I was ready to be less confused—after I was done grieving.

Later, talking in person at "Arcadia", Kelsey told me that someone whose identity she would not disclose had threatened to sue over the report about Michael, so REACH was delaying its release for the one-year statute of limitations. As far as my interest in defending Michael went, I counted this as short-term good news (because the report wasn't being published) but longer-term bad news (because the report must be a hit piece if Michael's mysterious ally was trying to hush it).

When I mentioned this to Michael on Signal on 3 August 2019, he replied:

> The person is me, the whole process is a hit piece, literally, the investigation process and not the content. Happy to share the latter with you. You can talk with Ben about appropriate ethical standards.

-In retrospect, I feel dumb for not guessing that Michael's mysterious ally was Michael himself. I count this kind of situation as another reason to be [annoyed at how norms protecting confidentiality](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#privacy-constraints) distort information; Kelsey apparently felt obligated to obfuscate any names connected to potential litigation, which led me to the infer the existence of a nonexistent person (because I naïvely assumed that if Michael had been the person who threatened to sue, Kelsey would have said that). I can't say I never introduce this kind of distortion myself (for I, too, am bound by norms), but when I do, I feel dirty about it.

+In retrospect, I feel dumb for not guessing that Michael's mysterious ally was Michael himself. This kind of situation is an example of [how norms protecting confidentiality](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#privacy-constraints) distort information; Kelsey felt obligated to obfuscate any names connected to potential litigation, which led me to infer the existence of a nonexistent person. I can't say I never introduce this kind of distortion myself (for I, too, am bound by norms), but when I do, I feel dirty about it.

As far as appropriate ethical standards go, I didn't approve of silencing critics with lawsuit threats, even while I agreed with Michael that "the process is the punishment." I imagine that if the REACH wanted to publish a report about me, I would defend myself in public, having faith that the [beautiful weapon](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/) of my Speech would carry the day against a corrupt community center—or for that matter, against /r/SneerClub.

-- 
2.17.1