From: Zack M. Davis Date: Wed, 25 Oct 2023 03:15:48 +0000 (-0700) Subject: memoir: pt. 3 editing sweep cont'd; eye to second batch X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=86c65be587ae548d27e8e5a5de7ca01d84614c9f;p=Ultimately_Untrue_Thought.git memoir: pt. 3 editing sweep cont'd; eye to second batch If I keep my focus on polishing pt. 3–5, I'll be able to ship them to pre-readers, and then the world. --- diff --git a/content/2023/a-hill-of-validity-in-defense-of-meaning.md b/content/2023/a-hill-of-validity-in-defense-of-meaning.md index 8a48cd5..3047bd3 100644 --- a/content/2023/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/2023/a-hill-of-validity-in-defense-of-meaning.md @@ -406,7 +406,7 @@ I settled on Sara Bareilles's ["Gonna Get Over You"](https://www.youtube.com/wat Meanwhile, my email thread with Scott started up again. I expressed regret that all the times I had emailed him over the past couple years had been when I was upset about something (like [psych hospitals](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/), or—something else) and wanted something from him, treating him as a means rather than an end—and then, despite that regret, I continued prosecuting the argument. -One of Alexander's [most popular _Less Wrong_ posts ever had been about the noncentral fallacy, which Alexander called "the worst argument in the world"](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world): those who (for example) crow that abortion is _murder_ (because murder is the killing of a human being), or that Martin Luther King, Jr. was a _criminal_ (because he defied the segregation laws of the South), are engaging in a dishonest rhetorical maneuver in which they're trying to trick their audience into assigning attributes of the typical "murder" or "criminal" to what are very noncentral members of those categories. +One of Alexander's [most popular _Less Wrong_ posts ever had been about the noncentral fallacy, which Alexander called "the worst argument in the world"](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world): those who (for example) crow that abortion is _murder_ (because murder is the killing of a human being), or that Martin Luther King, Jr. was a _criminal_ (because he defied the segregation laws of the South), are engaging in a dishonest rhetorical maneuver in which they're trying to trick their audience into assigning attributes of the typical "murder" or "criminal" to what are very noncentral members of those categories. Even if you're opposed to abortion, or have negative views about the historical legacy of Dr. King, this isn't the right way to argue. If you call Fiona a _murderer_, that causes me to form a whole bunch of implicit probabilistic expectations on the basis of what the typical "murder" is like—expectations about Fiona's moral character, about the suffering of a victim whose hopes and dreams were cut short, about Fiona's relationship with the law, _&c._—most of which get violated when you reveal that the murder victim was an embryo. 
diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md index 8bbbd0c..62fd4fd 100644 --- a/content/drafts/if-clarity-seems-like-death-to-them.md +++ b/content/drafts/if-clarity-seems-like-death-to-them.md @@ -25,7 +25,7 @@ I had been hyperfocused on prosecuting my Category War, but the reason Michael V [^posse-boundary]: Sarah Constantin and "Riley" had also been involved in reaching out to Yudkowsky, and were included in many subsequent discussions, but seemed like more marginal members of the group that was forming. -Ben had previously worked at GiveWell and had written a lot about problems with the effective altruism movement, in particular, EA-branded institutions making [incoherent](http://benjaminrosshoffman.com/givewell-and-partial-funding/) [decisions](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/) under the influence of incentives to [distort](http://benjaminrosshoffman.com/humility-argument-honesty/) [information](http://benjaminrosshoffman.com/honesty-and-perjury/) [in order to](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/) [seek](http://benjaminrosshoffman.com/against-neglectedness/) [control](http://benjaminrosshoffman.com/against-responsibility/). Jessica had previously worked at MIRI, where she was unnerved by under-evidenced paranoia about secrecy and [short AI timelines](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), and would later [write](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe) [about](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards) her experiences there. To what extent were my gender and categories thing, and Ben's EA thing, and Jessica's MIRI thing, manifestations of "the same" underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit? +Ben had previously worked at GiveWell and had written a lot about problems with the effective altruism (EA) movement, in particular, EA-branded institutions making [incoherent](http://benjaminrosshoffman.com/givewell-and-partial-funding/) [decisions](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/) under the influence of incentives to [distort](http://benjaminrosshoffman.com/humility-argument-honesty/) [information](http://benjaminrosshoffman.com/honesty-and-perjury/) [in order to](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/) [seek](http://benjaminrosshoffman.com/against-neglectedness/) [control](http://benjaminrosshoffman.com/against-responsibility/). Jessica had previously worked at MIRI, where she was unnerved by under-evidenced paranoia about secrecy and [short AI timelines](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), and would later [write](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe) [about](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards) her experiences there. To what extent were my gender and categories thing, and Ben's EA thing, and Jessica's MIRI thing, manifestations of "the same" underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit? 
If there was a real problem, I didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But culture is the sum of lots and lots of little micro-actions by lots and lots of people. If your entire culture has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or that they don't think anything has changed, or that they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!" Any ideologue or religious person would agree with _that_. It's not feasible to litigate every petty epistemic crime in something someone said, and if you tried, someone who thought the culture was basically on track could accuse you of cherry-picking. If "culture" is a real thing at all—and it certainly seems to be—we are condemned to grasp it unclearly, relying on the brain's pattern-matching faculties to sum over thousands of little micro-actions as a [_gestalt_](https://en.wiktionary.org/wiki/gestalt), rather than having the kind of robust, precise representation a well-designed AI could compute plans with. @@ -115,7 +115,7 @@ The ensuring trainwreck got so bad that the mods manually [moved the comments to [^yes-requires-slapfight-highlights]: I particularly appreciated Said Achmiz's [defense of disregarding community members' feelings](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=EsSdLMrFcCpSvr3pG), and [Ben's commentary on speech acts that lower the message length of proposals to attack some group](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=TXbgr7goFtSAZEvZb). -On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to "... Not Man for the Categories" as one of Scott Alexander's best essays. I argued that it would be better to cite almost literally any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: either Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, or one had no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way. The moderator who wrote the draft [was persuaded](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for?commentId=oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory." +On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to "... Not Man for the Categories" as one of Scott Alexander's best essays. 
I argued that it would be better to cite almost literally any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: either Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, or one had no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way. Ruby Bloom, the new moderator who wrote the draft, [was persuaded](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for?commentId=oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory." But winning "victories" wasn't particularly comforting when I resented this becoming a political slapfight at all. I wrote to Anna and Steven Kaas (another old-timer who I was trying to "recruit" onto my side of the civil war). In ["What You Can't Say"](http://www.paulgraham.com/say.html), Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what one's real work was. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except for this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason, should probably be doing more fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improving our collective ability to reason" in a way that they don't hurt the mission of "make a lot of money writing software." @@ -129,17 +129,13 @@ Math and Wellness Month ended up being mostly a failure: the only math I ended u In June 2019, I made [a linkpost on _Less Wrong_](https://www.lesswrong.com/posts/5nH5Qtax9ae8CQjZ9/tal-yarkoni-no-it-s-not-the-incentives-it-s-you) to Tal Yarkoni's ["No, It's Not The Incentives—It's you"](https://www.talyarkoni.org/blog/2018/10/02/no-its-not-the-incentives-its-you/), about how professional scientists should stop using career incentives as an excuse for doing poor science. It generated a lot of discussion. -Looking over the thread in retrospect, [these words from David Xu seem significant](https://www.lesswrong.com/posts/5nH5Qtax9ae8CQjZ9/tal-yarkoni-no-it-s-not-the-incentives-it-s-you?commentId=qDPdneAQ4s7HMt3ys): - -> _We all know that falsifying data is bad._ But if that's the way the incentives point (and that's a very important if!), then it's _also_ bad to call people out for doing it. If you do that, then you're using moral indignation as a weapon—a way to not only coerce other people into using up their willpower, but to come out of it looking good yourself. - In an email (Subject: "LessWrong.com is dead to me"), Jessica identified the thread as her last straw: > LessWrong.com is a place where, if the value of truth conflicts with the value of protecting elites' feelings and covering their asses, the second value will win. > > Trying to get LessWrong.com to adopt high-integrity norms is going to fail, hard, without a _lot_ of conflict. 
(Enforcing high-integrity norms is like violence; if it doesn't work, you're not doing enough of it). People who think being exposed as fraudulent (or having their friends exposed as fraudulent) is a terrible outcome, are going to actively resist high-integrity discussion norms.

-Posting on _Less Wrong_ made sense as harm-reduction, but the only way to get people to stick up for truth would be to convert them to _a whole new worldview_, which would require a lot of in-person discussions. She bought up the idea of starting a new forum to replace _Less Wrong_.
+Posting on _Less Wrong_ made sense as harm-reduction, but the only way to get people to stick up for truth would be to convert them to a whole new worldview, which would require a lot of in-person discussions. She brought up the idea of starting a new forum to replace _Less Wrong_.

Ben said that trying to discuss with the _Less Wrong_ mod team would be a good intermediate step, after we clarified to ourselves what was going on; it might be "good practice in the same way that the Eliezer initiative was good practice." The premise should be, "If this is within the Overton window for _Less Wrong_ moderators, there's a serious confusion on the conditions required for discourse", not on scapegoating individuals. He was less optimistic about harm-reduction; participating on the site was implicitly endorsing it by submitting to the rule of the karma and curation systems.

@@ -149,23 +145,13 @@ Ben said that trying to discuss with the _Less Wrong_ mod team would be a good i

------

-I got into a scuffle with Ruby (someone who had newly joined the _Less Wrong_ mod team) on his post on ["Causal Reality _vs_. Social Reality"](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality). One section of the post asks, "Why people aren't clamoring in the streets for the end of sickness and death?" and gives the answer that it's because no one else is; people live in a social reality that accepts death as part of the natural order, even though life extension seems like it should be physically possible in causal reality.
-
-I didn't think this was a good example. "Clamoring in the streets" (even if you interpreted it as a metonym for other forms of mass political action) seemed like the kind of thing that would be recommended by social-reality thinking, rather than causal-reality thinking. How, causally, would the action of clamoring in the streets lead to the outcome of the end of sickness and death? I would expect means–end reasoning about causal reality to instead recommend things like working on or funding biomedical research.
-
-Ruby [complained that](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=7b2pWiCL33cqhTabg) my tone was too combative, and asked for more charity and collaborative truth-seeking[^collaborative-truth-seeking] in any future comments.
+I got into a scuffle with Ruby Bloom on his post on ["Causal Reality _vs_. Social Reality"](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality). I wrote [what I thought was a substantive critique](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=X8u8ozpvhwcK4GskA), but Ruby [complained that](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=7b2pWiCL33cqhTabg) my tone was too combative, and asked for more charity and collaborative truth-seeking[^collaborative-truth-seeking] in any future comments.
[^collaborative-truth-seeking]: [No one ever seems to be able to explain to me what this phrase means.](https://www.lesswrong.com/posts/uvqd3YiBcrPxXzxQM/what-does-the-word-collaborative-mean-in-the-phrase) (My previous interaction with Ruby had been my challenge to "... Not Man for the Categories" appearing on the _Less Wrong_ FAQ. Maybe he couldn't let me "win" again so quickly?) -I emailed the coordination group about it, on the grounds that gauging the psychology of the mod team was relevant to upcoming [Voice _vs._ Exit](https://en.wikipedia.org/wiki/Exit,_Voice,_and_Loyalty) choices: - -> he seems to be conflating transhumanist optimism with "causal reality", and then tone-policing me when I try to model good behavior of what means-end reasoning about causal reality actually looks like. This ... seems pretty cultish to me?? Like, it's fine and expected for this grade of confusion to be on the website, but it's more worrisome when it's coming from the mod team.[^rot-13] - -[^rot-13]: This part of the email was actually [rot-13'd](https://rot13.com) to let people write up their independent component without being contaminated by me; I reproduce the plaintext here. - -The meta-discussion on _Less Wrong_ started to get heated. Ruby claimed: +I emailed the coordination group about the thread, on the grounds that gauging the psychology of the mod team was relevant to upcoming [Voice _vs._ Exit](https://en.wikipedia.org/wiki/Exit,_Voice,_and_Loyalty) choices. Meanwhile on _Less Wrong_, Ruby kept doubling down: > [I]f the goal is everyone being less wrong, I think some means of communicating are going to be more effective than others. I, at least, am a social monkey. If I am bluntly told I am wrong (even if I agree, even in private—but especially in public), I will feel attacked (if only at the S1 level), threatened (socially), and become defensive. It makes it hard to update and it makes it easy to dislike the one who called me out. [...] > @@ -173,9 +159,7 @@ The meta-discussion on _Less Wrong_ started to get heated. Ruby claimed: > > Even if you wish to express that someone is wrong, I think this is done more effectively if one simultaneously continues to implicitly express "I think there is still some prior that you are correct and I curious to hear your thoughts", or failing that "You are very clearly wrong here yet I still respect you as a thinker who is worth my time to discourse with." [...] There's an icky thing here I feel like for there to be productive and healthy discussion you have to act as though at least one of the above statements is true, even if it isn't. -"Wow, he's really overtly arguing that people should lie to him to protect his feelings," Ben commented via email. - -(I would later complain to Anna (Subject: "uh, guys???", 20 July 2019) that Ruby's profile said he was one of two people to have volunteered for CfAR on three continents. If this was the level of performance we could expect from _veteran_ CfAR participants, what was CfAR _for_?) +"Wow, he's really overtly arguing that people should lie to him to protect his feelings," Ben commented via email. I would later complain to Anna (Subject: "uh, guys???", 20 July 2019) that Ruby's profile said he was one of two people to have volunteered for CfAR on three continents. If this was the level of performance we could expect from veteran CfAR participants, what was CfAR _for_? 
[I replied to Ruby that](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=v3zh3KhKNTdMXWkJH) you could just directly respond to your interlocutor's arguments. Whether or not you respect them as a thinker is _off-topic_. "You said X, but this is wrong because of Y" isn't a personal attack! @@ -183,7 +167,7 @@ Jessica said that there's no point in getting mad at [MOPs](http://benjaminrossh Jessica ended up writing a post, ["Self-Consciousness Wants Everything to Be About Itself"](https://www.lesswrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself), arguing that tone arguments are mainly about people silencing discussion of actual problems in order to protect their feelings, using as a central example a case study of a college official crying and saying that she "felt attacked" in response to complaints about her office being insufficiently supportive of a racial community. -Jessica was surprised by how well it worked, judging by [Ruby mentioning silencing in an apology to me](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=wfzxj4GGRtZGMG9ni) (plausibly influenced by Jessica's post), and [an exchange between Raemon (also a mod) and Ruby that she thought was "surprisingly okay"](https://www.lesswrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself?commentId=EW3Mom9qfoggfBicf). +Jessica was surprised by how well it worked, judging by [Ruby mentioning silencing in a subsequent apology to me](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=wfzxj4GGRtZGMG9ni) (plausibly influenced by Jessica's post), and [an exchange between Raemon (also a mod) and Ruby that she thought was "surprisingly okay"](https://www.lesswrong.com/posts/bwkZD6uskCQBJDCeC/self-consciousness-wants-to-make-everything-about-itself?commentId=EW3Mom9qfoggfBicf). From this, Jessica derived the moral that when people are doing something that seems obviously terrible and in bad faith, it could help to publicly explain why the abstract thing is bad, without accusing anyone. This made sense because people didn't want to be held to standards that other people aren't being held to: a call-out directed at oneself personally could be selective enforcement, but a call-out of the abstract pattern invited changing one's behavior if the new equilibrium looked better. @@ -195,27 +179,27 @@ On 4 July 2019, Scott Alexander published ["Some Clarifications on Rationalist B ------- -Also in early July 2019, Jessica published ["The AI Timelines Scam"](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), arguing that the recent popularity of "short" (_e.g._, 2030) AI timelines was better explained by political factors, rather than any technical arguments: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own project. +Also in early July 2019, Jessica published ["The AI Timelines Scam"](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), arguing that the recent prominence of "short" (_e.g._, 2030) AI timelines was better explained by political factors, rather than any technical arguments: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own project. -(Remember, this was 2019. 
After seeing what GPT-3, [DALL-E](https://openai.com/research/dall-e), [PaLM](https://arxiv.org/abs/2204.02311), _&c._ could do during the "long May 2020", it's now looking to me like the short-timelines people had better intuitions than Jessica gave them credit for.)
+(Remember, this was 2019. After seeing what GPT-3, [DALL-E](https://openai.com/research/dall-e), [PaLM](https://arxiv.org/abs/2204.02311), _&c._ could do during the "long May 2020", it now looks to me like the short-timelines people had better intuitions than Jessica gave them credit for.)

-I still sympathized with the "mainstream" pushback against using "scam"/"fraud"/"lie"/_&c._ language to include motivated [elephant-in-the-brain](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain)-like distortions. I conceded that this was a _boring_ semantic argument, but I feared that until we invented better linguistic technology, the _boring_ semantic argument was going to _continue_ sucking up discussion bandwidth with others when it didn't need to.
+I still sympathized with the pushback from Caliphate supporters against using "scam"/"fraud"/"lie"/_&c._ language to include motivated [elephant-in-the-brain](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain)-like distortions. I conceded that this was a boring semantic argument, but I feared that until we invented better linguistic technology, the boring semantic argument was going to continue sucking up discussion bandwidth with others when it didn't need to.

"Am I being too tone-policey here?" I asked the coordination group. "Is it better if I explicitly disclaim, 'This is marketing advice; I'm not claiming to be making a substantive argument'?" (Subject: "Re: reception of 'The AI Timelines Scam' is better than expected!")

-Ben replied, "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you? Investigations of financial fraud don't inquire as to the conscious motives of the perp."
+Ben replied, "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you?" Investigations of financial fraud focused on whether promises that the money was in certain places were false (whether the money was in fact not in those places), rather than on the psychological minutiæ of the perp's exact motives.

-I replied that the concept of [_mens rea_](https://www.law.cornell.edu/wex/mens_rea) did seem necessary for maintaining good incentives, at least in some contexts. The law needs to distinguish between accidentally hitting a pedestrian in one's car ("manslaughter") from premeditated killing ("first-degree murder"), because traffic accidents are significantly less disincentivizable than offing one's enemies. (Anyone who drives at all is probably going to have unlucky analogues in nearby possible worlds who are guilty of vehicular manslaughter.) The manslaughter case was simpler than misinformation-that-moves-resources,[^manslaughter-disanalogy] and it might not be _easy_ for the court to determine "intent", but I didn't see what would reverse the extremely weak principle that intent matters, sometimes.
+I replied that the concept of [_mens rea_](https://www.law.cornell.edu/wex/mens_rea) did seem necessary for maintaining good incentives, at least in some contexts.
The law needs to distinguish accidentally hitting a pedestrian in one's car ("manslaughter") from premeditated killing ("first-degree murder"), because traffic accidents are significantly less disincentivizable than offing one's enemies. (Anyone who drives at all is probably going to have unlucky analogues in nearby possible worlds who are guilty of vehicular manslaughter.) The manslaughter case was simpler than misinformation-that-moves-resources,[^manslaughter-disanalogy] and it might not be _easy_ for the court to determine "intent", but I didn't see what would reverse the weak principle that intent matters, sometimes.

[^manslaughter-disanalogy]: For one extremely important disanalogy, perps don't _gain_ from committing manslaughter.

Ben replied that what mattered in the determination of manslaughter _vs._ murder was whether there was long-horizon optimization power towards the outcome of someone's death; it wasn't about what sentiments the killer rehearsed in their working memory.

-On a phone call, Michael made an analogy between EA and Catholicism. The Pope was fraudulent, because the legitimacy of the Pope's position (and his claims to power and resources) rested on the pretense that he had a direct relationship with God, which wasn't true, and the Pope knew (he _had_ to know, at some level) that it asn't true. (I agreed that this usage of _fraud_ made sense to me.) In Michael's view, Ben's charges against GiveWell were similar: GiveWell's legitimacy rested on the pretense that they were making decisions based on numbers, and they [had to know at some level](https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/) that they weren't doing that.
+On a phone call, Michael made an analogy between EA and Catholicism. The Pope was fraudulent, because the legitimacy of the Pope's position (and his claims to power and resources) rested on the pretense that he had a direct relationship with God, which wasn't true, and the Pope had to know on some level that it wasn't true. (I agreed that this usage of _fraud_ made sense to me.) In Michael's view, Ben's charges against GiveWell were similar: GiveWell's legitimacy rested on the pretense that they were making decisions based on numbers, and they [had to know at some level](https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/) that they weren't doing that.

------

-Ruby wrote a document about ways in which one's speech could harm people, which was discussed in the comments of a draft _Less Wrong_ post by some of our posse members and some of the _Less Wrong_ mods. (The draft was hidden, but the API apparently didn't filter out comments on hidden posts, and the thread was visible on the third-party [GreaterWrong](https://www.greaterwrong.com/) site; I [filed a bug](https://github.com/LessWrong2/Lesswrong2/issues/2161).)
+Ruby wrote a document about ways in which one's speech could harm people, which was discussed in the comments of a draft _Less Wrong_ post by some of our posse members and some of the _Less Wrong_ mods. (The draft was hidden, but the API apparently didn't filter out comments on hidden posts, and the thread was visible on the third-party [GreaterWrong](https://www.greaterwrong.com/) site; I [filed a bug](https://github.com/ForumMagnum/ForumMagnum/issues/2161).)

Ben wrote:

@@ -235,7 +219,7 @@ Ray Arnold (another _Less Wrong_ mod) replied:

Even with the qualifier, I still think this deserves a "(!!)".
-Jessica pointed this out as a step towards discussing the real problem (Subject: "progress towards discussing the real thing??"). She elaborated in the secret thread: now that the "EA" scene was adjacent to real-world money and power, people were incentivized to protect their reputations (and beliefs related to their reputations) in anti-epistemic ways, in a way that they wouldn't if the scene was still just a philosophy club, catalyzing a shift of norms from "that which can be destroyed by the truth, should be" towards protecting feelings—where "protecting feelings" was actually about protecting power. The fact that the scene was allocating billions of dollars made it strictly _more_ important for public discussions to reach the truth, compared to philosophy club—but it also increased the likelihood of obfuscatory action, which philosophy-club norms (like "assume good faith") didn't account for. We might need to extend philosophy-club norms to take into account the possibility of adversarial action: there's a reason that courts of law don't assume good faith. We didn't want to disproportionately _punish_ people for getting caught up in obfuscatory patterns—that would just increase the incentive to obfuscate—but we did need some way to reveal what was going on. +Jessica pointed this out as a step towards discussing the real problem (Subject: "progress towards discussing the real thing??"). She elaborated in the secret thread: now that the "EA" scene was adjacent to real-world money and power, people were incentivized to protect their reputations (and beliefs related to their reputations) in anti-epistemic ways, in a way that they wouldn't if the scene was still just a philosophy club, catalyzing a shift of norms from "that which can be destroyed by the truth, should be" towards protecting feelings—where "protecting feelings" was actually about protecting power. The fact that the scene was allocating billions of dollars made it strictly _more_ important for public discussions to reach the truth, compared to philosophy club—but it also increased the likelihood of obfuscatory behavior which philosophy-club norms (like "assume good faith") didn't account for. We might need to extend philosophy-club norms to take into account the possibility of adversarial action: there's a reason that courts of law don't assume good faith. We didn't want to disproportionately punish people for getting caught up in obfuscatory patterns—that would just increase the incentive to obfuscate—but we did need some way to reveal what was going on. In email, Jessica acknowledged that Ray had a point that it was confusing to use court-inspired language if we didn't intend to blame and punish people. Michael said that court language was our way to communicate, "You don't have the option of non-engagement with the complaints that are being made." (Courts can _summon_ people; you can't ignore a court summons the way you can ignore ordinary critics.) @@ -243,9 +227,9 @@ Michael said that we should also develop skill in using social-justicey blame la I was pretty horrified by the extent to which _Less Wrong_ moderators (!!) seemed to be explicitly defending "protect feelings" norms. Previously, I had mostly been seeing the present struggle through the lens of my idiosyncratic Something to Protect, as a simple matter of Bay Area political correctness; I was happy to have Michael/Ben/Jessica as allies, but I wasn't _seeing_ the Blight as a unified problem. Now ... I was seeing _something_. 
-An in-person meeting was arranged on 23 July 2019 at the _Less Wrong_ office, with Ben, Jessica, me, and most of the _Less Wrong_ team (Ray, Ruby, Oliver Habryka, Vaniver, Jim Babcock). I don't have notes and don't really remember what was discussed in enough detail to recount it.[^memory] I ended up crying at one point and left the room for a while. +An in-person meeting was arranged on 23 July 2019 at the _Less Wrong_ office, with Ben, Jessica, me, and most of the _Less Wrong_ team (Ray, Ruby, Oliver Habryka, Vaniver, Jim Babcock). I don't have notes and don't really remember what was discussed in enough detail to faithfully recount it.[^memory] I ended up crying at one point and left the room for a while. -[^memory]: An advantage of the important parts of my life taking place on the internet is that I have _logs_ of the important things; I'm only able to tell this Whole Dumb Story with as much fidelity as I am, because for most of it, I can go back and read the emails and chatlogs from the time. Now that [audio transcription has fallen to AI](https://openai.com/blog/whisper/), should I be recording more real-life conversations?? In this case of this meeting, supposedly one of the _Less Wrong_ guys was recording, but no one had it when I asked in October 2022. +[^memory]: An advantage of mostly living on the internet is that I have _logs_ of the important things; I'm only able to tell this Whole Dumb Story with as much fidelity as I am, because for most of it, I can go back and read the emails and chatlogs from the time. Now that [audio transcription has fallen to AI](https://openai.com/blog/whisper/), should I be recording more real-life conversations?? In the case of this meeting, supposedly one of the _Less Wrong_ guys was recording, but no one had it when I asked in October 2022. The next day, I asked Ben and Jessica for their takeaways via email (Subject: "peace talks outcome?"). Jessica said that I was a "helpful emotionally expressive and articulate victim", that there seemed to be a consensus that it would be better if people like me could be warned somehow that _Less Wrong_ wasn't doing the general sanity-maximization thing anymore. (Because community leaders were willing to sacrifice, for example, ability to discuss non-AI heresies, in order to focus on sanity about AI in particular while maintaining enough mainstream acceptability and power.) @@ -255,7 +239,7 @@ I said that for me and my selfish perspective, the main outcome was finally shat Michael's reputation in "the community", already not what it once was, continued to be debased even further. -The local community center, the Berkeley REACH,[^reach-acronym-expansion] was conducting an investigation as to whether to exclude Michael (which was mostly moot, as he didn't live in the Bay Area anyway). When I heard that the subcommittee conducting the investigation was "very close to releasing a statement", I wrote to them: +The local community center, the Berkeley REACH,[^reach-acronym-expansion] was conducting an investigation as to whether to exclude Michael (which was mostly moot, as he didn't live in the Bay Area anyway). 
When I heard that the committee conducting the investigation was "very close to releasing a statement", I wrote to them: [^reach-acronym-expansion]: Rationality and Effective Altruism Community Hub @@ -273,13 +257,11 @@ We arranged a call, during which I angrily testified that Michael was no threat Concurrently, I got into an argument with Kelsey Piper about Michael, after she had written on Discord that her "impression of _Vassar_'s threatening schism is that it's fundamentally about Vassar threatening to stir shit up until people stop socially excluding him for his bad behavior". I didn't think that was what the schism was about (Subject: "Michael Vassar and the theory of optimal gossip"). -In the course of litigating Michael's motivations (the details of which are not interesting enough to summarize), Kelsey mentioned that she thought Michael had done immense harm to me: that my models of the world and ability to reason were worse than they were a year ago. I thanked her for the concern, and asked if she could be more specific. - -She said she was referring to my ability to predict consensus and what other people believe. I expected arguments to be convincing to other people which the other people found, not just not convincing, but also so obviously not convincing that it was confusing I bothered raising them. I believed things to be in obvious violation of widespread agreement, when everyone else thought it wasn't. My shocked indignation at other people's behavior indicated a poor model of social reality. +In the course of litigating Michael's motivations (the details of which are not interesting enough to summarize here), Kelsey mentioned that she thought Michael had done immense harm to me: that my models of the world and ability to reason were worse than they were a year ago. I thanked her for the concern, and asked if she could be more specific. -I considered this an insightful observation about a way in which I'm socially retarded. +She said she was referring to my ability to predict consensus and what other people believe. I expected arguments to be convincing to other people which the other people found, not just not convincing, but so obviously not convincing that it was confusing I bothered raising them. I believed things to be in obvious violation of widespread agreement, when everyone else thought it wasn't. My shocked indignation at other people's behavior indicated a poor model of social reality. -I had had [similar](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/) [problems](http://zackmdavis.net/blog/2012/07/trying-to-buy-a-lamp/) [with](http://zackmdavis.net/blog/2012/12/draft-of-a-letter-to-a-former-teacher-which-i-did-not-send-because-doing-so-would-be-a-bad-idea/) [school](http://zackmdavis.net/blog/2013/03/strategy-overhaul/). We're told that the purpose of school is education (to the extent that most people think of _school_ and _education_ as synonyms), but the consensus behavior is "sit in lectures and trade assignments for grades." Faced with what I saw as a contradiction between the consensus narrative and the consensus behavior, I would assume that the narrative was the "correct" version, and so I spent a lot of time trying to start conversations about math with everyone and then getting outraged and indignant when they'd say, "What class is this for?" Math isn't for classes; it's the other way around, right? +I considered this an insightful observation about a way in which I'm socially retarded. 
I had had [similar](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/) [problems](http://zackmdavis.net/blog/2012/07/trying-to-buy-a-lamp/) [with](http://zackmdavis.net/blog/2012/12/draft-of-a-letter-to-a-former-teacher-which-i-did-not-send-because-doing-so-would-be-a-bad-idea/) [school](http://zackmdavis.net/blog/2013/03/strategy-overhaul/). We're told that the purpose of school is education (to the extent that most people think of _school_ and _education_ as synonyms), but the consensus behavior is "sit in lectures and trade assignments for grades." Faced with what I saw as a contradiction between the consensus narrative and the consensus behavior, I would assume that the narrative was the "correct" version, and so I spent a lot of time trying to start conversations about math with everyone and then getting outraged and indignant when they'd say, "What class is this for?" Math isn't for classes; it's the other way around, right? Empirically, not right! But I had to resolve the contradiction between narrative and reality somehow, and if my choices were "People are [mistakenly](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) failing to live up to the narrative" and "[Everybody knows](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) the narrative is a lie; it would be crazy to expect people to live up to it", the former had been more appealing. @@ -287,7 +269,7 @@ It was the same thing here. Kelsey said that it was completely predictable that [^statement]: I thought it was odd that Kelsey seemed to think the issue was that me and my allies were pressuring Yudkowsky to make a public statement, which he never does. From our perspective, the issue was that he _had_ made a statement, and it was wrong. -Kelsey seemed to be taking it as obvious that Eliezer Yudkowsky's public behavior was optimized to respond to the possibility of political attacks by people who hate him anyway, and not optimized to respond to the actuality of thousands of words of careful arguments appealing to his own writings from ten years ago. Very well. Maybe it _was_ obvious. But that being the case, I had no reason to care what Eliezer Yudkowsky says, because not-provoking-SneerClub isn't truth-tracking, and careful arguments are. This was a huge surprise _to me_, even if Kelsey knew better. +Kelsey seemed to be taking it as obvious that Eliezer Yudkowsky's public behavior was optimized to respond to the possibility of political attacks by people who hate him anyway, and not optimized to respond to the actuality of thousands of words of careful arguments appealing to his own writings from ten years ago. Very well. Maybe it _was_ obvious. But that being the case, I had no reason to care what Eliezer Yudkowsky said, because not-provoking-SneerClub isn't truth-tracking, and careful arguments are. This was a huge surprise _to me_, even if Kelsey knew better. What Kelsey saw as "Zack is losing his ability to model other people and I'm worried about him", I thought Ben and Jessica would see as "Zack is angry about living in [simulacrum level 3](http://benjaminrosshoffman.com/excerpts-from-a-larger-discussion-about-simulacra/) and we're worried about _everyone else_." @@ -301,13 +283,13 @@ When I mentioned this to Michael on Signal on 3 August 2019, he replied: In retrospect, I feel dumb for not guessing that Michael's mysterious ally was Michael himself. 
I count this kind of situation as another reason to be [annoyed at how norms protecting confidentiality](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#privacy-constraints) distort information; Kelsey apparently felt obligated to obfuscate any names connected to potential litigation, which led me to infer the existence of a nonexistent person (because I naïvely assumed that if Michael had been the person who threatened to sue, Kelsey would have said that). I can't say I never introduce this kind of distortion myself (for I, too, am bound by norms), but when I do, I feel dirty about it.

-As far as appropriate ethical standards go, I didn't particularly approve of silencing critics with lawsuit threats, even while I agreed with Michael that "the process is the punishment." I imagine that if the REACH wanted to publish a report about me, I would expect to defend myself in public, having faith that Speech would carry the day against a corrupt community center—or for that matter, against /r/SneerClub.
+As far as appropriate ethical standards go, I didn't approve of silencing critics with lawsuit threats, even while I agreed with Michael that "the process is the punishment." I imagine that if the REACH wanted to publish a report about me, I would expect to defend myself in public, having faith that the [beautiful weapon](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/) of my Speech would carry the day against a corrupt community center—or for that matter, against /r/SneerClub.

This is arguably one of my more religious traits. Michael and Kelsey are domain experts and probably know better.

-------

-I wanted to finish the memoir-post mourning the "rationalists", but I still felt psychologically constrained, bound by internal silencing-chains. So instead, I mostly turned to a combination of writing bitter and insulting comments whenever I saw someone praise the "rationalists" collectively, and—more philosophy blogging!
+I wanted to finish the memoir-post mourning the "rationalists", but I still felt psychologically constrained, bound by internal silencing-chains. So instead, I mostly turned to a combination of writing [bitter](https://www.greaterwrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare/comment/Nhv9KPte7d5jbtLBv) and [insulting](https://www.greaterwrong.com/posts/tkuknrjYCbaDoZEh5/could-we-solve-this-email-mess-if-we-all-moved-to-paid/comment/ZkreTspP599RBKsi7) [comments](https://www.greaterwrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review-posts-need-at-least-2-nominations/comment/d4RrEizzH85BdCPhE) whenever I saw someone praise the "rationalists" collectively, and—more philosophy blogging!

In August 2019's ["Schelling Categories, and Simple Membership Tests"](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), I explained a nuance that had only merited a passing mention in "... Boundaries?": sometimes you might want categories for different agents to _coordinate_ on, even at the cost of some statistical "fit." (This was of course generalized from a "pro-trans" argument that had occurred to me, [that self-identity is an easy Schelling point when different people disagree about what "gender" they perceive someone as](/2019/Oct/self-identity-is-a-schelling-point/).)
In September 2019's ["Heads I Win, Tails?—Never Heard of Her; Or, Selective Re

In October 2019's ["Algorithms of Deception!"](https://www.lesswrong.com/posts/fmA2GJwZzYtkrAKYJ/algorithms-of-deception), I exhibited some toy Python code modeling different kinds of deception. A function that faithfully passes observations it sees as input to another function lets the second function construct a well-calibrated probability distribution. But if the first function outright fabricates evidence, or selectively omits some evidence, or gerrymanders the categories by which it interprets its observations as evidence, the second function comes up with a worse (less accurate) probability distribution. (A minimal sketch of this kind of model appears below.)

-Also in October 2019, in ["Maybe Lying Doesn't Exist"](https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist), I replied to Scott Alexander's ["Against Lie Inflation"](https://slatestarcodex.com/2019/07/16/against-lie-inflation/), which was itself a generalized rebuke of Jessica's "The AI Timelines Scam". Scott thought Jessica was wrong to use language like "lie", "scam", _&c._ to describe someone being (purportedly) motivatedly wrong, but not necessarily _consciously_ lying.
+Also in October 2019, in ["Maybe Lying Doesn't Exist"](https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist), I replied to Scott Alexander's ["Against Lie Inflation"](https://slatestarcodex.com/2019/07/16/against-lie-inflation/), which was itself a generalized rebuke of Jessica's "The AI Timelines Scam". Scott thought Jessica was wrong to use language like "lie", "scam", _&c._ to describe someone being (purportedly) motivatedly wrong, but not necessarily consciously lying.

I was _furious_ when "Against Lie Inflation" came out. (Furious at what I perceived as hypocrisy, not because I particularly cared about defending Jessica's usage.) Oh, so _now_ Scott agreed that making language less useful is a problem?! But on further consideration, I realized Alexander actually was being consistent in admitting appeals-to-consequences as legitimate. In objecting to the expanded definition of "lying", Alexander was counting "everyone is angrier" (because of more frequent lying-accusations) as a cost. Whereas on my philosophy, that wasn't a legitimate cost. (If everyone _is_ lying, maybe people _should_ be angry!)

@@ -333,21 +315,31 @@ I continued to note signs of contemporary Yudkowsky not being the same author wh

> I am actively hostile to neoreaction and the alt-right, routinely block such people from commenting on my Twitter feed, and make it clear that I do not welcome support from those quarters. Anyone insinuating otherwise is uninformed, or deceptive.

-[I pointed out that](https://twitter.com/zackmdavis/status/1164259164819845120) the people who smear him as a right-wing Bad Guy do so _in order to_ extract these kinds of statements of political alignment as concessions; his own timeless decision theory would seem to recommend ignoring them rather than paying even this small [Danegeld](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/).
+[I pointed out that](https://twitter.com/zackmdavis/status/1164259164819845120) the people who smear him as a right-wing Bad Guy do so in order to extract these kinds of statements of political alignment as concessions; his own timeless decision theory would seem to recommend ignoring them rather than paying even this small [Danegeld](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/).
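(To illustrate the kind of toy model described in the "Algorithms of Deception!" paragraph above: the following is a minimal hypothetical sketch of the idea, not the actual code from that post. The reporter/estimator names and the coin-flip setup are my own illustration of honest pass-through versus fabricated or selectively omitted evidence.)

```python
# Minimal hypothetical sketch (not the actual code from "Algorithms of
# Deception!"): a reporter function passes observations to an estimator;
# an honest reporter leaves the estimator's Bayesian update calibrated,
# while fabricating or selectively omitting evidence degrades it.
# (A category-gerrymandering reporter that recodes ambiguous observations
# would be analogous.)
import random


def estimate_p_biased(reports):
    """Sequential Bayes update for P(coin is heads-biased) vs. P(coin is fair)."""
    p_biased = 0.5  # prior
    for flip in reports:
        likelihood_biased = 0.75 if flip == "H" else 0.25
        likelihood_fair = 0.5
        numerator = likelihood_biased * p_biased
        p_biased = numerator / (numerator + likelihood_fair * (1 - p_biased))
    return p_biased


def honest_reporter(observations):
    return list(observations)  # pass everything through faithfully


def fabricating_reporter(observations):
    return list(observations) + ["H"] * 5  # invent extra evidence


def omitting_reporter(observations):
    return [o for o in observations if o == "H"]  # hide inconvenient flips


if __name__ == "__main__":
    random.seed(0)
    flips = [random.choice("HT") for _ in range(20)]  # actually a fair coin
    for reporter in (honest_reporter, fabricating_reporter, omitting_reporter):
        print(f"{reporter.__name__}: P(biased) = {estimate_p_biased(reporter(flips)):.3f}")
```

On this setup, only the honest reporter keeps the estimate near calibration; the fabricating and omitting reporters push it toward the heads-biased hypothesis even though the flips came from a fair coin.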
-When I emailed the posse about it begging for Likes (Subject: "can't leave well enough alone"), Jessica said she didn't get my point. If people are falsely accusing you of something (in this case, of being a right-wing Bad Guy), isn't it helpful to point out that the accusation is actually false? It seemed like I was advocating for self-censorship on the grounds that speaking up helps the false accusers. But it also helps bystanders (by correcting the misapprehension), and hurts the false accusers (by demonstrating to bystanders that the accusers are making things up). By [linking to](https://twitter.com/zackmdavis/status/1164259289575251968) ["Kolmogorov Complicity"](http://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) in my replies, I seemed to be insinuating that Yudkowsky was under some sort of duress, but this wasn't spelled out: if Yudkowsky would face social punishment for advancing right-wing opinions, did that mean he was under such duress that saying anything at all would be helping the oppressors? +When I emailed the posse about it begging for Likes (Subject: "can't leave well enough alone"), Jessica said she didn't get my point. If people are falsely accusing you of something (in this case, of being a right-wing Bad Guy), isn't it helpful to point out that the accusation is actually false? It seemed like I was advocating for self-censorship on the grounds that speaking up helps the false accusers. But it also helps bystanders (by correcting the misapprehension), and hurts the false accusers (by demonstrating to bystanders that the accusers are making things up). By [linking to](https://twitter.com/zackmdavis/status/1164259289575251968) ["Kolmogorov Complicity and the Parable of Lightning"](http://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/) in my replies, I seemed to be insinuating that Yudkowsky was under some sort of duress, but this wasn't spelled out: if Yudkowsky would face social punishment for advancing right-wing opinions, did that mean he was under such duress that saying anything at all would be helping the oppressors? The paragraph from "Kolmogorov Complicity" that I was thinking of was (bolding mine): > Some other beliefs will be found to correlate heavily with lightning-heresy. Maybe atheists are more often lightning-heretics; maybe believers in global warming are too. The enemies of these groups will have a new cudgel to beat them with, "If you believers in global warming are so smart and scientific, how come so many of you believe in lightning, huh?" **Even the savvy Kolmogorovs within the global warming community will be forced to admit that their theory just seems to attract uniquely crappy people. It won't be very convincing.** Any position correlated with being truth-seeking and intelligent will be always on the retreat, having to forever apologize that so many members of their movement screw up the lightning question so badly. -I perceived a pattern where people who are in trouble with the orthodoxy feel an incentive to buy their own safety by denouncing _other_ heretics: not just disagreeing with the other heretics _because those other heresies are in fact mistaken_, which would be right and proper Discourse, but denouncing them ("actively hostile to") as a way of paying Danegeld. 
+I perceived a pattern where people who are in trouble with the orthodoxy feel an incentive to buy their own safety by denouncing other heretics: not just disagreeing with the other heretics because those other heresies are in fact mistaken, which would be right and proper Discourse, but denouncing them ("actively hostile to") as a way of paying Danegeld.
+
+Suppose there are five true heresies, but anyone who's on the record believing more than one gets burned as a witch. Then it's impossible to have a unified rationalist community, because people who want to talk about one heresy can't let themselves be seen in the company of people who believe another. That's why Scott Alexander couldn't get the philosophy-of-categorization right in full generality (even though his writings revealed an implicit understanding of the correct way,[^implicit-understanding] and he and I had a common enemy in the social-justice egregore). He couldn't afford to. He'd already spent his Overton budget [on anti-feminism](https://slatestarcodex.com/2015/01/01/untitled/).
+
+[^implicit-understanding]: As I had [explained to him earlier](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#noncentral-fallacy), Alexander's famous [post on the noncentral fallacy](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world)
+
+[TODO—stitch together the language here:
+
+Most of Alexander's examples focus on
+
+In ["Does the Glasgow Coma Scale exist? Do comas?"](https://slatestarcodex.com/2014/08/11/does-the-glasgow-coma-scale-exist-do-comas/) (published just three months before "... Not Man for the Categories"), Alexander
-Suppose there are five true heresies, but anyone who's on the record believing more than one gets burned as a witch. Then it's impossible to have a unified rationalist community, because people who want to talk about one heresy can't let themselves be seen in the company of people who believe another. That's why Scott Alexander couldn't get the philosophy-of-categorization right in full generality (even though he'd [written](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world) [exhaustively](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/) about the correct way, and he and I have a common enemy in the social-justice egregore): _he couldn't afford to_. He'd already [spent his Overton budget on anti-feminism](https://slatestarcodex.com/2015/01/01/untitled/).
+]
-Scott (and Yudkowsky and Anna and the rest of the Caliphate) seemed to accept this as an inevitable background fact of existence, like the weather. But I saw a Schelling point off in the distance where us witches stick together for Free Speech, and it was _awfully_ tempting to try to jump there. (Of course, it would be _better_ if there was a way to organize just the good witches, and exclude all the Actually Bad witches, but the [Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) on witch Badness made that hard to organize without falling back to the falling back to the one-heresy-per-thinker equilibrium.)
+Alexander (and Yudkowsky and Anna and the rest of the Caliphate) seemed to accept this as an inevitable background fact of existence, like the weather. But I saw a Schelling point off in the distance where us witches stick together for Free Speech, and it was tempting to try to jump there.
(It would probably be better if there were a way to organize just the good witches, and exclude all the Actually Bad witches, but the [Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) on witch Badness made that hard to organize without falling back to the one-heresy-per-thinker equilibrium.) -Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" _facts_.) I agreed that conflating political positions with facts would be bad (and that it would be bad if I were doing that without "intending" to). I wasn't interested in defending the "alt-right" (whatever that means) broadly. But I had _learned stuff_ from reading far-right authors (most notably Moldbug), and from talking with "Thomas". I was starting to appreciate [what Michael had said about "Less precise is more violent" back in April](#less-precise-is-more-violent) (when I was talking about criticizing "rationalists"). +Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" _facts_.) I agreed that conflating political positions with facts would be bad (and that it would be bad if I were doing that without "intending" to). I wasn't interested in defending the "alt-right" (whatever that means) broadly. But I had learned stuff from reading far-right authors (most notably Moldbug), and from talking with "Thomas". I was starting to appreciate [what Michael had said about "Less precise is more violent" back in April](#less-precise-is-more-violent) (when I was talking about criticizing "rationalists").

Jessica asked if my opinion would change depending on whether Yudkowsky thought neoreaction was intellectually worth engaging with. (Yudkowsky [had said years ago](https://www.lesswrong.com/posts/6qPextf9KyWLFJ53j/why-is-mencius-moldbug-so-popular-on-less-wrong-answer-he-s?commentId=TcLhiMk8BTp4vN3Zs) that Moldbug was low quality.) diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md index 7ab4411..4d5f1dc 100644 --- a/notes/memoir-sections.md +++ b/notes/memoir-sections.md @@ -1,36 +1,35 @@ -slotted TODO blocks— -- Eliezerfic fight conclusion -_ Keltham's section in dath ilan ancillary -_ objections and replies in dath ilan ancillary -_ kitchen knife section in dath ilan ancillary +first edit pass bookmark: "seemed to accept this as an inevitable background" blocks to fit somewhere— _ the hill he wants to die on (conclusion for "Zevi's Choice"??) -_ Tail vs. Bailey / Davis vs. Yudkowsky analogy (new block somewhere) +_ Tail vs. Bailey / Davis vs. Yudkowsky analogy (new block somewhere—or a separate dialogue post??) _ mention that "Not Man for the Categories" keeps getting cited -first edit pass bookmark: "In June 2019, I made" - pt. 3 edit tier— ✓ fullname Taylor and Hoffman at start of pt. 3 ✓ footnote clarifying that "Riley" and Sarah weren't core members of the group, despite being included on some emails? ✓ be more specific about Ben's anti-EA and Jessica's anti-MIRI things -_ Ben on "locally coherent coordination": use direct quotes for Ben's language—maybe rewrite in my own language (footnote?) as an understanding test -_ ask Sarah about context for "EA Has a Lying Problem"? 
+✓ weird that Kelsey thought the issue was that we were trying to get Yudkowsky to make a statement ✓ set context for Anna on first mention in the post ✓ more specific on "mostly pretty horrifying" and group conversation with the whole house ✓ cut words from the "Yes Requires" slapfight? +✓ cut words from "Social Reality" scuffle +✓ examples of "bitter and insulting" comments about rationalists +- Scott got comas right in the same year as "Categories" +----- +_ Ben on "locally coherent coordination": use direct quotes for Ben's language—maybe rewrite in my own language (footnote?) as an understanding test _ "Not the Incentives"—rewrite given that I'm not shielding Ray -_ cut many words from "Social Reality" scuffle +_ better explanation of MOPs in "Social Reality" scuffle _ better context on "scam" &c. earlier -_ meeting with Ray -_ Ben's "financial fraud don't inquire as to the conscious motives of the perp" claim may be false -_ later thoughts on jump to evaluation, translating between different groups' language +_ meeting with Ray? _ mention that I was miffed about "Boundaries?" not getting Curated, while one of Euk's animal posts did -_ examples of "bitter and insulting" comments about rationalists -_ cut words from descriptions of other posts! (if people want to read them, they can click through) -_ explicitly mention http://benjaminrosshoffman.com/bad-faith-behavior-not-feeling/ +_ establish usage of "coordination group" vs. "posse"? +_ LessWrong vs. GreaterWrong for comment links? +_ cut words from descriptions of other posts! (if people want to read them, they can click through ... but on review, these descriptions seem pretty reasonable?) +------- _ cut words from NRx denouncement Jessica discussion +_ later thoughts on jump to evaluation, translating between different groups' language +_ explicitly mention http://benjaminrosshoffman.com/bad-faith-behavior-not-feeling/ _ "I" statements _ we can go stronger than "I definitely don't think Yudkowsky _thinks of himself_ as having given up on Speech _in those words_" _ try to clarify Abram's categories view (Michael didn't get it) @@ -44,7 +43,7 @@ _ footnote explaining quibbles? (the first time I tried to write this, I hesitat _ "it was the same thing here"—most readers are not going to understand what I see as the obvious analogy _ first mention of Jack G. should introduce him properly _ link to protest flyer -✓ weird that Kelsey thought the issue was that we were trying to get Yudkowsky to make a statement + pt. 4 edit tier— _ mention Nick Bostrom email scandal (and his not appearing on the one-sentence CAIS statement) @@ -62,7 +61,6 @@ _ "deep causal structure" argument needs to be crystal clear, not sloopy _ it's a relevant detail whether the optimization is coming from Nate _ probably cut the vaccine polarization paragraphs? (overheard at a party is not great sourcing, even if technically admissible) _ elaborate on how 2007!Yudkowsky and 2021!Xu are saying the opposite things if you just take a plain-language reading and consider, not whether individual sentences can be interpreted as "true", but what kind of _optimization_ the text is doing to the behavior of receptive readers -_ Scott got comas right in the same year as "Categories" _ revise reply to Xu _ cite Earthling/postrat sneers _ cite postYud Tweet @@ -74,7 +72,36 @@ _ clarify that Keltham infers there are no mascochists, vs. 
Word of God _ "Doublethink" ref in Xu discussion should mention that Word of God Eliezerfic clarification that it's not about telling others _ https://www.greaterwrong.com/posts/vvc2MiZvWgMFaSbhx/book-review-the-bell-curve-by-charles-murray/comment/git7xaE2aHfSZyLzL -pt. 6 edit tier— +things to discuss with Michael/Ben/Jessica— +_ Anna on Paul Graham +_ compression of Yudkowsky thinking reasoning wasn't useful +_ Michael's SLAPP against REACH +_ Michael on creepy and crazy men +_ elided Sasha disaster + + +pt. 3–5 prereaders— +_ paid hostile prereader (first choice: April) +_ Iceman +_ Scott? (cursory notification) +_ Kelsey (what was that 1 year statute of limitations about??) +_ Steven Kaas +_ David Xu +_ Ray +_ Ruby +_ Teortaxes? (he might be interested) + + +------- + +later prereaders— +_ afford various medical procedures + +slotted TODO blocks for pt. 6 and dath ilan ancillary— +- Eliezerfic fight conclusion +_ Keltham's section in dath ilan ancillary +_ objections and replies in dath ilan ancillary +_ kitchen knife section in dath ilan ancillary dath ilan ancillary tier— _ Who are the 9 most important legislators called? @@ -86,11 +113,6 @@ _ "telling people things would be meddling" moral needs work; obvious objection _ "obligate" is only Word of God, right?—I should probably cite this -things to discuss with Michael/Ben/Jessica— -_ Anna on Paul Graham -_ compression of Yudkowsky thinking reasoning wasn't useful -_ Michael's SLAPP against REACH -_ Michael on creepy and crazy men ------ @@ -274,38 +296,13 @@ _ backlink only seen an escort once before (#confided-to-) _ backlink Yudkowsky's implicit political concession _ backlink "again grateful" for doctor's notes -pt. 1½ posterity tier (probably not getting published)— -_ "People": say something about the awkward racial/political dynamics of three of my four anecdotes being about black people? Salient because rare? Salient because of my NRx-pilling? -_ Re: on legitimacy and the entrepreneur; or, continuing the attempt to spread my sociopathic awakening onto Scott [pt. 2 somewhere] -_ include Wilhelm "Gender Czar" conversation? [pt. 2] -_ emailing Blanchard/Bailey/Hsu/Lawrence [pt. 2] - - - -terms to explain on first mention— -_ inpatient/outpatient -_ NRx -_ TERF? -_ Civilization (context of dath ilan) -_ Valinor (probably don't name it, actually) -_ "Caliphate" -_ "rationalist" -_ Center for Applied Rationality -_ MIRI -_ "egregore" -_ eliezera - - - - people to consult before publishing, for feedback or right of objection— _ Iceman _ Scott _ hostile prereader (April—if not, J. Beshir, Swimmer, someone else from Alicorner #drama) _ Kelsey (what was that 1 year statute of limitations about??) _ NRx Twitter bro -_ maybe SK (briefly about his name)? (the memoir might have the opposite problem (too long) from my hostile-shorthand Twitter snipes) -_ Megan (that poem could easily be about some other entomologist named Megan) ... I'm probably going to cut that §, though +_ Steven Kaas (briefly about his name)? (the memoir might have the opposite problem (too long) from my hostile-shorthand Twitter snipes) _ David Xu? (Is it OK to name him in his LW account?) _ afford various medical procedures _ Buck? 
(get the story about Michael being escorted from events) @@ -1413,13 +1410,6 @@ If you _have_ intent-to-inform and occasionally end up using your megaphone to s If you _don't_ have intent-to-inform, but make sure to never, ever say false things (because you know that "lying" is wrong, and think that as long as you haven't "lied", you're in the clear), but you don't feel like you have an obligation to acknowledge criticisms (for example, because you think you and your flunkies are the only real people in the world, and anyone who doesn't want to become one of your flunkies can be disdained as a "post-rat"), that's potentially a much worse situation, because the errors don't cancel. ----- - -bitter comments about rationalists— -https://www.greaterwrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review-posts-need-at-least-2-nominations/comment/d4RrEizzH85BdCPhE -https://www.lesswrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare?commentId=Nhv9KPte7d5jbtLBv -https://www.greaterwrong.com/posts/tkuknrjYCbaDoZEh5/could-we-solve-this-email-mess-if-we-all-moved-to-paid/comment/ZkreTspP599RBKsi7 - ------ https://trevorklee.substack.com/p/the-ftx-future-fund-needs-to-slow diff --git a/notes/memoir_wordcounts.csv b/notes/memoir_wordcounts.csv index f3c5bc9..7c12c92 100644 --- a/notes/memoir_wordcounts.csv +++ b/notes/memoir_wordcounts.csv @@ -551,5 +551,7 @@ 10/19/2023,118990,58 10/20/2023,118990,0 10/21/2023,119115,125 -10/22/2023,,0 -10/23/2023,, +10/22/2023,119115,0 +10/23/2023,118860,-255 +10/24/2023,118900,40 +10/25/2023,, diff --git a/notes/post_ideas.txt b/notes/post_ideas.txt index f49d5c0..12dd2bd 100644 --- a/notes/post_ideas.txt +++ b/notes/post_ideas.txt @@ -7,6 +7,9 @@ _ Hrunkner Unnerby and the Shallowness of Progress _ If Clarity Seems Like Death to Them (April 2019–January 2021) _ Agreeing With Stalin in Ways that Exhibit Generally Rationalist Principles (February 2021) _ Zevi's Choice (March 2021–April 2022) + +------- + _ On the Public Anti-Epistemology of dath ilan _ Standing Under the Same Sky (September–December 2022)