From: Zack M. Davis
Date: Tue, 31 Oct 2023 05:30:11 +0000 (-0700)
Subject: memoir: pt. 3 editing (coached edition)
X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=87b99b9792a15c42233e1cf7ea5ffbffb9f2de2b;p=Ultimately_Untrue_Thought.git

memoir: pt. 3 editing (coached edition)
---

diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md
index 93e86a9..23cbc86 100644
--- a/content/drafts/if-clarity-seems-like-death-to-them.md
+++ b/content/drafts/if-clarity-seems-like-death-to-them.md
@@ -29,7 +29,7 @@ Ben had previously worked at GiveWell and had written a lot about problems with

If there was a real problem, I didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But culture is the sum of lots and lots of little micro-actions by lots and lots of people. If your entire culture has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or that they don't think anything has changed, or that they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!" Any ideologue or religious person would agree with _that_. It's not feasible to litigate every petty epistemic crime in something someone said, and if you tried, someone who thought the culture was basically on track could accuse you of cherry-picking. If "culture" is a real thing at all—and it certainly seems to be—we are condemned to grasp it unclearly, relying on the brain's pattern-matching faculties to sum over thousands of little micro-actions as a [_gestalt_](https://en.wiktionary.org/wiki/gestalt), rather than having the kind of robust, precise representation a well-designed AI could compute plans with.

-Ben called the _gestalt_ he saw the Blight, after the rogue superintelligence in Vernor Vinge's _A Fire Upon the Deep_: the problem wasn't that people were getting dumber; it was that there was locally coherent coordination away from clarity and truth and towards coalition-building, which was validated by the official narrative in ways that gave it a huge tactical advantage; people were increasingly making decisions that were better explained by their political incentives rather than acting on coherent beliefs about the world—using and construing claims about facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts were admissible moves in the game.
+Ben called the _gestalt_ he saw the Blight, after the rogue superintelligence in Vernor Vinge's _A Fire Upon the Deep_. The problem wasn't that people were getting dumber; it was that they were increasingly behaving in ways that were better explained by their political incentives than by coherent beliefs about the world—using and construing claims about facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts were admissible moves in the game.
When I asked him for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of [MIRI executive director Nate Soares posting that he was "excited to see OpenAI joining the space"](https://intelligence.org/2015/12/11/openai-and-other-news/), despite the fact that [_no one_ who had been following the AI risk discourse](https://slatestarcodex.com/2015/12/17/should-ai-be-open/) [thought that OpenAI as originally announced was a good idea](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/). Nate [had privately clarified that](https://twitter.com/jessi_cata/status/1462454555925434375) the word "excited" wasn't necessarily meant positively, and in this case meant something more like "terrified."

@@ -59,7 +59,7 @@ I may have subconsciously pulled off an interesting political maneuver. In my fi

And as it happened, on 4 May 2019, Yudkowsky [re-Tweeted Colin Wright on the "univariate fallacy"](https://twitter.com/ESYudkowsky/status/1124751630937681922)—the point that group differences aren't a matter of any single variable—which was _sort of_ like the clarification I had been asking for. (Empirically, it made me feel less aggrieved.) Was I wrong to interpret this as [another "concession" to me](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#proton-concession)? (Again, notwithstanding that the whole mindset of extracting "concessions" was corrupt and not what our posse was trying to do.)

-Separately, on 30 April 2019, I visited the house where "Meredith" and her husband Mike and Kelsey Piper and some other people lived, which I'll call "Arcadia".[^named-houses] I said, essentially, [Oh man oh jeez](https://www.youtube.com/watch?v=q_eMvgNrQQE), Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're _the only ones backing me up_ on this _incredibly basic philosophy thing_ and I don't feel like I have anywhere else to _go_. This culminated in a group conversation with the entire house, which I found disturbing; everyone else seemed to agree on things that I thought were clearly contrary to the spirit of the Sequences.
+Separately, one evening in April, I visited the house where "Meredith" and her husband Mike and Kelsey Piper and some other people lived, which I'll call "Arcadia".[^named-houses] I said, essentially, [Oh man oh jeez](https://www.youtube.com/watch?v=q_eMvgNrQQE), Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're _the only ones backing me up_ on this _incredibly basic philosophy thing_ and I don't feel like I have anywhere else to _go_. This culminated in a group conversation with the entire house, which I found unsettling. (Unfortunately, I didn't take notes and don't remember the details except that I had a sense of everyone else seeming to agree on things that I thought were clearly contrary to the spirit of the Sequences.)

[^named-houses]: It was common practice in our subculture to name group houses. My apartment was "We'll Name It Later."
@@ -101,7 +101,7 @@ Anna said she didn't want to receive [cheerful price](https://www.lesswrong.com/

I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that whether or not I should cut my dick off would _become_ a political issue. That was new evidence about whether the original vision was wise! I wasn't particularly trying to do politics with my idiosyncratic special interest; I was trying to think seriously about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit a trivial epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, but couldn't correct the mistake even after it was pointed out, then the "rationalists" were _worse than useless_ to me. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I was part of a reference class that included AI researchers).

-Fundamentally, I was skeptical that you _could_ do consisently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky had described in ["Entangled Truths, Contagious Lies"](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) and ["Dark Side Epistemology"](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology): the need to lie about lying and cover up cover-ups propagates recursively. Anna in particular was unusually skillful at thinking things without saying them; I thought most people facing similar speech restrictions just get worse at thinking (plausibly[^plausibly] including Yudkowsky), and the problem gets worse as the group effort scales. (It's easier to recommend ["What You Can't Say"](http://www.paulgraham.com/say.html) to your housemates than to put it on a canonical reading list, for obvious reasons.) You can't optimize your group's culture for not-talking-about-atheism without also optimizing against understanding [Occam's razor](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor); you can't optimize for not questioning gender self-identity without also optimizing against understanding the [37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).
+Fundamentally, I was skeptical that you _could_ do consistently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky had described in ["Entangled Truths, Contagious Lies"](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) and ["Dark Side Epistemology"](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology): the need to lie about lying and cover up cover-ups propagates recursively. Anna in particular was unusually skillful at thinking things without saying them; I thought most people facing similar speech restrictions just get worse at thinking (plausibly[^plausibly] including Yudkowsky), and the problem gets worse as the group effort scales. (It's less risky to recommend ["What You Can't Say"](http://www.paulgraham.com/say.html) to your housemates than to put it on your 501(c)(3) organization's canonical reading list.)
You can't optimize your group's culture for not-talking-about-atheism without also optimizing against understanding [Occam's razor](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor); you can't optimize for not questioning gender self-identity without also optimizing against understanding the [37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).

[^plausibly]: I was still deep enough in my hero-worship that I wrote "plausibly". Today, I would not consider the adverb necessary.

@@ -129,7 +129,7 @@ Math and Wellness Month ended up being mostly a failure: the only math I ended u

In June 2019, I made [a linkpost on _Less Wrong_](https://www.lesswrong.com/posts/5nH5Qtax9ae8CQjZ9/tal-yarkoni-no-it-s-not-the-incentives-it-s-you) to Tal Yarkoni's ["No, It's Not The Incentives—It's you"](https://www.talyarkoni.org/blog/2018/10/02/no-its-not-the-incentives-its-you/), about how professional scientists should stop using career incentives as an excuse for doing poor science. It generated a lot of discussion.

-In an email (Subject: "LessWrong.com is dead to me"), Jessica identified _Less Wrong_ moderator [Ray Arnold's comments](https://www.greaterwrong.com/posts/5nH5Qtax9ae8CQjZ9/no-it-s-not-the-incentives-it-s-you/comment/vPj9E9iqXjnNdyhob) as her last straw:
+In an email (Subject: "LessWrong.com is dead to me"), Jessica identified _Less Wrong_ moderator [Ray Arnold's comments](https://www.greaterwrong.com/posts/5nH5Qtax9ae8CQjZ9/no-it-s-not-the-incentives-it-s-you/comment/vPj9E9iqXjnNdyhob) as her last straw. Jessica wrote:

> LessWrong.com is a place where, if the value of truth conflicts with the value of protecting elites' feelings and covering their asses, the second value will win.
>

@@ -189,7 +189,7 @@ I still sympathized with the pushback from Caliphate supporters against using "s

Ben replied, "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you?" Investigations of financial fraud focused on promises about money being places being false because the money was not in fact in those places, rather than the psychological minutiæ of the perp's exact motives.

-I replied that the concept of [_mens rea_](https://www.law.cornell.edu/wex/mens_rea) did seem necessary for maintaining good incentives, at least in some contexts. The law needs to distinguish between accidentally hitting a pedestrian in one's car ("manslaughter") from premeditated killing ("first-degree murder"), because traffic accidents are significantly less disincentivizable than offing one's enemies. (Anyone who drives at all is probably going to have unlucky analogues in nearby possible worlds who are guilty of vehicular manslaughter.) The manslaughter case was simpler than misinformation-that-moves-resources,[^manslaughter-disanalogy] and it might not be _easy_ for the court to determine "intent", but I didn't see what would reverse the weak principle that intent matters, sometimes.
+I replied that the concept of [_mens rea_](https://www.law.cornell.edu/wex/mens_rea) did seem necessary for maintaining good incentives, at least in some contexts. The law needs to distinguish accidentally hitting a pedestrian in one's car ("manslaughter") from premeditated killing ("first-degree murder"), because traffic accidents are significantly less disincentivizable than offing one's enemies.
(Anyone who drives at all is taking on some nonzero risk of committing vehicular manslaughter.) The manslaughter case was simpler than misinformation-that-moves-resources,[^manslaughter-disanalogy] and it might not be _easy_ for the court to determine "intent", but I didn't see what would reverse the weak principle that intent matters, sometimes.

[^manslaughter-disanalogy]: For one extremely important disanalogy, perps don't _gain_ from committing manslaughter.

@@ -229,7 +229,7 @@ I was pretty horrified by the extent to which _Less Wrong_ moderators (!!) seeme

An in-person meeting was arranged on 23 July 2019 at the _Less Wrong_ office, with Ben, Jessica, me, and most of the _Less Wrong_ team (Ray, Ruby, Oliver Habryka, Vaniver, Jim Babcock). I don't have notes and don't really remember what was discussed in enough detail to faithfully recount it.[^memory] I ended up crying at one point and left the room for a while.

-[^memory]: An advantage of mostly living on the internet is that I have _logs_ of the important things; I'm only able to tell this Whole Dumb Story with as much fidelity as I am, because for most of it, I can go back and read the emails and chatlogs from the time. Now that [audio transcription has fallen to AI](https://openai.com/blog/whisper/), should I be recording more real-life conversations?? In the case of this meeting, supposedly one of the _Less Wrong_ guys was recording, but no one had it when I asked in October 2022.
+[^memory]: An advantage of mostly living on the internet is that I have logs of the important things. I'm only able to tell this Whole Dumb Story with as much fidelity as I am, because for most of it, I can go back and read the emails and chatlogs from the time. Now that [audio transcription has fallen to AI](https://openai.com/blog/whisper/), maybe I should be recording more real-life conversations? In the case of this meeting, supposedly one of the _Less Wrong_ guys was recording, but no one had it when I asked in October 2022.

The next day, I asked Ben and Jessica for their takeaways via email (Subject: "peace talks outcome?"). Jessica said that I was a "helpful emotionally expressive and articulate victim", that there seemed to be a consensus that it would be better if people like me could be warned somehow that _Less Wrong_ wasn't doing the general sanity-maximization thing anymore. (Because community leaders were willing to sacrifice, for example, ability to discuss non-AI heresies, in order to focus on sanity about AI in particular while maintaining enough mainstream acceptability and power.)

@@ -265,9 +265,9 @@ I considered this an insightful observation about a way in which I'm socially re

Empirically, not right! But I had to resolve the contradiction between narrative and reality somehow, and if my choices were "People are [mistakenly](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) failing to live up to the narrative" and "[Everybody knows](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) the narrative is a lie; it would be crazy to expect people to live up to it", the former had been more appealing.

-It was the same thing here. Kelsey said that it was completely predictable that Yudkowsky wouldn't make a public statement, even one as uncontroversial as "category boundaries should be drawn for epistemic and not instrumental reasons", because his experience of public statements was that they'd be taken out of context and used against MIRI by the likes of /r/SneerClub. This wasn't an update at all.
(Everyone at "Arcadia" had agreed, in the house discussion on 30 April.) Vassar's insistence that Eliezer be expected to do something that he obviously was never going to do had caused me to be confused and surprised by reality.[^statement] +It was the same thing here. Kelsey said that it was completely predictable that Yudkowsky wouldn't make a public statement, even one as uncontroversial as "category boundaries should be drawn for epistemic and not instrumental reasons", because his experience of public statements was that they'd be taken out of context and used against MIRI by the likes of /r/SneerClub. This wasn't an update at all. (Everyone at "Arcadia" had agreed, in the house discussion in April.) Vassar's insistence that Eliezer be expected to do something that he obviously was never going to do had caused me to be confused and surprised by reality.[^statement] -[^statement]: I thought it was odd that Kelsey seemed to think the issue was that me and my allies were pressuring Yudkowsky to make a public statement, which he never does. From our perspective, the issue was that he _had_ made a statement, and it was wrong. +[^statement]: I thought it was odd that Kelsey seemed to think the issue was that me and my allies were pressuring Yudkowsky to make a public statement, which he supposedly never does. From our perspective, the issue was that he _had_ made a statement, and it was wrong. Kelsey seemed to be taking it as obvious that Eliezer Yudkowsky's public behavior was optimized to respond to the possibility of political attacks by people who hate him anyway, and not optimized to respond to the actuality of thousands of words of careful arguments appealing to his own writings from ten years ago. Very well. Maybe it _was_ obvious. But that being the case, I had no reason to care what Eliezer Yudkowsky said, because not-provoking-SneerClub isn't truth-tracking, and careful arguments are. This was a huge surprise _to me_, even if Kelsey knew better. @@ -405,7 +405,7 @@ Appreciation of this obvious normative ideal seems strikingly absent from Yudkow The "Reducing Negativity" post also warns against the failure mode of attempted "author telepathy": attributing bad motives to authors and treating those attributions as fact without accounting for uncertainty or distinguishing observations from inferences. I should be explicit, then: when I say negative things about Yudkowsky's state of mind, like it's "as if he's given up on the idea that reasoning in public is useful or possible", that's a probabilistic inference, not a certain observation. -But I think making probabilistic inferences is ... fine? The sentence "Credibly helpful unsolicited criticism should be delivered in private" sure does look to me like text that's likely to have been generated by a state of mind that doesn't believe that reasoning in public is useful or possible.[^criticism-inference] Someone who did believe in public reason would have noticed that criticism has information content whose public benefits might outweigh its potential to harm an author's reputation or feelings. If you think I'm getting this inference wrong, feel free to let me _and other readers_ know why in the comments. +But I think making probabilistic inferences is ... fine? 
The sentence "Credibly helpful unsolicited criticism should be delivered in private" sure does look to me like text that's likely to have been generated by a state of mind that doesn't believe that reasoning in public is useful or possible.[^criticism-inference] I think that someone who did believe in public reason would have noticed that criticism has information content whose public benefits might outweigh its potential to harm an author's reputation or feelings. If you think I'm getting this inference wrong, feel free to let me _and other readers_ know why in the comments. [^criticism-inference]: More formally, I'm claiming that the [likelihood ratio](https://arbital.com/p/likelihood_ratio/) P(wrote that sentence|doesn't believe in public reason)/P(wrote that sentence|does believe in public reason) is greater than one. @@ -435,7 +435,7 @@ Ben replied that it didn't seem like it was clear to me that I was a victim of s I said I would bite that bullet: yes! Yes, I was trying to figure out whether I was being fair to my abusers, and it was an important question to get right! "Other people's lack of standards harmed me, therefore I don't need to hold myself to standards in my response because I have [extenuating circumstances](https://www.lesswrong.com/posts/XYrcTJFJoYKX2DxNL/extenuating-circumstances)" would be a lame excuse. -This seemed correlated with the recurring stalemated disagreement within our coordination group, where Michael/Ben/Jessica would say, "Fraud, if the word ever meant anything", and while I agreed that they were pointing to an important pattern of false-representations-optimized-to-move-resources, I was still sympathetic to the Caliphate-defender's reply that this usage of "fraud" was motte-and-baileying between different senses of _fraud_. (Most people would say that the things we were alleging MIRI and CfAR had done wrong were qualitatively different from the things Enron and Bernie Madoff had done wrong.) I wanted to do _more work_ to formulate a more precise theory of the psychology of deception to describe exactly how things were messed up a way that wouldn't be susceptible to the motte-and-bailey charge. +This seemed correlated with the recurring stalemated disagreement within our coordination group, where Michael/Ben/Jessica would say, "Fraud, if the word ever meant anything", and while I agreed that they were pointing to an important pattern of false representations optimized to move resources, I was still sympathetic to the Caliphate-defender's reply that this usage of "fraud" was motte-and-baileying between different senses of _fraud_. (Most people would say that the things we were alleging MIRI and CfAR had done wrong were qualitatively different from the things Enron and Bernie Madoff had done wrong.) I wanted to do _more work_ to formulate a more precise theory of the psychology of deception to describe exactly how things were messed up a way that wouldn't be susceptible to the motte-and-bailey charge. ------- @@ -447,7 +447,7 @@ The main relevance of this incident to my Whole Dumb Story is that Ziz's memoir -------- -I had an interesting interaction with [Somni](https://somnilogical.tumblr.com/), one of the "Meeker Four"—presumably out on bail at this time?—on 12 December 2019. +I had an interesting interaction with [Somni](https://somnilogical.tumblr.com/), one of the "Meeker Four"—presumably out on bail at this time?—on Discord on 12 December 2019. 
I told her, from a certain perspective, it's surprising that she spent so much time complaining about CfAR, Anna Salamon, Kelsey Piper, _&c._, but _I_ seemed to get along fine with her—because naïvely, one would think that my views were so much worse. Was I getting a pity pass because she thought false consciousness was causing me to act against my own transfem class interests? Or what?

@@ -465,7 +465,7 @@ I had a phone call with Michael in which he took issue with Anna having describe

> I said if they were going to defend a right to be attacking me on some level, and treat fighting back as new aggression and cause to escalate, I would not at any point back down, and if our conflicting definitions of the ground state where no further retaliation was necessary meant we were consigned to a runaway positive feedback loop of revenge, so be it. And if that was true, we might as well try to kill each other right then and there.

- Talking about murder hypothetically as the logical game-theoretic consequence of a revenge spiral isn't the same thing as directly threatening to kill someone. I wasn't sure what exact words Anna had used in her alleged paraphrase; Michael didn't remember the context when I asked him later.
+ Talking about murder hypothetically as the logical game-theoretic consequence of a revenge spiral isn't the same thing as directly threatening to kill someone. (In context, it's calling a bluff: Ziz is saying that if Gwen was asserting a right to mooch off Ziz, then they might as well kill each other; by _modus tollens_, if they don't kill each other, then Gwen's assertion wasn't serious.) I wasn't sure what exact words Anna had used in her alleged paraphrase; Michael didn't remember the context when I asked him later.

I told Michael that this made me think I might need to soul-search about having been complicit with injustice, but I couldn't clearly articulate why.

@@ -601,11 +601,11 @@ Or, I pointed out, (c) I had ceded the territory of the interior of my own mind

In January 2020, Michael told me that he had changed his mind about gender and the philosophy of language. We talked about it on the phone. He said that the philosophy articulated in ["A Human's Guide to Words"](https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb) was inadequate for politicized environments where our choice of ontology is constrained. If we didn't know how to coin a new third gender, or teach everyone the language of "clusters in high-dimensional configuration space", our actual choices for how to think about trans women were basically three: creepy men (the TERF narrative), crazy men (the medical model), or a protected class of actual woman.[^reasons-not-to-carve]

-[^reasons-not-to-carve]: I had identified three classes of reasons not to carve reality at the joints: [coordination (wanting everyone to use the same definitions)](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), wireheading (making the map look good, at the expense of it failing to reflect the territory), and war (sabotaging someone else's map to make them do what you want). This would fall under "coordination".
+[^reasons-not-to-carve]: I had identified three classes of reasons not to carve reality at the joints: [coordination (wanting everyone to use the same definitions)](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests), wireheading (making the map look good, at the expense of it failing to reflect the territory), and war (sabotaging someone else's map to make them do what you want). This would fall under "coordination" insofar as Michael's proposal was motivated by the need to use the same categories as everyone else. (Although you could also make a case for "war" insofar as the civil-rights model winning entailed that adherents of the TERF or medical models must lose.)

According to Michael, while "trans women are real women" was a lie (in the sense that he agreed that me and Jessica and Ziz were not part of the natural cluster of biological females), it was _also_ the case that "trans women are not real women" was a lie (in the sense that the "creepy men" and "crazy men" stories were wrong). "Trans women are women" could be true in the sense that truth is about processes that create true maps, such that we can choose the concepts that allow discourse and information-flow. If the "creepy men" and "crazy men" stories are a cause of silencing, then—under present conditions—we had to choose the "protected class" story in order for people like Ziz to not be silenced.

-My response (more vehemently when thinking on it a few hours later) was that this was a _garbage bullshit_ appeal to consequences. If I wasn't going to let Ray Arnold get away with "we are better at seeking truth when people feel Safe", I shouldn't let Michael get away with "we are better at seeking truth when people aren't Oppressed". Maybe the wider world was ontology-constrained to those three choices, but I was aspiring to higher nuance in my writing, and it seemed to be working pretty well.
+My response (more vehemently when thinking on it a few hours later) was that this was a _garbage bullshit_ appeal to consequences. If I wasn't going to let Ray Arnold get away with "we are better at seeking truth when people feel Safe", I shouldn't let Michael get away with "we are better at seeking truth when people aren't oppressed". Maybe the wider world was ontology-constrained to those three choices, but I was aspiring to higher nuance in my writing, and it seemed to be working pretty well.

"Thanks for being principled," he replied. (He had a few more sentences about the process _vs._ conclusion point being important to his revised-for-politics philosophy of language, but we didn't finish the debate.)

@@ -865,30 +865,30 @@ There it is! A clear _ex cathedra_ statement that gender categories are not an e

I wrote to Michael, Ben, Jessica, Sarah, and "Riley", thanking them for their support. After successfully bullying Scott and Eliezer into clarifying, I was no longer at war with the robot cult and feeling a lot better (Subject: "thank-you note (the end of the Category War)").

-I had a feeling, I added, that Ben might be disappointed with the thank-you note insofar as it could be read as me having been "bought off" rather than being fully on the side of clarity-creation. But not being at war actually made it emotionally easier to do clarity-creation writing. Now I would be able to do it in the spirit of "Here's what I think the thing is actually doing" rather than the spirit of "I hate you lying motherfuckers _so much_. [It, it—the fe—it, flame—flames.
Flames—on the side of my face.](https://www.youtube.com/watch?v=nrqxmQr-uto)"
+I had a feeling, I added, that Ben might be disappointed with the thank-you note insofar as it could be read as me having been "bought off" rather than being fully on the side of clarity-creation. But not being at war actually made it emotionally easier to do clarity-creation writing. Now I would be able to do it in a contemplative spirit of "Here's what I think the thing is actually doing" rather than in hatred with [flames on the side of my face](https://www.youtube.com/watch?v=nrqxmQr-uto&t=112s).

-----

-If this were an autobiography (which existed to tell my life story) rather than a topic-focused memoir (which exists because my life happens to contain this Whole Dumb Story which bears on matters of broader interest, even if my life would not otherwise be interesting), there's a dramatic episode that would fit here chronologically.
+There's a dramatic episode that would fit here chronologically if this were an autobiography (which existed to tell my life story), but since this is a topic-focused memoir (which exists because my life happens to contain this Whole Dumb Story which bears on matters of broader interest, even if my life would not otherwise be interesting), I don't want to spend more wordcount than is needed to briefly describe the essentials.

-I was charged by members of the "Vassarite" clique in New York with the duty of taking care of a mentally-ill person at my house on 18 December 2020. (We did not trust the ordinary psychiatric system to act in patients' interests.) I apparently did a poor job, and ended up saying something callous on the care team group chat after a stressful night, which led to a chaotic day on the nineteenth, and an ugly falling-out between me and the group. In the interests of brevity and the privacy of the person we were trying to help, I think it's better that I don't expend the wordcount to give you a play-by-play. The details aren't particularly of public interest.
+I was charged by members of the "Vassarite" clique with the duty of taking care of a mentally-ill person at my house on 18 December 2020. (We did not trust the ordinary psychiatric system to act in patients' interests.) I apparently did a poor job, and ended up saying something callous on the care team group chat after a stressful night, which led to a chaotic day on the nineteenth, and an ugly falling-out between me and the group. In the interests of brevity and the privacy of the person we were trying to help, I think it's better that I don't give you a play-by-play. The details aren't particularly of public interest.

My poor performance during this incident [weighs on my conscience](/2020/Dec/liability/) particularly because I had previously been in the position of being crazy and benefitting from the help of my friends (including many of the same people involved in this incident) rather than getting sent back to psychiatric prison ("hospital", they call it a "hospital"). Of all people, I had a special debt to "pay it forward", and one might have hoped that I would also have special skills, that remembering being on the receiving end of a psychiatric tripsitting operation would help me know what to do on the giving end. Neither of those panned out.
-Some might appeal to the proverb, "All's well that ends well", noting that the person in trouble ended up being okay, and that, while the stress contributed to me having a relapse of some of my own psychological problems on the night of the nineteenth and in the following weeks, I ended up being okay, too (at the cost of missing a week of my dayjob and giving up caffeine permanently). I am instead inclined to dwell on [another proverb](https://www.alessonislearned.com/), "A lesson is learned but the damage is irreversible."
+Some might appeal to the proverb, "All's well that ends well", noting that the person in trouble ended up being okay, and that, while the stress contributed to me having a relapse of some of my own psychological problems on the night of the nineteenth and in the following weeks, I ended up being okay, too. I am instead inclined to dwell on [another proverb](https://www.alessonislearned.com/), "A lesson is learned but the damage is irreversible."

-----

I published ["Unnatural Categories Are Optimized for Deception"](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception) in January 2021.

-I wrote back to Abram Demski regarding his comments from fourteen months before: on further thought, he was right. Even granting my point that evolution didn't figure out how to track probability and utility separately, as Abram had pointed out, the _fact_ that it didn't meant that not tracking it could be an effective AI design. Just because evolution takes shortcuts that human engineers wouldn't didn't mean shortcuts are "wrong". (Rather, there are laws governing which kinds of shortcuts _work_.)
+I wrote back to Abram Demski regarding his comments from fourteen months before: on further thought, he was right. Even granting my point that evolution didn't figure out how to track probability and utility separately, as Abram had pointed out, the fact that it didn't meant that not tracking it could be an effective AI design. Just because evolution takes shortcuts that human engineers wouldn't, that didn't mean shortcuts are "wrong". (Rather, there are laws governing which kinds of shortcuts work.)

-Abram was also right that it would be weird if reflective coherence was somehow impossible: the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'its own' code." In that light, it made sense to regard "have accurate beliefs" as _merely_ a convergent instrumental subgoal, rather than what rationality is about—as sacrilegious as that felt to type.
+Abram was also right that it would be weird if reflective coherence was somehow impossible: the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'its own' code." In that light, it made sense to regard "have accurate beliefs" as merely a convergent instrumental subgoal, rather than what rationality is about—as sacrilegious as that felt to type.

-And yet, somehow, "have accurate beliefs" seemed _more fundamental_ than other convergent instrumental subgoals like "seek power and resources". Could this be made precise? As a stab in the dark, was it possible that the [theorems on the ubiquity of power-seeking](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-robustly-instrumental-in-mdps) might generalize to a similar conclusion about "accuracy-seeking"? If it _didn't_, the reason why it didn't might explain why accuracy seems more fundamental.
+And yet, somehow, "have accurate beliefs" seemed more fundamental than other convergent instrumental subgoals like "seek power and resources". Could this be made precise? As a stab in the dark, was it possible that the [theorems on the ubiquity of power-seeking](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-robustly-instrumental-in-mdps) might generalize to a similar conclusion about "accuracy-seeking"? If it didn't, the reason why it didn't might explain why accuracy seems more fundamental.

------

-And really, that _should_ have been the end of the story. At the trifling cost of two years of my life, we finally got a clarification from Yudkowsky that you can't define the word _woman_ any way you like. I didn't think I was entitled to anything more than that. I was satisfied. If I hadn't been further provoked, I wouldn't have occasion to continue waging the robot-cult religious civil war.
+And really, that should have been the end of the story. At the trifling cost of two years of my life, we finally got a clarification from Yudkowsky that you can't define the word _woman_ any way you like. I didn't think I was entitled to anything more than that. I was satisfied. If I hadn't been further provoked, I wouldn't have occasion to continue waging the robot-cult religious civil war.

It turned out that I would have occasion to continue waging the robot-cult religious civil war.

(To be continued.)

diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md
index 55749b5..9ede283 100644
--- a/notes/memoir-sections.md
+++ b/notes/memoir-sections.md
@@ -20,17 +20,24 @@ pt. 3 edit tier—
✓ we can go stronger than "I definitely don't think Yudkowsky thinks of himself
✓ cut words from December 2019 blogging spree
✓ mention "Darkest Timeline" and Skyrms somewhere
-
-pt. 3 content fills—
+✓ "Not the Incentives"—rewrite given that I'm not shielding Ray
✓ the skeptical family friend's view
✓ in a footnote, defend the "cutting my dick off" rhetorical flourish
+✓ "admit that you are in fact adding a bit of autobiography to the memoir."
+✓ anyone who drives at all
+✓ clarify that can't remember details: "everyone else seemed to agree on things"
+✓ "It's not just talking hypothetically, it is specifically calling a bluff, the point of the hypothetical"
+✓ the history of civil rights
+✓ it's not obvious why you can't recommend "What You Can't Say"
+✓ Ben on "locally coherent coordination" don't unattributedly quote
+
+_ "it might matter timelessly" → there are people with AI chops who are PC (/2017/Jan/from-what-ive-tasted-of-desire/)
_ confusing people and ourselves about what the exact crime is
_ footnote explaining quibbles on clarification
-_ FTX validated Ben's view of EA!!
+_ FTX validated Ben's view of EA!! ("systematically conflating corruption, accumulation of dominance, and theft, with getting things done")
+_ "your failure to model social reality is believing people when they claim noble motives"
+_ hint at Vanessa being trans

-pt. 3 minor—
-✓ "Not the Incentives"—rewrite given that I'm not shielding Ray
-_ Ben on "locally coherent coordination": use direct quotes for Ben's language—maybe rewrite in my own language (footnote?) as an understanding test
_ do I have a better identifier than "Vassarite"?
_ maybe I do want to fill in a few more details about the Sasha disaster, conditional on what I end up writing regarding Scott's prosecution?—and conditional on my separate retro email—also the Zolpidem thing _ link to protest flyer @@ -50,6 +57,7 @@ _ mention Nick Bostrom email scandal (and his not appearing on the one-sentence _ revise and cut words from "bad faith" section since can link to "Assume Bad Faith" _ cut words from January 2020 Twitter exchange (after war criminal defenses) _ everyone *who matters* prefers to stay on the good side +_ if you only say good things about Republican candidates pt. 5 edit tier— _ quote specific exchange where I mentioned 10,000 words of philosophy that Scott was wrong—obviously the wrong play @@ -104,6 +112,9 @@ _ Keltham's section in dath ilan ancillary _ objections and replies in dath ilan ancillary _ kitchen knife section in dath ilan ancillary +pt. 6 edit tier— +_ coach suggests, "Arguably this doesn't count as punishment for the same reason we don't say we're "punishing" a rabid animal by euthanizing it: it's not about disincentivizing the behaviour." + dath ilan ancillary tier— _ Who are the 9 most important legislators called? _ collect Earth people sneers