From feaf38afe772ce2dfed26f4f881fdaaac23550c7 Mon Sep 17 00:00:00 2001
From: "M. Taylor Saotome-Westlake"
Date: Mon, 6 Mar 2023 11:15:26 -0800
Subject: [PATCH] memoir: "Unnatural Categories" publication

---
 .../if-clarity-seems-like-death-to-them.md | 34 +++++++++++--------
 1 file changed, 19 insertions(+), 15 deletions(-)

diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md
index eceb349..3638821 100644
--- a/content/drafts/if-clarity-seems-like-death-to-them.md
+++ b/content/drafts/if-clarity-seems-like-death-to-them.md
@@ -151,6 +151,8 @@ And I was like, I agree that I was unreasonably emotionally attached to that par
Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/), and [some probability/information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [later turned out to be super-relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break.

+In June 2019,
+
[TODO:
* I had posted a linkpost to "No, it's not The Incentives—it's You", which generated a lot of discussion, and Jessica (17 June) identified Ray's comments as the last straw.

@@ -330,17 +332,19 @@ _Good_ criticism is hard. _Accurately_ inferring authorial ["intent"](https://ww
On 3 November 2019, I received an interesting reply on my philosophy-of-categorization thesis from MIRI researcher Abram Demski. Abram asked: ideally, shouldn't all conceptual boundaries be drawn with appeal-to-consequences? Wasn't the problem just with bad (motivated, shortsighted) appeals to consequences?

Agents categorize in order to make decisions. The best classifier for an application depends on the costs and benefits. As a classic example, it's very important for evolved prey animals to avoid predators, so it makes sense for their predator-detection classifiers to be configured such that they jump away from every rustling in the bushes, even if it's usually not a predator.

-I had thought of the "false-positives are better than false-negatives when detecting predators" example as being about the limitations of evolution as an AI designer: messy evolved animal brains don't bother to track probability and utility separately the way a cleanly-designed AI could. As I had explained in "... Boundaries?", it made sense for _what_ variables you paid attention to, to be motivated by consequences. But _given_ the subspace that's relevant to your interests, you want to run an epistemically legitimate clustering algorithm on the data you see there, which depends on the data, not your values. The only reason value-dependent gerrymandered category boundaries seem like a good idea if you're not careful about philosophy is because it's _wireheading_. Ideal probabilistic beliefs shouldn't depend on consequences.
+I had thought of the "false-positives are better than false-negatives when detecting predators" example as being about the limitations of evolution as an AI designer: messy evolved animal brains don't bother to track probability and utility separately the way a cleanly-designed AI could. As I had explained in "... Boundaries?", it made sense for _what_ variables you paid attention to, to be motivated by consequences. But _given_ the subspace that's relevant to your interests, you want to run an "epistemically legitimate" clustering algorithm on the data you see there, which depends on the data, not your values. The only reason value-dependent gerrymandered category boundaries seem like a good idea if you're not careful about philosophy is because it's _wireheading_. Ideal probabilistic beliefs shouldn't depend on consequences.

-Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is _because_ probabilistic reasoning is broadly useful: epistemology can be _derived_ from instrumental concerns. He agreed that severe wireheading issues _potentially_ arise if you allow consequentialist concerns to affect your epistemics.
+Abram didn't think the issue was so clear-cut. Where do "probabilities" come from, in the first place? The reason we expect something like Bayesianism to be an attractor among self-improving agents is because probabilistic reasoning is broadly useful: epistemology can be _derived_ from instrumental concerns. He agreed that severe wireheading issues _potentially_ arise if you allow consequentialist concerns to affect your epistemics.

But the alternative view had its own problems. If your AI consists of a consequentialist module that optimizes for utility in the world, and an epistemic module that optimizes for the accuracy of its beliefs, that's _two_ agents, not one: how could that be reflectively coherent? You could, perhaps, bite the bullet here, for fear that consequentialism doesn't tile and that wireheading was inevitable. On this view, Abram explained, "Agency is an illusion which can only be maintained by crippling agents and giving them a split-brain architecture where an instrumental task-monkey does all the important stuff while an epistemic overseer supervises." Whether this view was ultimately tenable or not, this did show that trying to forbid appeals-to-consequences entirely led to strange places.

-I didn't immediately have an answer for Abram, but I was grateful for the engagement. (Abram was clearly addressing the real philosophical issues, and not just trying to mess with me the way almost everyone else in Berkeley was trying to mess with me.)
+I didn't immediately have an answer for Abram, but I was grateful for the engagement. (Abram was clearly addressing the real philosophical issues, and not just trying to mess with me in the sense that almost everyone else in Berkeley was trying to mess with me.)
+
+------

-Also in November 2019, I wrote to Ben about how I was still stuck on writing the grief-memoir. My _plan_ had been that it should have been possibly to tell the story of the Category War while glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly call them intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water) and not be a mentally-dominated cultist.
+Also in November 2019, I wrote to Ben about how I was still stuck on writing the grief-memoir. My _plan_ had been that it should have been possible to tell the story of the Category War while glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly call them intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water) and not be a mentally-dominated cultist.

-The reason it _should_ have been safe to write was because Explaining Things is Good. It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to carefully _tell the true story_ about why, as a matter of cause-and-effect, I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, _&c_."
+The reason it _should_ have been safe to write was because Explaining Things Is Good. It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to carefully _tell the true story_ about why, as a matter of cause-and-effect, I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, _&c_."

So why couldn't I write? Was it that I didn't know how to make "This is not a social attack" credible? Maybe because ... it wasn't true?? I was afraid that telling a story about our leader being intellectually dishonest was "the nuclear option" in a way that I couldn't credibly cancel with "But I'm just telling a true story about a thing that was important to me that actually happened" disclaimers. If you're slowly-but-surely gaining territory in a conventional war, _suddenly_ escalating to nukes seems pointlessly destructive. This metaphor is horribly non-normative ([arguing is not a punishment!](https://srconstantin.github.io/2018/12/15/argue-politics-with-your-best-friends.html) carefully telling a true story _about_ an argument is not a nuke!), but I didn't know how to make it stably go away.

@@ -350,7 +354,7 @@ Ben replied that it didn't seem like it was clear to me that I was a victim of s
I said I would bite that bullet: yes! Yes, I was trying to figure out whether I was being fair to my abusers, and it was an important question to get right! "Other people's lack of standards harmed me, therefore I don't need to hold myself to standards in my response because I have [extenuating circumstances](https://www.lesswrong.com/posts/XYrcTJFJoYKX2DxNL/extenuating-circumstances)" would be a _lame excuse_.
-(This seemed correlated with the recurring stalemated disagreement within our coordination group, where Michael/Ben/Jessica would say, "Fraud, if that word _ever_ meant anything", and while I agreed that they were pointing to an important way in which things were messed up, I was still sympathetic to the Caliphate-defender's reply that the Vassarite usage of "fraud" was motte-and-baileying between vastly different senses of _fraud_; I wanted to do _more work_ to formulate a _more precise theory_ of the psychology of deception to describe exactly how things are messed up a way that wouldn't be susceptible to the motte-and-bailey charge.)
+(This seemed correlated with the recurring stalemated disagreement within our coordination group, where Michael/Ben/Jessica would say, "Fraud, if that word _ever_ meant anything", and while I agreed that they were pointing to an important way in which things were messed up, I was still sympathetic to the Caliphate-defender's reply that this usage of "fraud" was motte-and-baileying between vastly different senses of _fraud_; I wanted to do _more work_ to formulate a _more precise theory_ of the psychology of deception to describe exactly how things are messed up in a way that wouldn't be susceptible to the motte-and-bailey charge.)

[TODO: Ziz's protest;

@@ -669,19 +673,19 @@ https://www.facebook.com/yudkowsky/posts/10158853851009228
_ex cathedra_ statement that gender categories are not an exception to the rule, only 1 year and 8 months after asking for it
]

-And really, that _should_ have been the end of the story. At the trifling cost of two years of my life, we finally got a clarification from Yudkowsky that you can't define the word _woman_ any way you like. I didn't think I was entitled to anything more than that. I was satisfied. I still published "Unnatural Categories Are Optimized for Deception" in January 2021, but if I hadn't been further provoked, I wouldn't have occasion to continue waging the robot-cult religious civil war.
+And really, that _should_ have been the end of the story. At the trifling cost of two years of my life, we finally got a clarification from Yudkowsky that you can't define the word _woman_ any way you like. I didn't think I was entitled to anything more than that. I was satisfied. If I hadn't been further provoked, I wouldn't have had occasion to continue waging the robot-cult religious civil war.

-[TODO: psychiatric disaster, breakup with Vassar group, this was really bad for me
-[As it is written](https://www.alessonislearned.com/), "A lesson is learned but the damage is irreversible."
-]

+I still published ["Unnatural Categories Are Optimized for Deception"](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception) in January 2021.

-[TODO: "Unnatural Categories Are Optimized for Deception"

+I wrote back to Abram Demski regarding his comments from fourteen months before: on further thought, he was right. Even granting my point that evolution didn't figure out how to track probability and utility separately, as Abram had pointed out, the _fact_ that it didn't meant that not tracking it could be an effective AI design. Just because evolution takes shortcuts that human engineers wouldn't didn't mean shortcuts are "wrong". (Rather, there are laws governing which kinds of shortcuts _work_.)
-Abram was right
+Abram was also right that it would be weird if reflective coherence was somehow impossible: the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'its own' code." In that light, it made sense to regard "have accurate beliefs" as _merely_ a convergent instrumental subgoal, rather than what rationality is about—as sacrilegious as that felt to type.

-the fact that it didn't means that not tracking it can be an effective AI design! Just because evolution takes shortcuts that human engineers wouldn't doesn't mean shortcuts are "wrong" (instead, there are laws governing which kinds of shortcuts work).

+And yet, somehow, "have accurate beliefs" seemed _more fundamental_ than other convergent instrumental subgoals like "seek power and resources". Could this be made precise? As a stab in the dark, was it possible that the [theorems on the ubiquity of power-seeking](https://www.lesswrong.com/posts/6DuJxY8X45Sco4bS2/seeking-power-is-often-robustly-instrumental-in-mdps) might generalize to a similar conclusion about "accuracy-seeking"? If it _didn't_, the reason why it didn't might explain why accuracy seems more fundamental.

-Embedded agency means that the AI shouldn't have to fundamentally reason differently about "rewriting code in some 'external' program" and "rewriting 'my own' code." In that light, it makes sense to regard "have accurate beliefs" as merely a convergent instrumental subgoal, rather than what rationality is about
-somehow accuracy seems more fundamental than power or resources ... could that be formalized?

+[TODO: psychiatric disaster, breakup with Vassar group, this was really bad for me
[As it is written](https://www.alessonislearned.com/), "A lesson is learned but the damage is irreversible." ]
+
+
--
2.17.1