From 7c27ac4be245fb9b3ac33d6d2f1cc9514dcae6ec Mon Sep 17 00:00:00 2001
From: "M. Taylor Saotome-Westlake"
Date: Fri, 25 Nov 2022 18:45:26 -0800
Subject: [PATCH] memoir: deep learning blues

---
 ...xhibit-generally-rationalist-principles.md | 69 +++++++++++--------
 1 file changed, 39 insertions(+), 30 deletions(-)

diff --git a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
index 43cfcec..8a2c902 100644
--- a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
+++ b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md
@@ -412,21 +412,20 @@ This seemed like a huge and surprising reversal from the position articulated in

But this potential unification seemed very dubious to me, especially if "actual" trans women were purported to be "at least 20% of the ones with penises" (!!) in some population. _After it's been pointed out_, it should be a pretty obvious hypothesis that "guy on the Extropians mailing list in 2004 who fantasizes about having a female but 'otherwise identical' copy of himself" and "guy in 2016 Berkeley who identifies as a trans woman" are the _same guy_.

-In October 2016, I wrote to Yudkowsky
-
-(under the [cheerful price protocol](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price))
-
- [TODO recap cont'd—
-* November 2018, "You're not standing in defense of truth ..."
-* I spend an absurd amount of effort correcting that, and he eventually clarified in
-
-* February 2021, "simplest and best proposal"
-
+ * October 2016, I wrote to Yudkowsky noting that he seemed to have made a massive update and asked to talk about it (for $1000, under the [cheerful price protocol](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price))
+ * because of the privacy rules I'm following in this document, can't confirm or deny whether he accepted
+ * November 2018, "hill of validity" Twitter thread
+ * with the help of Michael/Sarah/Ben/Jessica, I wrote to him multiple times trying to clarify
+ * I eventually wrote "Where to Draw the Boundaries?", which includes verbatim quotes explaining what's wrong with the "it is not a secret"
+ * we eventually got a clarification in September 2020
+ * I was satisfied, and then ...
+ * February 2021, "simplest and best proposal"
+ * But this is _still_ wrong, as explained in "Challenges"
 ]

-I _never_ expected to end up arguing about something so _trivial_ as the minutiae of pronoun conventions (which no one would care about if historical contingencies of the evolution of the English language hadn't made them a Schelling point and typographical attack surface for things people do care about). The conversation only ended up here after a series of derailings. At the start, I was _trying_ to say something substantive about the psychology of straight men who wish they were women.
+At the start, I _never_ expected to end up arguing about something so _trivial_ as the minutiae of pronoun conventions (which no one would care about if historical contingencies of the evolution of the English language hadn't made them a Schelling point and typographical attack surface for things people do care about). The conversation only ended up here after a series of derailings. I was _trying_ to say something substantive about the psychology of straight men who wish they were women.

At this point, the nature of the game is very clear. Yudkowsky wants to make sure he's on peaceful terms with the progressive _Zeitgeist_, subject to the constraint of not saying anything he knows to be false. Meanwhile, I want to actually make sense of what's actually going on in the world as regards to sex and gender, because _I need the correct answer to decide whether or not to cut my dick off_.

@@ -446,9 +445,9 @@ Accordingly, I tried the object-level good-faith argument thing _first_. I tried

(Obviously, if we're crossing the Rubicon of abandoning the norm of assuming good faith, it needs to be abandoned symmetrically. I _think_ I'm doing a _pretty good_ job of adhering to standards of intellectual conduct and being transparent about my motivations, but I'm definitely not perfect, and, unlike Yudkowsky, I'm not so absurdly mendaciously arrogant to claim "confidence in my own ability to independently invent everything important" (!) about my topics of interest. If Yudkowsky or anyone else thinks they _have a case_ based on my behavior that _I'm_ being culpably intellectually dishonest, they of course have my blessing and encouragement to post it for the audience to evaluate.)

-What makes all of this especially galling is the fact that _all of my heretical opinions are literally just Yudkowsky's opinions from the 'aughts!_ My whole thing about how changing sex isn't possible with existing technology because the category encompasses so many high-dimensional details? Not original to me! I [filled in a few technical details](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard), but again, this was _in the Sequences_ as ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). My thing about how you can't define concepts any way you want because there are mathematical laws governing which category boundaries compress your anticipated experiences? Not original to me! I [filled in](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [a few technical details](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), but [_we had a whole Sequence about this._](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)
+What makes all of this especially galling is the fact that _all of my heretical opinions are literally just Yudkowsky's opinions from the 'aughts!_ My whole thing about how changing sex isn't possible with existing or foreseeable technology because of how complicated humans (and therefore human sex differences) are? Not original to me! I [filled in a few technical details](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#changing-sex-is-hard), but again, this was _in the Sequences_ as ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions). My thing about how you can't define concepts any way you want because there are mathematical laws governing which category boundaries [compress](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length) your [anticipated experiences](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences)? Not original to me! I [filled in](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [a few technical details](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), but [_we had a whole Sequence about this._](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)

-Seriously, you think I'm _smart enough_ to come up with all of this indepedently? I'm not! I ripped it all off from Yudkowsky back in the 'aughts _when he still gave a shit about telling the truth_. (Actively telling the truth, and not just technically not lying.) The things I'm hyperfocused on that he thinks are politically impossible to say, are almost entirely things he _already said_, that anyone could just look up!
+Seriously, you think I'm _smart enough_ to come up with all of this independently? I'm not! I ripped it all off from Yudkowsky back in the 'aughts _when he still gave a shit about telling the truth_. (Actively telling the truth, and not just technically not lying.) The things I'm hyperfocused on that he thinks are politically impossible to say in the current year are almost entirely things he _already said_, that anyone could just look up!

I guess the point is that the egregore doesn't have the reading comprehension for that?—or rather, the egregore has no reason to care about the past; if you get tagged by the mob as an Enemy, your past statements will get dug up as evidence of foul present intent, but if you're doing good enough of playing the part today, no one cares what you said in 2009?

@@ -464,7 +463,7 @@ There are a number of things that could be said to this,[^number-of-things] but

[^number-of-things]: Note the striking contrast between ["A Rational Argument"](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument), in which the Yudkowsky of 2007 wrote that a campaign manager "crossed the line [between rationality and rationalization] at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it"; and these 2021 Tweets, in which Yudkowsky seems completely nonchalant about "not have been as willing to tweet a truth helping" one side of a cultural dispute, because "this battle just isn't that close to the top of [his] priority list". Well, sure! Any hired campaign manager could say the same: helping the electorate make an optimally informed decision just isn't that close to the top of their priority list, compared to getting paid.

- Yudkowsky's claim to have been focused on nudging people's cognition towards sanity seems incredibly dubious: if you're focused on sanity, you should be spontaneously noticing sanity errors on both sides. (Moreover, if you're living in what you yourself describe as a "half-Stalinist environment", you should expect your social environment to proportionately _more_ errors on the "pro-Stalin" side.) As for the rationale that "those people might matter to AGI someday", judging by local demographics, it seems much more likely to apply to trans women themselves, than their critics!
+ Yudkowsky's claim to have been focused on nudging people's cognition towards sanity seems incredibly dubious: if you're focused on sanity, you should be spontaneously noticing sanity errors on both political sides. (Moreover, if you're living in what you yourself describe as a "half-Stalinist environment", you should expect your social environment to make proportionately _more_ errors on the "pro-Stalin" side.) As for the rationale that "those people might matter to AGI someday", judging by local demographics, it seems much more likely to apply to trans women themselves, than their critics!

The battle that matters—and I've been _very_ explicit about this, for years—is over this proposition eloquently [stated by Scott Alexander in November 2014](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (redacting the irrelevant object-level example):

@@ -472,21 +471,23 @@

This is a battle between Feelings and Truth, between Politics and Truth.

-In order to take the side of Truth, you need to be able to tell Joshua Norton that he's not actually Emperor of the United States (even if it hurts him). You need to be able to tell a prideful autodidact that the fact that he's failing quizzes in community college differential equations class, is evidence that his study methods aren't doing what he thought they were (even if it hurts him). And you need to be able to say, in public, that trans women are male and trans men are female _with respect to_ a female/male "sex" concept that encompasses the many traits that aren't affected by contemporary surgical and hormonal interventions (even if it hurts someone who does not like to be tossed into a Male Bucket or a Female Bucket as it would be assigned by their birth certificate, and—yes—even if it probabilistically contributes to that person's suicide).
+In order to take the side of Truth, you need to be able to tell Joshua Norton that he's not actually Emperor of the United States (even if it hurts him).
+
+You need to be able to tell a prideful autodidact that the fact that he's failing quizzes in community college differential equations class is evidence that his study methods aren't doing what he thought they were (even if it hurts him).
+
+And you need to be able to say, in public, that trans women are male and trans men are female _with respect to_ a female/male "sex" concept that encompasses the many traits that aren't affected by contemporary surgical and hormonal interventions (even if it hurts someone who does not like to be tossed into a Male Bucket or a Female Bucket as it would be assigned by their birth certificate, and—yes—even if it probabilistically contributes to that person's suicide).

If you don't want to say those things because hurting people is wrong, then you have chosen Feelings.

Scott Alexander chose Feelings, but I can't really hold that against him, because Scott is [very explicit about only speaking in the capacity of some guy with a blog](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/). You can tell from his writings that he never wanted to be a religious leader; it just happened to him on accident because he writes faster than everyone else. I like Scott. Scott is great. I feel sad that such a large fraction of my interactions with him over the years have taken such an adversarial tone.

-Eliezer Yudkowsky ... did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he knows to be unambiguously false.
-
-But Eliezer Yudkowsky does not identify as just some guy with a blog. Eliezer Yudkowsky is _absolutely_ trying to be a religious leader. He markets himself as a master of the hidden Bayesian structure of cognition, who ["aspires to make sure [his] departures from perfection aren't noticeable to others"](https://twitter.com/ESYudkowsky/status/1384671335146692608).
+Eliezer Yudkowsky ... did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without saying anything that he consciously knows to be unambiguously false. And the reason I can hold it against _him_ is because Eliezer Yudkowsky does not identify as just some guy with a blog. Eliezer Yudkowsky is _absolutely_ trying to be a religious leader. He markets himself as a master of the hidden Bayesian structure of cognition, who ["aspires to make sure [his] departures from perfection aren't noticeable to others"](https://twitter.com/ESYudkowsky/status/1384671335146692608).

In making such boasts, I think Yudkowsky is opting in to being held to higher standards than other mortals. If Scott Alexander gets something wrong when I was trusting him to be right, that's disappointing, but I'm not the victim of false advertising, because Scott Alexander doesn't _claim_ to be anything more than some guy with a blog. If I trusted him more than that, that's on me.

-If Eliezer Yudkowsky gets something wrong when I was trusting him to be right, _and_ refuses to acknowledge corrections _and_ keeps inventing new galaxy-brained ways to be wrong in the service of his political agenda of being seen to agree with Stalin without technically lying, then I think I _am_ the victim of false advertising. His marketing bluster was optimized to trick people like me, even if my being _dumb enough to believe him_ is one me.
+If Eliezer Yudkowsky gets something wrong when I was trusting him to be right, _and_ refuses to acknowledge corrections (in the absence of an unsustainable 21-month nagging campaign) _and_ keeps inventing new galaxy-brained ways to be wrong in the service of his political agenda of being seen to agree with Stalin without technically lying, then I think I _am_ the victim of false advertising. His marketing bluster was optimized to trick people like me into trusting him, even if my being _dumb enough to believe him_ is on me.

-Because, I did, actually, trust him. Back in 'aught-nine when _Less Wrong_ was new, we had a thread of hyperbolic ["Eliezer Yudkowsky Facts"](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) (in the style of [Chuck Norris facts](https://en.wikipedia.org/wiki/Chuck_Norris_facts)). And of course, it was a joke, but the hero-worship that make the joke funny was real. (You wouldn't make those jokes for your community college physics teacher, even if he was a decent teacher.)
+Because, I did, actually, trust him. Back in 'aught-nine when _Less Wrong_ was new, we had a thread of hyperbolic ["Eliezer Yudkowsky Facts"](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) (in the style of [Chuck Norris facts](https://en.wikipedia.org/wiki/Chuck_Norris_facts)). And of course, it was a joke, but the hero-worship that made the joke funny was real. (You wouldn't make those jokes for your community college physics teacher, even if he was a good teacher.)

["Never go in against Eliezer Yudkowsky when anything is on the line."](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts?commentId=Aq9eWJmK6Liivn8ND), said one of the facts—and back then, I didn't think I would _need_ to.
@@ -494,27 +495,35 @@ Because, I did, actually, trust him. Back in 'aught-nine when _Less Wrong_ was n

> When an epistemic hero seems to believe something crazy, you are often better off questioning "seems to believe" before questioning "crazy", and both should be questioned before shaking your head sadly about the mortal frailty of your heroes.

-I notice that this advice leaves out a possibility: that the "seems to believe" is a deliberate show, rather than a misperception on your part. I am left in a [weighted average of](https://www.lesswrong.com/posts/y4bkJTtG3s5d6v36k/stupidity-and-dishonesty-explain-each-other-away) shaking my head sadly about the mortal frailty of my former hero, and shaking my head in disgust at his craven duplicity. If Eliezer Yudkowsky can't _unambigously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_.
+I notice that this advice leaves out a possibility: that the "seems to believe" is a deliberate show (judged to be personally prudent and not community-harmful), rather than a misperception on your part. I am left in a [weighted average of](https://www.lesswrong.com/posts/y4bkJTtG3s5d6v36k/stupidity-and-dishonesty-explain-each-other-away) shaking my head sadly about the mortal frailty of my former hero, and shaking my head in disgust at his craven duplicity. If Eliezer Yudkowsky can't _unambiguously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_.

------

-[TODO section existential stakes, cooperation
+... except, I would be remiss to condemn Yudkowsky without discussing—potentially mitigating factors. (I don't want to say that whether someone is a fraud should depend on whether there are mitigating factors—rather, I should discuss potential reasons why being a fraud might be the least-bad choice, when faced with a sufficiently desperate situation.)
+
+So far, I've been writing from the perspective of caring (and expecting Yudkowsky to care) about human rationality as a cause in its own right—about wanting to _make sense_, and wanting to live in a Society that made sense, for its own sake, and not as a convergently instrumental subgoal of saving the world.
+
+That's pretty much always where I've been at. I _never_ wanted to save the world. I got sucked into this robot cult because Yudkowsky's philosophy-of-science writing was just that good. I did do a little bit of work for the Singularity Institute back in the day (an informal internship in 'aught-nine, some data-entry-like work manually adding Previous/Next links to the Sequences, designing several PowerPoint presentations for Anna, writing some Python scripts to organize their donor database), but that was because it was my social tribe and I had connections. To the extent that I took at all seriously the whole save/destroy/take-over the world part (about how we needed to encode all of human morality into a recursively self-improving artificial intelligence to determine our entire future light cone until the end of time), I was scared rather than enthusiastic.
+
+Okay, being scared was entirely appropriate, but what I mean is that I was scared, and concluded that shaping the Singularity was _not my problem_, as contrasted with being scared, then facing up to the responsibility anyway.

After a 2013 sleep-deprivation-induced psychotic episode which featured futurist-themed delusions, I wrote to Anna, Michael, and some MIRI employees who had been in my contacts for occasional contract work, that "my current plan [was] to just try to forget about _Less Wrong_/MIRI for a long while, maybe at least a year, not because it isn't technically the most important thing in the world, but because I'm not emotionally stable enough [to] think about this stuff anymore" (Subject: "to whom it may concern"). When I got a real programming job and established an income for myself, I [donated to CfAR rather than MIRI](http://zackmdavis.net/blog/2016/12/philanthropy-scorecard-through-2016/), because public rationality was something I could be unambiguously enthusiastic about, and doing anything about AI was not.

+At the time, it seemed fine for the altruistic fraction of my efforts to focus on rationality, and to leave the save/destroy/take-over the world stuff to other, less crazy people, in accordance with the principle of comparative advantage. Yudkowsky had written his Sequences as a dependency for explaining [the need for friendly AI](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile), ["gambl[ing] only upon the portion of the activism that would flow to [his] own cause"](https://www.lesswrong.com/posts/9jF4zbZqz6DydJ5En/the-end-of-sequences), but rationality was supposed to be the [common interest of many causes](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes). Even if I wasn't working or donating to MIRI, I was still _helping_, a good citizen according to the morality of my tribe.

- * At least, that's what I would say, if there weren't—potentially mitigating factors—or at least, reasons why being a fraud might be a good idea (since I don't want the definition of "fraud" to change depending on whether the world is at stake)
+But fighting for public epistemology is a long battle; it makes more sense if you have _time_ for it to pay off. Back in the late 'aughts and early 'tens, it looked like we had time. We had these abstract philosophical arguments for worrying about AI, but no one really talked about _timelines_. I believed the Singularity was going to happen in the 21st century, but it felt like something to expect in the _second_ half of the 21st century.

- * So far, I've been writing from the perspective of caring about human rationality, [the common interest of many causes](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes). Wanting the world to make sense, and expecting the "rationalists" specifically to _especially_ make sense.
+Now it looks like we have—less time? Not just tautologically because time has passed (the 21st century is one-fifth over—closer to a quarter over), but because of new information from the visible results of the deep learning revolution during that time. Yudkowsky seemed particularly spooked by AlphaGo and AlphaZero in 2016–2017.

- * But Yudkowsky wrote the Sequences as a recursive step (/2019/Jul/the-source-of-our-power/) for explaining the need for Friendly AI ("Value Is Fragile"); everything done with an ulterior motive has to be done with a pure heart, but the Singularity has always been his no. 1 priority

- * Fighting for good epistemology is a long battle, and makes more sense if you have _time_ for it to pay off.

[TODO: specifically, AlphaGo seemed "deeper" than minimax search so you shouldn't dismiss it as "meh, games", the way it rocketed past human level from self-play https://twitter.com/zackmdavis/status/1536364192441040896]

- * Back in the 'aughts, it looked like we had time. We had abstract high-level arguments to worry about AI, and it seemed like it was going to happen this century, but it felt like the _second_ half of the 21st century.

My AlphaGo moment was 5 January 2021, OpenAI's release of [DALL-E](https://openai.com/blog/dall-e/) (by far the most significant news story of that week in January 2021).

- * Now it looks like we have—less time? (The 21st century is one-fifth over!) Yudkowsky flipped out about AlphaGo and AlphaZero, and at the time, a lot of people probably weren't worried (board games are a shallow domain), but now that it's happening for "language" (GPT) and "vision" (DALL-E), a lot of people including me are feeling much more spooked (https://twitter.com/zackmdavis/status/1536364192441040896)

[TODO: previous AI milestones had seemed dismissible as a mere clever statistics trick; this looked more like "real" understanding, "real" creativity]

- * [Include the joke about DALL-E being the most significant news event of that week in January 2021]

[As recently as 2020, I had been daydreaming about](/2020/Aug/memento-mori/#if-we-even-have-enough-time) working at an embryo selection company (if they needed programmers—but everyone needs programmers, these days), and having that be my altruistic[^altruism] contribution to the world.

[^altruism]: If it seems odd to frame _eugenics_ as "altruistic", translate it as a term of art referring to the component of my actions dedicated to optimizing the future of the world, as opposed to selfishly optimizing my own experiences.

- * [As recently as 2020](/2020/Aug/memento-mori/#if-we-even-have-enough-time) I was daydreaming about working for an embryo selection company as part of the "altruistic" (about optimizing the future, rather than about my own experiences) component of my actions—I don't feel like there's time for that anymore

[TODO—
 * If you have short timelines, and want to maintain influence over what big state-backed corporations are doing, self-censoring about contradicting the state religion makes sense. There's no time to win a culture war; we need to grab hold of the Singularity now!!

--
2.17.1