From 9dcb943731a081641fe0e5f8bed1ee834fe87705 Mon Sep 17 00:00:00 2001 From: "Zack M. Davis" Date: Fri, 3 Nov 2023 15:26:40 -0700 Subject: [PATCH] memoir: pt. 5 edit sweep ... --- ...xhibit-generally-rationalist-principles.md | 2 +- content/drafts/zevis-choice.md | 134 +++++++----------- notes/memoir-sections.md | 44 +++++- notes/memoir_wordcounts.csv | 3 +- 4 files changed, 94 insertions(+), 89 deletions(-) diff --git a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md index 7a74617..5275d4f 100644 --- a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md +++ b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md @@ -69,7 +69,7 @@ But _trans!_ We have plenty of trans people to trot out as a shield to definitiv Even the haters grudgingly give Alexander credit for "... Not Man for the Categories": ["I strongly disagree that one good article about accepting transness means you get to walk away from writing that is somewhat white supremacist and quite fascist without at least awknowledging you were wrong"](https://archive.is/SlJo1), wrote one. -Under these circumstances, dethroning the supremacy of gender identity ideology is politically impossible. All our [Overton margin](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) is already being spent somewhere else; sanity on this topic is our [dump stat](https://tvtropes.org/pmwiki/pmwiki.php/Main/DumpStat). +Under these circumstances, dethroning the supremacy of gender identity ideology is politically impossible. 
All our [Overton margin](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting) is already being spent somewhere else; sanity on this topic is our [dump stat](https://tvtropes.org/pmwiki/pmwiki.php/Main/DumpStat). But this being the case, _I have no reason to participate in the cover-up_. What's in it for me? Why should I defend my native subculture from external attack, if the defense preparations themselves have already rendered it uninhabitable to me? diff --git a/content/drafts/zevis-choice.md b/content/drafts/zevis-choice.md index 2daec4d..93a52fe 100644 --- a/content/drafts/zevis-choice.md +++ b/content/drafts/zevis-choice.md @@ -11,19 +11,19 @@ Status: draft ... except, I would be remiss to condemn Yudkowsky without discussing—potentially mitigating factors. (I don't want to say that whether someone is a fraud should depend on whether there are mitigating factors—rather, I should discuss potential reasons why being a fraud might be the least-bad choice, when faced with a sufficiently desperate situation.) -So far, I've been writing from the perspective of caring (and expecting Yudkowsky to care) about human rationality as a cause in its own right—about wanting to _make sense_, and wanting to live in a Society that made sense, for its own sake, and not as a convergently instrumental subgoal of saving the world. +So far, I've been writing from the perspective of caring (and expecting Yudkowsky to care) about human rationality as a cause in its own right—about wanting to make sense, and wanting to live in a Society that made sense, for its own sake, not as a convergently instrumental subgoal of saving the world. -That's pretty much always where I've been at. I _never_ wanted to save the world. I got sucked in to this robot cult because Yudkowsky's philsophy-of-science blogging was just that good. 
I did do a little bit of work for the Singularity Institute back in the day (an informal internship in 2009, some data-entry-like work manually adding Previous/Next links to the Sequences, designing several PowerPoint presentations for Anna, writing some Python scripts to organize their donor database), but that was because it was my social tribe and I had connections. To the extent that I took at all seriously the whole save/destroy/take-over the world part (about how we needed to encode all of human morality into a recursively self-improving artificial intelligence to determine our entire future light cone until the end of time), I was scared rather than enthusiastic.
+That's pretty much always where I've been at. I _never_ wanted to save the world. I got sucked into this robot cult because Yudkowsky's philosophy-of-science blogging was just that good. I did do a little bit of work for the Singularity Institute back in the day (a "we don't pay you, but you can sleep in the garage" internship in 2009, some data-entry-like work manually adding Previous/Next links to the Sequences, designing several PowerPoint presentations for Anna Salamon, writing some Python scripts to organize their donor database), but that was because it was my social tribe and I had connections. To the extent that I took at all seriously the whole save/destroy/take-over the world part (about how we needed to encode all of human morality into a recursively self-improving artificial intelligence to determine our entire future light cone until the end of time), I was scared rather than enthusiastic.

-Okay, being scared was entirely appropriate, but what I mean is that I was scared, and concluded that shaping the Singularity was not my problem, as contrasted to being scared, then facing up to the responsibility anyway. 
After a 2013 sleep-deprivation-induced psychotic episode which [featured](http://zackmdavis.net/blog/2013/03/religious/) [futurist](http://zackmdavis.net/blog/2013/04/prodrome/)-[themed](http://zackmdavis.net/blog/2013/05/relativity/) [delusions](http://zackmdavis.net/blog/2013/05/relevance/), I wrote to Anna, Michael, and some MIRI employees who had been in my contacts for occasional contract work, that "my current plan [was] to just try to forget about _Less Wrong_/MIRI for a long while, maybe at least a year, not because it isn't technically the most important thing in the world, but because I'm not emotionally stable enough think about this stuff anymore" (Subject: "to whom it may concern"). When I got a real programming job and established an income for myself, I [donated to CfAR rather than MIRI](http://zackmdavis.net/blog/2016/12/philanthropy-scorecard-through-2016/), because public rationality was something I could be unambiguously enthusiastic about, and doing anything about AI was not.
+Okay, being scared was entirely appropriate, but what I mean is that I was scared, and concluded that shaping the Singularity was not my problem, as contrasted to being scared, then facing up to the responsibility anyway. After a 2013 sleep-deprivation-induced psychotic episode which [featured](http://zackmdavis.net/blog/2013/03/religious/) [futurist](http://zackmdavis.net/blog/2013/04/prodrome/)-[themed](http://zackmdavis.net/blog/2013/05/relativity/) [delusions](http://zackmdavis.net/blog/2013/05/relevance/), I wrote to Anna, Michael Vassar, and some MIRI employees who had been in my contacts for occasional contract work, that "my current plan [was] to just try to forget about _Less Wrong_/MIRI for a long while, maybe at least a year, not because it isn't technically the most important thing in the world, but because I'm not emotionally stable enough [to] think about this stuff anymore". 
When I got a real programming job and established an income for myself, I [donated to CfAR rather than MIRI](http://zackmdavis.net/blog/2016/12/philanthropy-scorecard-through-2016/), because public rationality was something I could be unambiguously enthusiastic about, and doing anything about AI was not. -At the time, it seemed fine for the altruistically-focused fraction of my efforts to focus on rationality, and to leave the save/destroy/take-over the world stuff to other, more emotionally-stable people, in accordance with the principle of comparative advantage. Yudkowsky had written his Sequences as a dependency for explaining [the need for friendly AI](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile), ["gambl\[ing\] only upon the portion of the activism that would flow to \[his\] own cause"](https://www.lesswrong.com/posts/9jF4zbZqz6DydJ5En/the-end-of-sequences), but rationality was supposed to be the [common interest of many causes](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes). Even if I wasn't working or donating to MIRI specifically, I was still _helping_, a good citizen according to the morality of my tribe. +At the time, it seemed fine for the altruistically-focused fraction of my efforts to focus on rationality, and to leave the save/destroy/take-over the world stuff to other, more emotionally-stable people, in accordance with the principle of comparative advantage. Yudkowsky had written his Sequences as a dependency for explaining [the need for Friendly AI](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile), ["gambl\[ing\] only upon the portion of the activism that would flow to \[his\] own cause"](https://www.lesswrong.com/posts/9jF4zbZqz6DydJ5En/the-end-of-sequences), but rationality was supposed to be the [common interest of many causes](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes). 
Even if I wasn't working or donating to MIRI specifically, I was still _helping_, a good citizen according to the morality of my tribe. -But fighting for public epistemology is a long battle; it makes more sense if you have _time_ for it to pay off. Back in the late 'aughts and early 'tens, it looked like we had time. We had these abstract philosophical arguments for worrying about AI, but no one really talked about _timelines_. I believed the Singularity was going to happen in the 21st century, but it felt like something to expect in the _second_ half of the 21st century. +But fighting for public epistemology is a long battle; it makes more sense if you have time for it to pay off. Back in the late 'aughts and early 'tens, it looked like we had time. We had these abstract philosophical arguments for worrying about AI, but no one really talked about timelines. I believed the Singularity was going to happen in the 21st century, but it felt like something to expect in the second half of the 21st century. -Now it looks like we have—less time? Not just tautologically because time has passed (the 21st century is one-fifth over—closer to a quarter over), but because of new information from the visible results of the deep learning revolution.[^second-half] Yudkowsky seemed particularly [spooked by AlphaGo](https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress?commentId=gQzA8a989ZyGvhWv2) [and AlphaZero](https://intelligence.org/2017/10/20/alphago/) in 2016–2017, not because superhuman board game players were dangerous, but because of what it implied about the universe of algorithms. +Now it looks like we have—less time? 
Not just tautologically because time has passed (the 21st century is one-fifth over—closer to a quarter over), but because of new information from the visible results of the deep learning revolution.[^second-half] Yudkowsky seemed particularly [spooked by AlphaGo](https://www.lesswrong.com/posts/7MCqRnZzvszsxgtJi/christiano-cotra-and-yudkowsky-on-ai-progress?commentId=gQzA8a989ZyGvhWv2) [and AlphaZero](https://intelligence.org/2017/10/20/alphago/) in 2016–2017, not because superhuman board game players were themselves dangerous, but because of what it implied about the universe of algorithms. -In part of the Sequences, Yudkowsky had been [dismissive of people who aspired to build AI without understanding how intelligence works](https://www.lesswrong.com/posts/fKofLyepu446zRgPP/artificial-mysterious-intelligence)—for example, by being overly impressed by the [surface analogy](https://www.lesswrong.com/posts/6ByPxcGDhmx74gPSm/surface-analogies-and-deep-causes) between artificial neural networks and the brain. He conceded the possibility of brute-forcing AI (if natural selection had eventually gotten there with no deeper insight, so could we) but didn't consider it a default and especially not a desirable path. (["If you don't know how your AI works, that is not good. It is bad."](https://www.lesswrong.com/posts/fKofLyepu446zRgPP/artificial-mysterious-intelligence)) +In the Sequences, Yudkowsky had been [dismissive of people who aspired to build AI without understanding how intelligence works](https://www.lesswrong.com/posts/fKofLyepu446zRgPP/artificial-mysterious-intelligence)—for example, by being overly impressed by the [surface analogy](https://www.lesswrong.com/posts/6ByPxcGDhmx74gPSm/surface-analogies-and-deep-causes) between artificial neural networks and the brain. 
He conceded the possibility of brute-forcing AI (if natural selection had eventually gotten there with no deeper insight, so could we) but didn't consider it a default and especially not a desirable path. (["If you don't know how your AI works, that is not good. It is bad."](https://www.lesswrong.com/posts/fKofLyepu446zRgPP/artificial-mysterious-intelligence)) These days, it's increasingly looking like making really large neural nets ... [actually works](https://www.gwern.net/Scaling-hypothesis)?—which seems like bad news; if it's "easy" for non-scientific-genius engineering talent to shovel large amounts of compute into the birth of powerful minds that we don't understand and don't know how to control, then it would seem that the world is soon to pass outside of our understanding and control. @@ -31,11 +31,11 @@ These days, it's increasingly looking like making really large neural nets ... [ But if ninety years is urgent, what about ... nine? Looking at what deep learning can do in 2023, the idea of Singularity 2032 doesn't seem self-evidently absurd in the way that Singularity 2019 seemed absurd in 2010 (correctly, as it turned out). -My AlphaGo moment was 5 January 2021, when OpenAI released [DALL-E](https://openai.com/blog/dall-e/) (by far the most significant news story of [that week in January 2021](https://en.wikipedia.org/wiki/January_6_United_States_Capitol_attack)). Previous AI milestones, like GANs for a _fixed_ image class, were easier to dismiss as clever statistical tricks. If you have thousands of photographs of people's faces, I didn't feel surprised that some clever algorithm could "learn the distribution" and spit out another sample; I don't know the _details_, but it doesn't seem like scary "understanding." 
DALL-E's ability to _combine_ concepts—responding to "an armchair in the shape of an avacado" as a novel text prompt, rather than already having thousands of examples of avacado-chairs and just spitting out another one of those—viscerally seemed more like "real" creativity to me, something qualitatively new and scary.[^qualitatively-new]
+My AlphaGo moment was 5 January 2021, when OpenAI released [DALL-E](https://openai.com/blog/dall-e/) (by far the most significant news story of [that week in January 2021](https://en.wikipedia.org/wiki/January_6_United_States_Capitol_attack)). Previous AI milestones, like [GANs](https://en.wikipedia.org/wiki/Generative_adversarial_network) for a fixed image class, felt easier to dismiss as clever statistical tricks. If you have thousands of photographs of people's faces, I didn't feel surprised that some clever algorithm could "learn the distribution" and spit out another sample; I don't know the details, but it doesn't seem like scary "understanding." DALL-E's ability to combine concepts—responding to "an armchair in the shape of an avocado" as a novel text prompt, rather than already having thousands of examples of avocado-chairs and just spitting out another one of those—viscerally seemed more like "real" creativity to me, something qualitatively new and scary.[^qualitatively-new]

[^qualitatively-new]: By mid-2022, DALL-E 2 and Midjourney and Stable Diffusion were generating much better pictures, but that wasn't surprising. Seeing AI being able to do a thing at all is the model update; AI being able to do the thing much better 18 months later feels "priced in."

-[As recently as 2020, I had been daydreaming about](/2020/Aug/memento-mori/#if-we-even-have-enough-time) working at an embryo selection company (if they needed programmers—but everyone needs programmers, these days), and having that be my altruistic[^eugenics-altruism] contribution to the great common task. 
Existing companies working on embryo selection [boringly](https://archive.is/tXNbU) [market](https://archive.is/HwokV) their services as being about promoting health, but [polygenic scores should work as well for maximizing IQ as they do for minimizing cancer risk](https://www.gwern.net/Embryo-selection).[^polygenic-score] Making smarter people would be a transhumanist good in its own right, and [having smarter biological humans around at the time of our civilization's AI transition](https://www.lesswrong.com/posts/2KNN9WPcyto7QH9pi/this-failing-earth) would give us a better shot at having it go well.[^ai-transition-go-well]
+[As recently as 2020, I had been daydreaming about](/2020/Aug/memento-mori/#if-we-even-have-enough-time) working at an embryo selection company (if they needed programmers—but everyone needs programmers, these days), and having that be my altruistic[^eugenics-altruism] contribution to the Great Common Task. Existing companies working on embryo selection [boringly](https://archive.is/tXNbU) [market](https://archive.is/HwokV) their services as being about promoting health, but [polygenic scores should work as well for maximizing IQ as they do for minimizing cancer risk](https://www.gwern.net/Embryo-selection).[^polygenic-score] Making smarter people would be a transhumanist good in its own right, and [having smarter biological humans around at the time of our civilization's AI transition](https://www.lesswrong.com/posts/2KNN9WPcyto7QH9pi/this-failing-earth) would give us a better shot at having it go well.[^ai-transition-go-well]

[^eugenics-altruism]: If it seems odd to frame _eugenics_ as "altruistic", translate it as a term of art referring to the component of my actions dedicated to optimizing the world at large, as contrasted to "selfishly" optimizing my own experiences. 
@@ -43,15 +43,15 @@ My AlphaGo moment was 5 January 2021, when OpenAI released [DALL-E](https://open [^ai-transition-go-well]: Natural selection eventually developed intelligent creatures, but evolution didn't know what it was doing and was not foresightfully steering the outcome in any particular direction. The more humans know what we're doing, the more our will determines the fate of the cosmos; the less we know what we're doing, the more our civilization is just another primordial soup for the next evolutionary transition. -But pushing on embryo selection only makes sense as an intervention for optimizing the future if AI timelines are sufficiently long, and the breathtaking pace (or too-fast-to-even-take-a-breath pace) of the deep learning revolution is so much faster than the pace of human generations, that it's starting to look unlikely that we'll get that much time. If our genetically uplifted children would need at least twenty years to grow up to be productive alignment researchers, but unaligned AI is [on track to end the world in twenty years](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines), we would need to start having those children _now_ in order for them to make any difference at all. +But pushing on embryo selection only makes sense as an intervention for optimizing the future if AI timelines are sufficiently long, and the breathtaking pace (or too-fast-to-even-take-a-breath pace) of the deep learning revolution is so much faster than the pace of human generations, that it's looking unlikely that we'll get that much time. If our genetically uplifted children would need at least twenty years to grow up to be productive alignment researchers, but unaligned AI is [on track to end the world in twenty years](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines), we would need to start having those children _now_ in order for them to make any difference at all. 
-[It's ironic that "longtermism" got traction as the word for the cause area of benefitting the far future](https://applieddivinitystudies.com/longtermism-irony/), because the decision-relevant beliefs of most of the people who think about the far future, end up working out to extreme _short_-termism. Common-sense longtermism—a longtermism that assumed there's still going to be a recognizable world of humans in 2123—_would_ care about eugenics, and would be willing to absorb political costs today in order to fight for a saner future. The story of humanity would not have gone _better_ if Galileo had declined to publish for pre-emptive fear of the Inquisition.

+[It's ironic that "longtermism" got traction as the word for the cause area of benefitting the far future](https://applieddivinitystudies.com/longtermism-irony/), because the decision-relevant beliefs of most of the people who think about the far future end up working out to extreme short-termism. Common-sense longtermism—a longtermism that assumed there's still going to be a recognizable world of humans in 2123—would care about eugenics, and would be willing to absorb political costs today in order to fight for a saner future. The story of humanity would not have gone better if Galileo had declined to publish for pre-emptive fear of the Inquisition.

-But if you think the only hope for there _being_ a future flows through maintaining influence over what large tech companies are doing as they build transformative AI, declining to contradict the state religion makes more sense—if you don't have _time_ to win a culture war, because you need to grab hold of the Singularity (or perform a [pivotal act](https://arbital.com/p/pivotal/) to prevent it) _now_. 
If the progressive machine marks you as a transphobic bigot, the machine's functionaries at OpenAI or Meta AI Research are less likely to listen to you when you explain why [their safety plan](https://openai.com/blog/our-approach-to-alignment-research/) won't work, or why they should have a safety plan at all. +But if you think the only hope for there _being_ a future flows through maintaining influence over what large tech companies are doing as they build transformative AI, declining to contradict the state religion makes more sense—if you don't have time to win a culture war, because you need to grab hold of the Singularity (or perform a [pivotal act](https://arbital.com/p/pivotal/) to prevent it) _now_. If the progressive machine marks you as a transphobic bigot, the machine's functionaries at OpenAI or Meta AI Research are less likely to listen to you when you explain why [their safety plan](https://openai.com/blog/introducing-superalignment) won't work, or why they should have a safety plan at all. (I remarked to "Thomas" in mid-2022 that DeepMind [changing its Twitter avatar to a rainbow variant of their logo for Pride month](https://web.archive.org/web/20220607123748/https://twitter.com/DeepMind) was a bad sign.) -So isn't there a story here where I'm the villain, willfully damaging humanity's chances of survival by picking unimportant culture-war fights in the xrisk-reduction social sphere, when _I know_ that the sphere needs to keep its nose clean in the eyes of the progressive egregore? _That's_ why Yudkowsky said the arguably-technically-misleading things he said about my Something to Protect: he _had_ to, to keep our collective nose clean. The people paying attention to contemporary politics don't know what I know, and can't usefully be told. Isn't it better for humanity if my meager talents are allocated to making AI go well? Don't I have a responsibility to fall in line and take one for the team—if the world is at stake? 
+So isn't there a story here where I'm the villain, willfully damaging humanity's chances of survival by picking unimportant culture-war fights in the existential-risk-reduction social sphere, when _I know_ that the sphere needs to keep its nose clean in the eyes of the progressive egregore? _That's_ why Yudkowsky said the arguably-technically-misleading things he said about my Something to Protect: he had to, to keep our collective nose clean. The people paying attention to contemporary politics don't know what I know, and can't usefully be told. Isn't it better for humanity if my meager talents are allocated to making AI go well? Don't I have a responsibility to fall in line and take one for the team—if the world is at stake? As usual, the Yudkowsky of 2009 has me covered. In his short story ["The Sword of Good"](https://www.yudkowsky.net/other/fiction/the-sword-of-good), our protagonist Hirou wonders why the powerful wizard Dolf lets other party members risk themselves fighting, when Dolf could have protected them: @@ -63,7 +63,7 @@ As usual, the Yudkowsky of 2009 has me covered. In his short story ["The Sword o > > _Perhaps_, echoed the other part of himself, _but that is not what was actually happening._ -That is, there's _no story_ under which misleading people about trans issues is on Yudkowsky's critical path for shaping the intelligence explosion. _I'd_ prefer him to have free speech, but if _he_ thinks he can't afford to be honest about things he [_already_ got right in 2009](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), he could just—not issue pronouncements on topics where he intends to _ignore counterarguments on political grounds!_ +That is, there's no story under which misleading people about trans issues is on Yudkowsky's critical path for shaping the intelligence explosion. 
I'd prefer him to have free speech, but if _he_ thinks he can't afford to be honest about things he [already got right in 2009](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), he could just not issue pronouncements on topics where he intends to _ignore counterarguments on political grounds_. In [a March 2021 Twitter discussion about why not to trust organizations that refuse to explain their reasoning, Yudkowsky wrote](https://twitter.com/esyudkowsky/status/1374161729073020937): @@ -75,25 +75,25 @@ It's a little uncomfortable that I seem to be arguing for a duty to self-censors But I don't think it's the mere addition of the arguments to the discourse that I'm objecting to. (If some garden-variety trans ally had made the same dumb arguments, I would make the same counterarguments, but I wouldn't feel betrayed.) -It's the _false advertising_—the pretense that Yudkowsky is still the unchallengable world master of rationality, if he's going to behave like a garden-variety trans ally and reserve the right to _ignore counterarguments on political grounds_ (!!) when his incentives point that way. +It's the false advertising—the pretense that Yudkowsky is still the unchallengable world master of rationality, if he's going to behave like a garden-variety trans ally and reserve the right to _ignore counterarguments on political grounds_ (!!) when his incentives point that way. In _Planecrash_, when Keltham decides he needs to destroy Golarion's universe on negative-leaning utilitarian grounds, he takes care to only deal with Evil people from then on, and not form close ties with the Lawful Neutral nation of Osirion, in order to not betray anyone who would have had thereby a reasonable expectation that their friend wouldn't try to destroy their universe: ["the stranger from dath ilan never pretended to be anyone's friend after he stopped being their friend"](https://glowfic.com/replies/1882395#reply-1882395). 
Similarly, I think Yudkowsky should stop pretending to be our rationality teacher after he stopped being our rationality teacher and decided to be a politician instead. -I think it's significant that you don't see me picking fights with—say, Paul Christiano, because Paul Christiano doesn't repeatedly take a shit on my Something to Protect, because Paul Christiano _isn't trying to be a religious leader_ (in this world where religious entrepreneurs can't afford to contradict the state religion). If Paul Christiano has opinions about transgenderism, we don't know about them. If we knew about them and they were correct, I would upvote them, and if we knew about them and they were incorrect, I would criticize them, but in either case, Christiano would not try to cultivate the impression that anyone who disagrees with him is insane. That's not his bag. +I think it's significant that you don't see me picking fights with—say, Paul Christiano, because Paul Christiano doesn't repeatedly take a shit on my Something to Protect, because Paul Christiano isn't trying to be a religious leader. If Paul Christiano has opinions about transgenderism, we don't know about them. If we knew about them and they were correct, I would upvote them, and if we knew about them and they were incorrect, I would criticize them, but in either case, Christiano would not try to cultivate the impression that anyone who disagrees with him is insane. That's not his bag. ------ Yudkowsky's political cowardice is arguably puzzling in light of his timeless decision theory's recommendations against giving in to extortion. -The "arguably" is important, because randos on the internet are notoriously bad at drawing out the consequences of the theory, to the extent that Yudkowsky has said that he wishes he hadn't published—and though I think I'm smarter than the average rando, I don't expect anyone to _take my word for it_. 
So let me disclaim that this is _my_ explanation of how Yudkowsky's decision theory _could be interpreted_ to recommend that he behave the way I want him to, without any pretense that I'm any sort of neutral expert witness on decision theory. +The "arguably" is important, because randos on the internet are notoriously bad at drawing out the consequences of the theory, to the extent that Yudkowsky has said that he ["wish[es] that [he'd] never spoken on the topic"](https://twitter.com/ESYudkowsky/status/1509944888376188929)—and though I think I'm smarter than the average rando, I don't expect anyone to take my word for it. So let me disclaim that this is _my_ explanation of how Yudkowsky's decision theory _could be interpreted_ to recommend that he behave the way I want him to, without any pretense that I'm any sort of neutral expert witness on decision theory. -The idea of timeless decision theory is that you should choose the action that has the best consequences _given_ that your decision is mirrored at all the places your decision algorithm is embedded in the universe. +The idea of timeless decision theory is that you should choose the action that has the best consequences given that your decision is mirrored at all the places your decision algorithm is embedded in the universe. -The reason this is any different from the "causal decision theory" of just choosing the action with the best consequences (locally, without any regard to this "multiple embeddings in the universe" nonsense) is because it's possible for other parts of the universe to depend on your choices. For example, in the "Parfit's Hitchhiker" scenario, someone might give you a ride out of the desert if they _predict_ you'll pay them back later. After you've already received the ride, you might think that you can get away with stiffing them—but if they'd predicted you would do that, they wouldn't have given you the ride in the first place. 
Your decision is mirrored _inside the world-model every other agent with a sufficiently good knowledge of you_.

+The reason this is any different from the "causal decision theory" of just choosing the action with the best consequences (locally, without any regard to this "multiple embeddings in the universe" nonsense) is because it's possible for other parts of the universe to depend on your choices. For example, in the "Parfit's Hitchhiker" scenario, someone might give you a ride out of the desert if they predict you'll pay them back later. After you've already received the ride, you might think that you can get away with stiffing them—but if they'd predicted you would do that, they wouldn't have given you the ride in the first place. Your decision is mirrored inside the world-model of every other agent with a sufficiently good knowledge of you.

-In particular, if you're the kind of agent that gives in to extortion—if you respond to threats of the form "Do what I want, or I'll hurt you" by doing what the threatener wants—that gives other agents an incentive to spend resources trying to extort you. On the other hand, if any would-be extortionist knows you'll never give in, they have no reason to bother trying. This is where the standard ["Don't negotiate with terrorists"](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) advice comes from.

+In particular, if you're the kind of agent that gives in to extortion—if you respond to threats of the form "Do what I want, or I'll hurt you" by doing what the threatener wants—that gives other agents an incentive to spend resources trying to extort you. On the other hand, if any would-be extortionist knows you'll never give in, they have no reason to bother trying. This is where the [standard](https://en.wikipedia.org/wiki/Government_negotiation_with_terrorists) ["Don't negotiate with terrorists"](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) advice comes from. 
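The prediction-dependence in the Parfit's Hitchhiker and extortion examples can be sketched as a toy simulation (a minimal illustration, assuming a perfectly accurate predictor; the function names are hypothetical, not from the memoir):

```python
# Toy model of the "Parfit's Hitchhiker" scenario: the driver decides
# whether to offer a ride by simulating the agent's later choice, so an
# accurately-predicted agent never gets to exploit the "stiff them after
# the ride" branch.

def driver_offers_ride(agent_policy):
    """The driver gives a ride only if they predict the agent will pay."""
    return agent_policy(already_rescued=True) == "pay"

def causal_agent(already_rescued):
    """Causal reasoning: once rescued, paying has no further causal benefit."""
    return "stiff" if already_rescued else "pay"

def timeless_agent(already_rescued):
    """Timeless-style reasoning: act as you would want to be predicted to act."""
    return "pay"

def outcome(agent_policy):
    if not driver_offers_ride(agent_policy):
        return "left in desert"
    paid = agent_policy(already_rescued=True) == "pay"
    return "rescued, paid" if paid else "rescued, rode free"

print(outcome(causal_agent))    # left in desert
print(outcome(timeless_agent))  # rescued, paid
```

Because the driver's model is assumed to be perfectly accurate, the causal agent's planned defection backfires before it can happen; that is the sense in which a decision is "mirrored" inside a predictor's world-model.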
So, naïvely, doesn't Yudkowsky's "personally prudent to post your agreement with Stalin"[^gambit] gambit constitute giving in to an extortion threat of the form, "support the progressive position, or we'll hurt you", which Yudkowsky's own decision theory says not to do? @@ -107,25 +107,23 @@ Okay, but then how do I compute this "subjunctive dependence" thing? Presumably I don't know—and if I don't know, I can't say that the relevant subjunctive dependence obviously pertains in the real-life science intellectual _vs._ social justice mob match-up. If the mob has been trained from past experience to predict that their targets will give in, should you defy them now in order to somehow make your current predicament "less real"? Depending on the correct theory of logical counterfactuals, the correct stance might be "We don't negotiate with terrorists, but [we do appease bears](/2019/Dec/political-science-epigrams/) and avoid avalanches" (because neither the bear's nor the avalanche's behavior is calculated based on our response), and the forces of political orthodoxy might be relevantly bear- or avalanche-like. -On the other hand, the relevant subjunctive dependence doesn't obviously _not_ pertain, either! Yudkowsky does seem to endorse commonsense pattern-matching to "extortion" in contexts [like nuclear diplomacy](https://twitter.com/ESYudkowsky/status/1580278376673120256). Or I remember back in 'aught-nine, Tyler Emerson was caught embezzling funds from the Singularity Institute, and SingInst made it a point of pride to prosecute on decision-theoretic grounds, when a lot of other nonprofits would have quietly and causal-decision-theoretically covered it up to spare themselves the embarrassment. 
Parsing social justice as an agentic "threat" rather than a non-agentic obstacle like an avalanche, does seem to line up with the fact that people punish heretics (who dissent from an ideological group) more than infidels (who were never part of the group to begin with), _because_ heretics are more extortable—more vulnerable to social punishment from the original group. +On the other hand, the relevant subjunctive dependence doesn't obviously _not_ pertain, either! Yudkowsky does seem to endorse commonsense pattern-matching to "extortion" in contexts [like nuclear diplomacy](https://twitter.com/ESYudkowsky/status/1580278376673120256). Or I remember back in 2009, Tyler Emerson was caught embezzling funds from the Singularity Institute, and SingInst made it a point of pride to prosecute on decision-theoretic grounds, when a lot of other nonprofits would have quietly and causal-decision-theoretically covered it up to spare themselves the embarrassment. Parsing social justice as an agentic "threat" rather than a non-agentic obstacle like an avalanche, does seem to line up with the fact that people punish heretics (who dissent from an ideological group) more than infidels (who were never part of the group to begin with), because heretics are more extortable—more vulnerable to social punishment from the original group. -Which brings me to the second reason the naïve anti-extortion argument might fail: [what counts as "extortion" depends on the relevant "property rights", what the "default" action is](https://www.lesswrong.com/posts/Qjaaux3XnLBwomuNK/countess-and-baron-attempt-to-define-blackmail-fail). If having free speech is the default, being excluded from the dominant coalition for defying the orthodoxy could be construed as extortion. But if _being excluded from the coalition_ is the default, maybe toeing the line of orthodoxy is the price you need to pay in order to be included. 
+Which brings me to the second reason the naïve anti-extortion argument might fail: [what counts as "extortion" depends on the relevant "property rights", what the "default" action is](https://www.lesswrong.com/posts/Qjaaux3XnLBwomuNK/countess-and-baron-attempt-to-define-blackmail-fail). If having free speech is the default, being excluded from the dominant coalition for defying the orthodoxy could be construed as extortion. But if being excluded from the coalition is the default, maybe toeing the line of orthodoxy is the price you need to pay in order to be included. -Yudkowsky has [a proposal for how bargaining should work between agents with different notions of "fairness"](https://www.lesswrong.com/posts/z2YwmzuT7nWx62Kfh/cooperating-with-agents-with-different-ideas-of-fairness). +Yudkowsky has [a proposal for how bargaining should work between agents with different notions of "fairness"](https://www.lesswrong.com/posts/z2YwmzuT7nWx62Kfh/cooperating-with-agents-with-different-ideas-of-fairness). Suppose Gerald and Heather are splitting a pie, and if they can't initially agree on how to split it, they have to fight over it until they do agree, destroying some of the pie in the process. Gerald thinks the fair outcome is that they each get half the pie. Heather claims that she contributed more ingredients to the baking process and that it's therefore fair that she gets 75% of the pie, pledging to fight if offered anything less. -Suppose Edgar and Fiona are splitting a pie, and if they can't initially agree on how to split it, they have to fight over it until they do, destroying some of the pie in the process. Edgar thinks the fair outcome is that they each get half the pie. Fiona claims that she contributed more ingredients to the baking process and that it's therefore fair that she gets 75% of the pie, pledging to fight if offered anything less. 
+If Gerald were a causal decision theorist, he might agree to the 75/25 split, reasoning that 25% of the pie is better than fighting until the pie is destroyed. Yudkowsky argues that this is irrational: if Gerald is willing to agree to a 75/25 split, then Heather has no incentive not to adopt such a self-favoring definition of "fairness". (And _vice versa_ if Heather's concept of fairness is the "correct" one.)

-If Edgar were a causal decision theorist, he might agree to the 75/25 split, reasoning that 25% of the pie is better than fighting until the pie is destroyed. Yudkowsky argues that this is irrational: if Edgar is willing to agree to a 75/25 split, then Fiona has no incentive not to adopt such a self-favoring definition of "fairness". (And _vice versa_ if Fiona's concept of fairness is the "correct" one.)

+Instead, Yudkowsky argues, Gerald should behave so as to only do worse than the fair outcome if Heather also does worse: for example, by accepting a 48/32 split in Heather's favor (after 100−(32+48) = 20% of the pie has been destroyed by the costs of fighting) or a 42/18 split (where 40% of the pie has been destroyed). This isn't Pareto-optimal (it would be possible for both Gerald and Heather to get more pie by reaching an agreement with less fighting), but it's worth it to Gerald to burn some of Heather's utility fighting in order to resist being exploited by her, and at least it's better than the equilibrium where the entire pie gets destroyed (which is Nash because neither party can unilaterally stop fighting).

-Instead, Yudkowsky argues, Edgar should behave so as to only do worse than the fair outcome if Fiona _also_ does worse: for example, by accepting a 48/32 split (after 100−(32+48) = 20% of the pie has been destroyed by the costs of fighting) or an 42/18 split (where 40% of the pie has been destroyed).
This isn't Pareto-optimal (it would be possible for both Edgar and Fiona to get more pie by reaching an agreement with less fighting), but it's worth it to Edgar to burn some of Fiona's utility fighting in order to resist being exploited by her, and at least it's better than the equilibrium where the pie gets destroyed (which is Nash because neither party can unilaterally stop fighting).

+It seemed to me that in the contest over the pie of Society's shared map, the rationalist Caliphate was letting itself get exploited by the progressive Egregore, doing worse than the fair outcome without dealing any damage to the Egregore in return. Why?

-It seemed to me that in the contest over the pie of Society's shared map, the rationalist Caliphate was letting itself get exploited by the progressive Egregore, doing worse than the fair outcome without dealing any damage to the egregore in return. Why?

+[The logic of dump stats](/2023/Dec/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles/#dump-stats), presumably. Bargaining to get AI risk on the shared map—not even to get it taken seriously as we would count "taking it seriously", but just acknowledged at all—was hard enough. Trying to challenge the Egregore about an item that it actually cared about would trigger more fighting than we could afford.

-The logic of "dump stats", presumably. Bargaining to get AI risk on the shared map—not even to get it taken seriously as we would count "taking it seriously", but just acknowledged at all—was hard enough. Trying to challenge the Egregore about an item that it actually cared about would trigger more fighting than we could afford.

+In my illustrative story, if Gerald and Heather destroy the pie fighting, then neither of them gets any pie. But in more complicated scenarios (including the real world), there was no guarantee that non-Pareto Nash equilibria were equally bad for everyone.
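To make the acceptance rule concrete, here's a minimal sketch in code (my own construction, using the names and numbers from my illustrative story, not anything from Yudkowsky's post):

```python
def gerald_accepts(gerald_share: float, heather_share: float, fair: float = 50.0) -> bool:
    """Gerald's acceptance rule under the bargaining scheme described above:
    he only accepts doing worse than the fair outcome if Heather is also
    doing worse than it, so that she can't profit by inflating her claim.
    Shares are percentages of the original pie; anything missing from 100
    has been destroyed by the costs of fighting."""
    assert gerald_share + heather_share <= 100.0
    if gerald_share >= fair:
        return True  # Gerald is doing at least as well as the fair split
    return heather_share < fair  # worse-than-fair only if Heather is too

# Heather's self-favoring 75/25 demand: she'd profit from it, so reject.
print(gerald_accepts(25, 75))   # False
# The splits from the text, where both parties fall below the 50/50 fair point:
print(gerald_accepts(32, 48))   # True (20% of the pie destroyed)
print(gerald_accepts(18, 42))   # True (40% of the pie destroyed)
```

Under this rule, Heather's best response is to offer the fair 50/50 split, since any self-favoring "fairness" she proposes costs her pie too.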
-I told the illustration about splitting a pie as a symmetrical story: if Edgar and Fiona destroy the pie fighting, than neither of them get any pie. But in more complicated scenarios (including the real world), there was no guarantee that non-Pareto Nash equilibria were equally bad for everyone. - -I'd had a Twitter exchange with Yudkowsky in January 2020 that revealed some of his current-year thinking about Nash equilibria. I [had Tweeted](https://twitter.com/zackmdavis/status/1206718983115698176): +I had a Twitter exchange with Yudkowsky in January 2020 that revealed some of his current-year thinking about Nash equilibria. I [had Tweeted](https://twitter.com/zackmdavis/status/1206718983115698176): > 1940s war criminal defense: "I was only following orders!" > 2020s war criminal defense: "I was only participating in a bad Nash equilibrium that no single actor can defy unilaterally!" @@ -140,13 +138,9 @@ I pointed out the voting case as one where he seemed to be disagreeing with his "Improved model of the social climate where revolutions are much less startable or controllable by good actors," he said. "Having spent more time chewing on Nash equilibria, and realizing that the trap is _real_ and can't be defied away even if it's very unpleasant." -In response to Sarah Constantin mentioning that there was no personal cost to voting third-party, Yudkowsky [pointed out that](https://twitter.com/ESYudkowsky/status/1216809977144168448) the problem was the [third-party spoiler effect](https://en.wikipedia.org/wiki/Vote_splitting), not personal cost: "People who refused to vote for Hillary didn't pay the price, kids in cages did, but that still makes the action nonbest." 
- -(The cages in question—technically, chain-link fence enclosures—were [actually](https://www.usatoday.com/story/news/factcheck/2020/08/26/fact-check-obama-administration-built-migrant-cages-meme-true/3413683001/) [built](https://apnews.com/article/election-2020-democratic-national-convention-ap-fact-check-immigration-politics-2663c84832a13cdd7a8233becfc7a5f3) during the Obama administration, but that doesn't seem important.) - -I asked what was wrong with the disjunction from "Stop Voting for Nincompoops", where the earlier Yudkowsky had written that it's hard to see who should accept the argument to vote for the lesser of two evils, but refuse to accept the argument against voting because it won't make a difference. Unilaterally voting for Clinton doesn't save the kids! +I asked what was wrong with the disjunction from "Stop Voting for Nincompoops", where the earlier Yudkowsky had written that it's hard to see who should accept the argument to vote for the lesser of two evils, but refuse to accept the argument against voting because it won't make a difference. Unilaterally voting for Clinton wouldn't stop Trump. -"Vote when you're part of a decision-theoretic logical cohort large enough to change things, or when you're worried about your reputation and want to be honest about whether you voted," Yudkowsky replied. +"Vote when you're part of a decision-theoretic logical cohort large enough to change things, or when you're worried about your reputation and want to be honest about whether you voted," he replied. "How do I compute whether I'm in a large enough decision-theoretic cohort?" I asked. Did we know that, or was that still on the open problems list? 
@@ -154,7 +148,7 @@ Yudkowsky said that he [traded his vote for a Clinton swing state vote](https://

The reputational argument seems in line with Yudkowsky's [pathological obsession with not-technically-lying](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). People asking if you acted against Trump are looking for a signal of coalitional loyalty. By telling them he traded his vote, Yudkowsky can pass their test without lying.

-I guess that explains everything. He doesn't think he's part of a decision-theoretic logical cohort large enough to change things. He's not anticipating being asked in the future if he's acted against gender ideology. He's not worried about his reputation with people like me.

+I guess that explains everything. He doesn't think he's part of a decision-theoretic logical cohort large enough to change things. He's not anticipating being asked in the future if he's acted against gender ideology. He doesn't care about trashing his reputation with me, because I don't matter.

Curtis Yarvin [likes to compare](/2020/Aug/yarvin-on-less-wrong/) Yudkowsky to Sabbatai Zevi, the 17th-century Jewish religious leader purported to be the Messiah, who later [converted to Islam under coercion from the Ottomans](https://en.wikipedia.org/wiki/Sabbatai_Zevi#Conversion_to_Islam). "I know, without a shadow of a doubt, that in the same position, Eliezer Yudkowsky would also convert to Islam," said Yarvin.

@@ -166,33 +160,21 @@ If in the same position as Yudkowsky, would Sabbatai Zevi also declare that 30%

-----

-I like to imagine that they have a saying out of dath ilan: once is happenstance; twice is coincidence; _three times is hostile optimization_.

-I could forgive him for taking a shit on d4 of my chessboard (["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228)).
- -I could even forgive him for subsequently taking a shit on e4 of my chessboard (["you're not standing in defense of truth if you insist on a word [...]"](https://twitter.com/ESYudkowsky/status/1067198993485058048)) as long as he wiped most of the shit off afterwards (["you are being the bad guy if you try to shut down that conversation by saying that 'I can define the word "woman" any way I want'"](https://www.facebook.com/yudkowsky/posts/10158853851009228)), even though, really, I would have expected someone so smart to take a hint after the incident on d4. - -But if he's _then_ going to take a shit on c3 of my chessboard (["important things [...] would be all the things I've read [...] from human beings who are people—describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate", "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he"'"](https://www.facebook.com/yudkowsky/posts/10159421750419228)), the "playing on a different chessboard, no harm intended" excuse loses its credibility. The turd on c3 is a pretty big likelihood ratio! (That is, I'm more likely to observe a turd on c3 in worlds where Yudkowsky _is_ playing my chessboard and wants me to lose, than in world where he's playing on a different chessboard and just _happened_ to take a shit there, by coincidence.) 
-

-----

-

In June 2021, MIRI Executive Director Nate Soares [wrote a Twitter thread arguing that](https://twitter.com/So8res/status/1401670792409014273) "[t]he definitional gynmastics required to believe that dolphins aren't fish are staggering", which [Yudkowsky retweeted](https://archive.is/Ecsca).[^not-endorsements]

[^not-endorsements]: In general, retweets are not necessarily endorsements—sometimes people just want to draw attention to some content without further comment or implied approval—but I was inclined to read this instance as implying approval, partially because this doesn't seem like the kind of thing someone would retweet for attention-without-approval, and partially because of the working relationship between Soares and Yudkowsky.

-Soares's points seemed cribbed from part I of Scott Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/), which post I had just dedicated more than three years of my life to rebutting in [increasing](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) [technical](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [detail](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), specifically using dolphins as my central example—which Soares didn't necessarily have any reason to have known about, but Yudkowsky (who retweeted Soares) definitely did. (Soares's [reference to the Book of Jonah](https://twitter.com/So8res/status/1401670796997660675) made it seem particularly unlikely that he had invented the argument independently from Alexander.)
[One of the replies (which Soares Liked) pointed out the similar _Slate Star Codex_ article](https://twitter.com/max_sixty/status/1401688892940509185), [as did](https://twitter.com/NisanVile/status/1401684128450367489) [a couple of](https://twitter.com/roblogic_/status/1401699930293432321) quote-Tweet discussions. +Soares's points seemed cribbed from part I of Scott Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/). Soares's [reference to the Book of Jonah](https://twitter.com/So8res/status/1401670796997660675) made it seem particularly unlikely that he had invented the argument independently from Alexander. [One of the replies (which Soares Liked) pointed out the similar _Slate Star Codex_ article](https://twitter.com/max_sixty/status/1401688892940509185), [as did](https://twitter.com/NisanVile/status/1401684128450367489) [a couple of](https://twitter.com/roblogic_/status/1401699930293432321) quote-Tweet discussions. -The elephant in my brain took this as another occasion to _flip out_. I didn't immediately see anything for me to overtly object to in the thread itself—[I readily conceded that](https://twitter.com/zackmdavis/status/1402073131276066821) there was nothing necessarily wrong with wanting to use the symbol "fish" to refer to the cluster of similarities induced by convergent evolution to the acquatic habitat rather than the cluster of similarities induced by phylogenetic relatedness—but in the context of our subculture's history, I read this as Soares and Yudkowsky implicitly lending more legitimacy to "... Not Man for the Categories", which was hostile to my interests. Was I paranoid to read this as a potential [dogwhistle](https://en.wikipedia.org/wiki/Dog_whistle_(politics))? It just seemed implausible that Soares would be Tweeting that dolphins are fish in the counterfactual in which "... Not Man for the Categories" had never been published. 
+The elephant in my brain took this as another occasion to _flip out_. I didn't immediately see anything for me to overtly object to in the thread itself—[I readily conceded that](https://twitter.com/zackmdavis/status/1402073131276066821) there was nothing necessarily wrong with wanting to use the symbol "fish" to refer to the cluster of similarities induced by convergent evolution to the aquatic habitat rather than the cluster of similarities induced by phylogenetic relatedness—but in the context of our subculture's history, I read this as Soares and Yudkowsky implicitly lending more legitimacy to "... Not Man for the Categories", which post I had just dedicated more than three years of my life to rebutting in [increasing](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) [technical](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries) [detail](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception), specifically using dolphins as my central example—which Soares didn't necessarily have any reason to have known about, but Yudkowsky definitely did. Was I paranoid to read this as a potential [dogwhistle](https://en.wikipedia.org/wiki/Dog_whistle_(politics))? It just seemed implausible that Soares would be Tweeting that dolphins are fish in the counterfactual in which "... Not Man for the Categories" had never been published.

-After a little more thought, I decided that the thread _was_ overtly objectionable, and [quickly wrote up a reply on _Less Wrong_](https://www.lesswrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins): Soares wasn't merely advocating for a "swimmy animals" sense of the word _fish_ to become more accepted usage, but specifically deriding phylogenetic definitions as unmotivated for everyday use ("definitional gynmastics [_sic_]"!), and _that_ was wrong.
It's true that most language users don't directly care about evolutionary relatedness, but [words aren't identical with their definitions](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels). Genetics is at the root of the causal graph underlying all other features of an organism; creatures that are more closely evolutionarily related are more similar in general. Classifying things by evolutionary lineage isn't an arbitrary æsthetic whim by people who care about geneology for no reason. We need the natural category of "mammals (including marine mammals)" to make sense of how dolphins are warm-blooded, breathe air, and nurse their live-born young, and the natural category of "finned cold-blooded vertebrate gill-breathing swimmy animals (which excludes marine mammals)" is also something that it's reasonable to have a word for.

+After a little more thought, I decided that Soares's thread _was_ overtly objectionable, and [quickly wrote up a reply on _Less Wrong_](https://www.lesswrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins): Soares wasn't merely advocating for a "swimmy animals" sense of the word _fish_ to become more accepted usage, but specifically deriding phylogenetic definitions as unmotivated for everyday use ("definitional gynmastics [_sic_]"), and _that_ was wrong. It's true that most language users don't directly care about evolutionary relatedness, but [words aren't identical with their definitions](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels). Genetics is at the root of the causal graph underlying all other features of an organism; creatures that are more closely evolutionarily related are more similar in general. Classifying things by evolutionary lineage isn't an arbitrary æsthetic whim by people who care about genealogy for no reason.
We need the natural category of "mammals (including marine mammals)" to make sense of how dolphins are warm-blooded, breathe air, and nurse their live-born young, and the natural category of "finned cold-blooded vertebrate gill-breathing swimmy animals (which excludes marine mammals)" is also something that it's reasonable to have a word for. (Somehow, it felt appropriate to use a quote from Arthur Jensen's ["How Much Can We Boost IQ and Scholastic Achievement?"](https://en.wikipedia.org/wiki/How_Much_Can_We_Boost_IQ_and_Scholastic_Achievement%3F) as an epigraph.) On [Twitter](https://twitter.com/So8res/status/1402888263593959433) Soares conceded my main points, but said that the tone, and the [epistemic-status followup thread](https://twitter.com/So8res/status/1401761124429701121), were intended to indicate that the original thread was "largely in jest"—"shitposting"—but that he was "open to arguments that [he was] making a mistake here." -I didn't take that too well, and threw an eleven-Tweet tantrum. I somewhat regret this. My social behavior during this entire episode was histrionic, and I probably could have gotten an equal-or-better outcome if I had kept my cool. The reason I didn't want to keep my cool was because after years of fighting this Category War, MIRI doubling down on "dolphins are fish" felt like a gratuitous insult. I was used to "rationalist" leaders ever-so-humbly claiming to be open to arguments that they were making a mistake, but I couldn't take such assurances seriously if they were going to keep sending PageRank-like credibility to "... Not Man for the Categories". 
- -Soares wrote a longer comment on _Less Wrong_ the next morning, and I [pointed out that](https://www.greaterwrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins/comment/BBtSuWcdaFyvgddE4) Soares's followup thread had lamented ["the fact that nobody's read A Human's Guide to Words or w/​e"](https://twitter.com/So8res/status/1401761130041659395), but—with respect—he wasn't behaving like _he_ had read it. Specifically, [#30](https://www.greaterwrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) on the list of ["37 Ways Words Can Be Wrong"](https://www.greaterwrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) had characterized the position that dolphins are fish as "playing nitwit games". This didn't seem controversial at the time in 2008. +I didn't take that too well, and threw an eleven-Tweet tantrum. Soares wrote a longer comment on _Less Wrong_ the next morning, and I [pointed out that](https://www.greaterwrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins/comment/BBtSuWcdaFyvgddE4) Soares's followup thread had lamented ["the fact that nobody's read A Human's Guide to Words or w/​e"](https://twitter.com/So8res/status/1401761130041659395), but—with respect—he wasn't behaving like _he_ had read it. Specifically, [#30](https://www.greaterwrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) on the list of ["37 Ways Words Can Be Wrong"](https://www.greaterwrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) had characterized the position that dolphins are fish as "playing nitwit games". This didn't seem controversial in 2008. 
And yet it would seem that sometime between 2008 and the current year, the "rationalist" party line (as observed in the public statements of SingInst/​MIRI leadership) on whether dolphins are fish shifted from (my paraphrases) "No; _despite_ the surface similarities, that categorization doesn't carve reality at the joints; stop playing nitwit games" to "Yes, _because_ of the surface similarities; those who contend otherwise are the ones playing nitwit games." A complete 180° reversal, on this specific example! Why? What changed? @@ -202,13 +184,15 @@ But when people change their mind due to new arguments, you'd expect them to ack Soares wrote [a comment explaining](https://www.greaterwrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins/comment/HwSkiN62QeuEtGWpN) why he didn't think it was such a large reversal. I [started drafting a counterreply](/ancillary/dolphin-war/), but decided that it would need to become a full post on the timescale of days or weeks, partially because I needed to think through how to reply to Soares about paraphyletic groups, and partially because the way the associated Twitter discussion had gone (including some tussling with Yudkowsky) made me want to modulate my tone. (I noted that I had probably lost some in-group credibility in the Twitter fight, but the information gained seemed more valuable. Losing in-group credibility didn't hurt so much when I didn't respect the group anymore.) -I was feeling some subjective time pressure on my reply, and in the meantime, I ended up adding [a histrionic comment](https://www.greaterwrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins/comment/rMHcWfqkH89LWt4y9) to the _Less Wrong_ thread taking issue with Soares's still-flippant tone. That was a terrible performance on my part. (It got downvoted to oblivion, and I deserved it.) 
+Subjectively, I was feeling time pressure on my reply, and in the meantime, I ended up adding [a huffy comment](https://www.greaterwrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins/comment/rMHcWfqkH89LWt4y9) to the _Less Wrong_ thread taking issue with Soares's still-flippant tone. That was a terrible performance on my part. It got downvoted to oblivion, and I deserved it. + +In general, my social behavior during this entire episode was histrionic, and I probably could have gotten an equal-or-better outcome if I had kept my cool. The reason I didn't feel like keeping my cool was because after years of fighting this Category War, MIRI doubling down on "dolphins are fish" felt like a gratuitous insult. I was used to "rationalists" ever-so-humbly claiming to be open to arguments that they were making a mistake, but I couldn't take such assurances seriously if they were going to keep sending PageRank-like credibility to "... Not Man for the Categories". Soares [wrote that](https://www.greaterwrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins/comment/8nmjnrm4cwgCCyYrG) I was persistently mis-modeling his intentions, that I seemed to be making a plea for deference that he rejected. -I don't think I wanted deference. I write these thousands of words in the hopes that people will read my arguments and think it through for themselves; I would never expect anyone to take my word for the conclusion. What I was hoping for was a fair hearing, and by that point, I had lost hope of getting one. +I don't think I wanted deference, though. I write these thousands of words in the hopes that people will read my arguments and think it through for themselves; I would never expect anyone to take my word for the conclusion. What I was hoping for was a fair hearing, and by that point, I had lost hope of getting one. 
-As for my skill at modeling intent, I think it's less relevant than Soares seemed to think (if I don't err in attributing to him the belief that modeling intent is important). I believe Soares's self-report that he wasn't trying to make a coded statement about gender; my initial impression otherwise _was_ miscalibrated. (As Soares pointed out, his "dolphins are fish" position could be given an "anti-trans" interpretation, too, in the form of "you intellectuals get your hands off my intuitive concepts". The association between "dolphins are fish" and "trans women are women" ran through their conjunction in Alexander's "... Not Man for the Categories", rather than being intrinsic to the beliefs themselves.) +As for my skill at modeling intent, I think it's less relevant than Soares seemed to think. I believe his self-report that he wasn't trying to make a coded statement about gender; my initial impression otherwise _was_ miscalibrated. (As he pointed out, his "dolphins are fish" position could be given an "anti-trans" interpretation, too, in the form of "you intellectuals get your hands off my intuitive concepts". The association between "dolphins are fish" and "trans women are women" ran through their conjunction in Alexander's "... Not Man for the Categories", rather than being intrinsic to the beliefs themselves.) The thing is, I was _right_ to notice the similarity between Soares's argument and "... Not Man for the Categories." Soares's [own account](https://www.greaterwrong.com/posts/aJnaMv8pFQAfi9jBm/reply-to-nate-soares-on-dolphins/comment/HwSkiN62QeuEtGWpN) agreed that there was a causal influence. Okay, so _Nate_ wasn't trying to play gender politics; Scott just alerted him to the idea that people didn't used to be interested in drawing their categories around phylogenetics, and Nate ran with that thought. @@ -216,19 +200,17 @@ So where did _Scott_ get it from? I think he pulled it out of his ass because it was politically convenient. 
I think if you asked Scott Alexander whether dolphins are fish in 2012, he would have said, "No, they're mammals," like any other educated adult.

-In a world where the clock of "political time" had run a little bit slower, such that the fight for gay marriage had taken longer [such that the progressive _zeitgeist_ hadn't pivoted to trans as the new cause _du jour_](/2019/Aug/the-social-construction-of-reality-and-the-sheer-goddamned-pointlessness-of-reason/), I don't think Alexander would have had the occasion to write "... Not Man for the Categories." And in that world, I don't think "Dolphins are fish, fight me" or "Acknowledge that all of our categories are weird and a little arbitrary" would have become _memes_ in our subculture.
+In a world where the clock of "political time" had run a little bit slower, such that the fight for gay marriage had taken longer and [the progressive _zeitgeist_ hadn't pivoted to trans as the new cause _du jour_](/2019/Aug/the-social-construction-of-reality-and-the-sheer-goddamned-pointlessness-of-reason/), I don't think Alexander would have had the occasion to write "... Not Man for the Categories." And in that world, I don't think "Dolphins are fish, fight me" or "Acknowledge that all of our categories are weird and a little arbitrary" would have become memes in our subculture.

This case is like [radiocontrast dye](https://en.wikipedia.org/wiki/Radiocontrast_agent) for [dark side epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology). Because Scott Alexander won [the talent lottery](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) and writes faster than everyone else, he has the power to _sneeze his mistakes_ onto everyone who trusts Scott to have done his homework, even when he obviously hasn't.

-[No one can think fast enough to think all their own thoughts.](https://www.lesswrong.com/posts/2MD3NMLBPCqPfnfre/cached-thoughts), but you would hope for an intellectual community that can do error-correction, rather than copying smart people's views including mistakes?
+[No one can think fast enough to think all their own thoughts](https://www.lesswrong.com/posts/2MD3NMLBPCqPfnfre/cached-thoughts), but you would hope for an intellectual community that can do error-correction, such that collective belief trends toward truth as [the signal of good arguments rises above the noise](https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/), rather than being copied from celebrity leaders (including the mistakes).

-To be sure, it's true that there's a cluster of similarities induced by adaptations to the acquatic environment. It's reasonable to want to talk about that subspace. But it doesn't follow that phylogenetics is irrelevant.
+It's true that there's a cluster of similarities induced by adaptations to the aquatic environment. It's reasonable to want to talk about that subspace. But it doesn't follow that phylogenetics is irrelevant. Genetics being at the root of the causal graph induces the kind of conditional independence relationships that make "categories" a useful AI trick.

-Genetics is at the root of the causal graph of all other traits of an organism, which induces the kind of conditional independence relationships that make "categories" a useful AI trick.
+But in a world where more people are reading "... Not Man for the Categories" than ["Mutual Information, and Density in Thingspace"](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace), and even the people who have read "Density in Thingspace" (once, ten years ago) are having most of their conversations with people who only read "... Not Man for the Categories"—what happens is that you end up with a so-called "rationalist" culture that completely forgot the hidden-Bayesian-structure-of-cognition/carve-reality-at-the-joints skill. People only remember the subset of "A Human's Guide to Words" that's useful for believing whatever you want (cherry-picking which features to include in category Y so that your favorite "X is a Y" sentence looks "true", which is easy for intricate, high-dimensional things like biological creatures, with so many similarities to choose from), rather than the part about the conditional independence structure in the environment.

-But in a world where more people are reading "... Not Man for the Categories" than ["Mutual Information, and Density in Thingspace"](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace), and even the people who have read "Density in Thingspace" (once, ten years ago) are having most of their conversations with people who only read "... Not Man for the Categories"—what happens is that you end up with a so-called "rationalist" culture that completely forgot the hidden-Bayesian-structure-of-cognition/carve-reality-at-the-joints skill! People only remember the specific subset of "A Human's Guide to Words" that's useful for believing whatever you want (by cherry-picking the features you need to include in category Y to make your favorite "X is a Y" sentence look "true", which is easy for intricate high-dimensional things like biological creatures that have a lot of similarities to cherry-pick from), rather than the part about the conditional independence structure in the environment.
-
-After I cooled down, I did eventually write up the explanation for why paraphyletic categories are okay, in ["Blood Is Thicker Than Water"](https://www.lesswrong.com/posts/vhp2sW6iBhNJwqcwP/blood-is-thicker-than-water). But I'm not sure that anyone cared.
+After I cooled down, I did eventually write up the explanation for why paraphyletic categories are fine, in ["Blood Is Thicker Than Water"](https://www.lesswrong.com/posts/vhp2sW6iBhNJwqcwP/blood-is-thicker-than-water). But I'm not sure that anyone cared. -------- @@ -242,7 +224,7 @@ It wouldn't be so bad if Yudkowsky weren't trying to sell himself as a _de facto [^religious-leader]: "Religious leader" continues to seem like an apt sociological description, even if [no supernatural claims are being made](https://www.lesswrong.com/posts/u6JzcFtPGiznFgDxP/excluding-the-supernatural). -But he does seem to actively encourage this conflation. Contrast the [Litany Against Gurus](https://www.lesswrong.com/posts/t6Fe2PsEwb3HhcBEr/the-litany-against-gurus) from the Sequences, to the way he sneers at "post-rationalists"—or even "Earthlings" in general (in contrast to his fictional world of dath ilan). The framing is optimized to delegitimize dissent. [Motte](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/): someone who's critical of central "rationalists" like Yudkowsky or Alexander; bailey: someone who's moved beyond reason itself. +But he does seem to actively encourage this conflation. Contrast the ["Litany Against Gurus"](https://www.lesswrong.com/posts/t6Fe2PsEwb3HhcBEr/the-litany-against-gurus) from the Sequences, to the way he sneers at "post-rationalists"—or even "Earthlings" in general (in contrast to his fictional world of dath ilan). The framing is optimized to delegitimize dissent. [Motte](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/): someone who's critical of central "rationalists" like Yudkowsky or Alexander; bailey: someone who's moved beyond reason itself. One example that made me furious came in September 2021. 
Yudkowsky, replying to Scott Alexander on Twitter, [wrote](https://twitter.com/ESYudkowsky/status/1434906470248636419):

@@ -252,7 +234,7 @@ I understand, of course, that it was meant as humorous exaggeration. But I think

[^years-of-my-life]: I started outlining ["The Categories Were Made for Man to Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) in January 2018. I would finally finish ["Blood Is Thicker Than Water"](https://www.lesswrong.com/posts/vhp2sW6iBhNJwqcwP/blood-is-thicker-than-water), following up on the "dolphins are fish" claim, later that same month (September 2021).

-Or [as Yudkowsky had once put it](https://www.facebook.com/yudkowsky/posts/10154981483669228)—
+Or [as Yudkowsky had once put it](https://www.facebook.com/yudkowsky/posts/10154981483669228):

> I know that it's a bad sign to worry about which jokes other people find funny. But you can laugh at jokes about Jews arguing with each other, and laugh at jokes about Jews secretly being in charge of the world, and not laugh at jokes about Jews cheating their customers. Jokes do reveal conceptual links and some conceptual links are more problematic than others.

@@ -260,31 +242,17 @@ It's totally understandable to not want to get involved in a political scuffle b

An analogy: racist jokes are also just jokes. Alice says, "What's the difference between a black dad and a boomerang? A boomerang comes back." Bob says, "That's super racist! Tons of African-American fathers are devoted parents!!" Alice says, "Chill out, it was just a joke." In a way, Alice is right. It was just a joke; no sane person could think that Alice was literally claiming that all black men are deadbeat dads. But the joke only makes sense in the first place in the context of a culture where the black-father-abandonment stereotype is operative.
If you thought the stereotype was false, or if you were worried about it being a self-fulfilling prophecy, you would find it tempting to be a humorless scold and get angry at the joke-teller.

-Similarly, the "Caliphate" humor only makes sense in the first place in the context of a celebrity culture where deferring to Yudkowsky and Alexander is expected behavior. (In a way that deferring to Julia Galef or John S. Wentworth is not expected behavior, even if Galef and Wentworth also have a track record as good thinkers.) I think this culture is bad. _Nullius in verba_.
+Similarly, the "Caliphate" humor only makes sense in the first place in the context of a celebrity culture where deferring to Yudkowsky and Alexander is expected behavior, in a way that deferring to Julia Galef or John S. Wentworth is not.

-I don't think the motte-and-bailey concern is hypothetical, either. When I [indignantly protested](https://twitter.com/zackmdavis/status/1435059595228053505) the "we're both always right" remark, one David Xu [commented](https://twitter.com/davidxu90/status/1435106339550740482): "speaking as someone who's read and enjoyed your LW content, I do hope this isn't a sign that you're going full post-rat"—as if my criticism of Yudkowsky's self-serving bluster itself marked me as siding with the "post-rats"!
+I don't think the motte-and-bailey concern is hypothetical. When I [indignantly protested](https://twitter.com/zackmdavis/status/1435059595228053505) the "we're both always right" remark, one David Xu [commented](https://twitter.com/davidxu90/status/1435106339550740482): "speaking as someone who's read and enjoyed your LW content, I do hope this isn't a sign that you're going full post-rat"—as if my criticism of Yudkowsky's self-serving bluster itself marked me as siding with the "post-rats"!

I once wrote [a post whimsically suggesting that trans women should owe cis women royalties](/2019/Dec/comp/) for copying the female form (as "intellectual property"). In response to a reader who got offended, I [ended up adding](/source?p=Ultimately_Untrue_Thought.git;a=commitdiff;h=03468d274f5) an "epistemic status" line to clarify that it was not a serious proposal. -But if knowing it was a joke partially mollifies the offended reader who thought I might have been serious, I don't think they should be _completely_ mollified, because the joke (while a joke) reflects something about my thinking when I'm being serious: I don't think sex-based collective rights are inherently a suspect idea; I think _something of value has been lost_ when women who want female-only spaces can't have them, and the joke reflects the conceptual link between the idea that something of value has been lost, and the idea that people who have lost something of value are entitled to compensation. - -At "Arcadia"'s 2022 [Smallpox Eradication Day](https://twitter.com/KelseyTuoc/status/1391248651167494146) party, I remember overhearing[^overhearing] Yudkowsky saying that OpenAI should have used GPT-3 to mass-promote the Moderna COVID-19 vaccine to Republicans and the Pfizer vaccine to Democrats (or vice versa), thereby harnessing the forces of tribalism in the service of public health. - -[^overhearing]: I claim that conversations at a party with lots of people are not protected by privacy norms; if I heard it, several other people heard it; no one had a reasonable expectation that I shouldn't blog about it. - -I assume this was not a serious proposal. Knowing it was a joke partially mollifies what offense I would have taken if I thought he might have been serious. 
But I don't think I should be completely mollified, because I think I think the joke (while a joke) reflects something about Yudkowsky's thinking when he's being serious: that he apparently doesn't think corupting Society's shared maps for utilitarian ends is inherently a suspect idea; he doesn't think truthseeking public discourse is a thing in our world, and the joke reflects the conceptual link between the idea that public discourse isn't a thing, and the idea that a public that can't reason needs to be manipulated by elites into doing good things rather than bad things. - -My favorite Ben Hoffman post is ["The Humility Argument for Honesty"](http://benjaminrosshoffman.com/humility-argument-honesty/). It's sometimes argued the main reason to be honest is in order to be trusted by others. (As it is written, ["[o]nce someone is known to be a liar, you might as well listen to the whistling of the wind."](https://www.lesswrong.com/posts/K2c3dkKErsqFd28Dh/prices-or-bindings).) Hoffman points out another reason: we should be honest because others will make better decisions if we give them the best information available, rather than worse information that we chose to present in order to manipulate their behavior. If you want your doctor to prescribe you a particular medication, you might be able to arrange that by looking up the symptoms of an appropriate ailment on WebMD, and reporting those to the doctor. But if you report your _actual_ symptoms, the doctor can combine that information with their own expertise to recommend a better treatment. - -If you _just_ want the public to get vaccinated, I can believe that the Pfizer/Democrats _vs._ Moderna/Republicans propaganda gambit would work. You could even do it without telling any explicit lies, by selectively citing the either the protection or side-effect statistics for each vaccine depending on whom you were talking to. One might ask: if you're not _lying_, what's the problem? 
+ But if knowing it was a joke partially mollifies the offended reader who thought I might have been serious, I don't think they should be completely mollified, because the joke (while a joke) reflects something about my thinking when I'm being serious: I don't think sex-based collective rights are inherently a crazy idea; I think something of value has been lost when women who want female-only spaces can't have them, and the joke reflects the conceptual link between the idea that something of value has been lost, and the idea that people who have lost something of value are entitled to compensation. -The _problem_ is that manipulating people into doing what you want subject to the genre constraint of not telling any explicit lies, isn't the same thing as informing people so that they can make sensible decisions. In reality, both mRNA vaccines are very similar! It would be surprising if the one associated with my political faction happened to be good, whereas the one associated with the other faction happened to be bad. Someone who tried to convince me that Pfizer was good and Moderna was bad would be misinforming me—trying to trap me in a false reality, a world that doesn't quite make sense—with [unforseeable consequences](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) for the rest of my decisionmaking. As someone with an interest in living in a world that makes sense, I have reason to regard this as _hostile action_, even if the false reality and the true reality both recommend the isolated point decision of getting vaccinated. -(The authors of the [HEXACO personality model](https://en.wikipedia.org/wiki/HEXACO_model_of_personality_structure) may have gotten something importantly right in [grouping "honesty" and "humility" as a single factor](https://en.wikipedia.org/wiki/Honesty-humility_factor_of_the_HEXACO_model_of_personality).) -I'm not, overall, satisfied with the political impact of my writing on this blog. 
One could imagine someone who shared Yudkowsky's apparent disbelief in public reason advising me that my practice of carefully explaining at length what I believe and why, has been an ineffective strategy—that I should instead clarify to myself what policy goal I'm trying to acheive, and try to figure out some clever gambit to play trans activists and gender-critical feminists against each other in a way that advances my agenda. -From my perspective, such advice would be missing the point. [I'm not trying to force though some particular policy.](/2021/Sep/i-dont-do-policy/) Rather, I think I _know some things_ about the world, things I wish I had someone had told me earlier. So I'm trying to tell others, to help them live in _a world that makes sense_. ------ diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md index 51cee3c..34020b5 100644 --- a/notes/memoir-sections.md +++ b/notes/memoir-sections.md @@ -1,4 +1,4 @@ -first edit pass bookmark: (top of pt. 5) +first edit pass bookmark: "I got a chance to talk to" pt. 3 edit tier— _ footnote on the bad-faith condition on "My Price for Joining" @@ -23,6 +23,7 @@ _ try to clarify Abram's categories view (Michael didn't get it) (but it still s _ explicitly mention http://benjaminrosshoffman.com/bad-faith-behavior-not-feeling/ _ meeting with Ray (maybe?) _ friends with someone on an animal level, like with a dog +_ "Density in Thingspace" comment (maybe a footnote in the § explaining the background to "Unnatural Categories") pt. 4 edit tier— _ body odors comment @@ -33,11 +34,17 @@ _ if you only say good things about Republican candidates _ to-be-continued ending about how being a fraud might be a good idea _ cite more sneers; use a footnote to pack in as many as possible _ Litany Against Gurus, not sure humans can think and trust at the same time; High Status and Stupidity +_ honesty and humility, HEXACO pt. 
5 edit tier— _ Previously-on summary +_ graf about Christiano could use a rewrite +_ Dolphin War: after "never been published", insert "still citing it" graf +_ Dolphin War: simplify sentence structure around "by cherry-picking the features" _ quote specific exchange where I mentioned 10,000 words of philosophy that Scott was wrong—obviously the wrong play -_ "as Soares pointed out" needs link +_ Meghan Murphy got it down to four words +_ Dolphin War needs more Twitter links: "as Soares pointed out" needs link, "threw an eleven-Tweet tantrum" (probably screenshot), tussling +_ end of voting conversation needs links _ can I rewrite to not bury the lede on "intent doesn't matter"? _ also reference "No such thing as a tree" in Dolphin War section _ better brief explanation of dark side epistemology @@ -61,7 +68,7 @@ _ the hill he wants to die on _ humans have honor instead of TDT. "That's right! I'm appealing to your honor!" _ Leeroy Jenkins Option _ historical non-robot-cult rationality wisdom -_ Meghan Murphy got it down to four words +_ work in the "some clever gambit to play trans activists and gender-critical feminists against each other" things to discuss with Michael/Ben/Jessica— _ Anna on Paul Graham @@ -169,7 +176,6 @@ _ Caliphate / craft and the community _ colony ship happiness lie in https://www.lesswrong.com/posts/AWaJvBMb9HGBwtNqd/qualitative-strategies-of-friendliness _ re being fair to abusers: thinking about what behavior pattern you would like to see, generally, by people in your situation, instead of privileging the perspective and feelings of people who've made themselves vulnerable to you by transgressing against you _ worry about hyperbole/jumps-to-evaluation; it destroys credibility -_ "Density in Thingspace" comment _ Christmas with Scott: mention the destruction of "voluntary"? _ Christmas with Scott: mention the grid of points? 
_ dath ilan as a whining-based community @@ -2803,3 +2809,33 @@ https://hpmor.com/chapter/97 https://www.greaterwrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards/comment/TcsXh44pB9xRziGgt > A bunch of people we respected and worked with had decided the world was going to end, very soon, uncomfortably soon, and they were making it extremely difficult for us to check their work. + +------- + +I like to imagine that they have a saying out of dath ilan: once is happenstance; twice is coincidence; _three times is hostile optimization_. + +I could forgive him for taking a shit on d4 of my chessboard (["at least 20% of the ones with penises are actually women"](https://www.facebook.com/yudkowsky/posts/10154078468809228)). + +I could even forgive him for subsequently taking a shit on e4 of my chessboard (["you're not standing in defense of truth if you insist on a word [...]"](https://twitter.com/ESYudkowsky/status/1067198993485058048)) as long as he wiped most of the shit off afterwards (["you are being the bad guy if you try to shut down that conversation by saying that 'I can define the word "woman" any way I want'"](https://www.facebook.com/yudkowsky/posts/10158853851009228)), even though, really, I would have expected someone so smart to take a hint after the incident on d4. + +But if he's _then_ going to take a shit on c3 of my chessboard (["important things [...] would be all the things I've read [...] from human beings who are people—describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate", "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he"'"](https://www.facebook.com/yudkowsky/posts/10159421750419228)), the "playing on a different chessboard, no harm intended" excuse loses its credibility. The turd on c3 is a pretty big likelihood ratio! 
(That is, I'm more likely to observe a turd on c3 in worlds where Yudkowsky _is_ playing my chessboard and wants me to lose, than in worlds where he's playing on a different chessboard and just _happened_ to take a shit there, by coincidence.)
+
+(The authors of the [HEXACO personality model](https://en.wikipedia.org/wiki/HEXACO_model_of_personality_structure) may have gotten something importantly right in [grouping "honesty" and "humility" as a single factor](https://en.wikipedia.org/wiki/Honesty-humility_factor_of_the_HEXACO_model_of_personality).)
+
+------
+
+At "Arcadia"'s 2022 [Smallpox Eradication Day](https://twitter.com/KelseyTuoc/status/1391248651167494146) party, I remember overhearing[^overhearing] Yudkowsky saying that OpenAI should have used GPT-3 to mass-promote the Moderna COVID-19 vaccine to Republicans and the Pfizer vaccine to Democrats (or vice versa), thereby harnessing the forces of tribalism in the service of public health.
+
+[^overhearing]: I claim that conversations at a party with lots of people are not protected by privacy norms; if I heard it, several other people heard it; no one had a reasonable expectation that I shouldn't blog about it.
+
+I assume this was not a serious proposal. Knowing it was a joke partially mollifies what offense I would have taken if I thought he might have been serious. But I don't think I should be completely mollified, because I think the joke (while a joke) reflects something about Yudkowsky's thinking when he's being serious: that he apparently doesn't think corrupting Society's shared maps for utilitarian ends is inherently a suspect idea; he doesn't think truthseeking public discourse is a thing in our world, and the joke reflects the conceptual link between the idea that public discourse isn't a thing, and the idea that a public that can't reason needs to be manipulated by elites into doing good things rather than bad things.
+
+My favorite Ben Hoffman post is ["The Humility Argument for Honesty"](http://benjaminrosshoffman.com/humility-argument-honesty/). It's sometimes argued that the main reason to be honest is in order to be trusted by others. (As it is written, ["[o]nce someone is known to be a liar, you might as well listen to the whistling of the wind."](https://www.lesswrong.com/posts/K2c3dkKErsqFd28Dh/prices-or-bindings).) Hoffman points out another reason: we should be honest because others will make better decisions if we give them the best information available, rather than worse information that we chose to present in order to manipulate their behavior. If you want your doctor to prescribe you a particular medication, you might be able to arrange that by looking up the symptoms of an appropriate ailment on WebMD, and reporting those to the doctor. But if you report your _actual_ symptoms, the doctor can combine that information with their own expertise to recommend a better treatment.
+
+If you _just_ want the public to get vaccinated, I can believe that the Pfizer/Democrats _vs._ Moderna/Republicans propaganda gambit would work. You could even do it without telling any explicit lies, by selectively citing either the protection or side-effect statistics for each vaccine depending on whom you were talking to. One might ask: if you're not _lying_, what's the problem?
+
+The _problem_ is that manipulating people into doing what you want subject to the genre constraint of not telling any explicit lies, isn't the same thing as informing people so that they can make sensible decisions. In reality, both mRNA vaccines are very similar! It would be surprising if the one associated with my political faction happened to be good, whereas the one associated with the other faction happened to be bad.
Someone who tried to convince me that Pfizer was good and Moderna was bad would be misinforming me—trying to trap me in a false reality, a world that doesn't quite make sense—with [unforeseeable consequences](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) for the rest of my decisionmaking. As someone with an interest in living in a world that makes sense, I have reason to regard this as _hostile action_, even if the false reality and the true reality both recommend the isolated point decision of getting vaccinated.
+
+I'm not, overall, satisfied with the political impact of my writing on this blog. One could imagine someone who shared Yudkowsky's apparent disbelief in public reason advising me that my practice of carefully explaining at length what I believe and why has been an ineffective strategy—that I should instead clarify to myself what policy goal I'm trying to achieve, and try to figure out some clever gambit to play trans activists and gender-critical feminists against each other in a way that advances my agenda.
+
+From my perspective, such advice would be missing the point. [I'm not trying to force through some particular policy.](/2021/Sep/i-dont-do-policy/) Rather, I think I know some things about the world, things I wish someone had told me earlier. So I'm trying to tell others, to help them live in a world that makes sense.
diff --git a/notes/memoir_wordcounts.csv b/notes/memoir_wordcounts.csv
index a6ef84a..2065f80 100644
--- a/notes/memoir_wordcounts.csv
+++ b/notes/memoir_wordcounts.csv
@@ -563,4 +563,5 @@
 10/31/2023,119625,-187
 11/01/2023,119647,22
 11/02/2023,119737,90
-11/03/2023,,
+11/03/2023,118609,-1128
+11/04/2023,,
-- 
2.17.1