From: Zack M. Davis
Date: Mon, 25 Dec 2023 22:48:23 +0000 (-0800)
Subject: memoir: provisional pt. 5 § headers
X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=34ccbce50888fc9da854dcef02571aef39cb7b4a;p=Ultimately_Untrue_Thought.git

memoir: provisional pt. 5 § headers
---

diff --git a/content/drafts/guess-ill-die.md b/content/drafts/guess-ill-die.md
index 53abc85..0ff8398 100644
--- a/content/drafts/guess-ill-die.md
+++ b/content/drafts/guess-ill-die.md
@@ -13,6 +13,10 @@ In a previous post, ["Agreeing With Stalin in Ways that Exhibit Generally Ration
 But I would be remiss to condemn Yudkowsky without discussing potentially mitigating factors. (I don't want to say that whether someone is a fraud should depend on whether there are mitigating factors—rather, I should discuss potential reasons why being a fraud might be the least-bad choice, when faced with a sufficiently desperate situation.)
 
+[TOC]
+
+### Short Timelines _vs._ Raising the Sanity Waterline [working § title]
+
 So far, I've been writing from the perspective of caring (and expecting Yudkowsky to care) about human rationality as a cause in its own right—about wanting to make sense, and wanting to live in a Society that made sense, for its own sake, not as a convergently instrumental subgoal of saving the world. That's pretty much always where I've been at. I _never_ wanted to save the world. I got sucked into this robot cult because Yudkowsky's philosophy-of-science blogging was just that good. I did do a little bit of work for the Singularity Institute back in the day (a "we don't pay you, but you can sleep in the garage" internship in 2009, some data-entry-like work manually adding Previous/Next links to the Sequences, designing several PowerPoint presentations for Anna Salamon, writing some Python scripts to organize their donor database), but that was because it was my social tribe and I had connections.
 
 To the extent that I took at all seriously the whole save/destroy/take-over the world part (about how we needed to encode all of human morality into a recursively self-improving artificial intelligence to determine our entire future light cone until the end of time), I was scared rather than enthusiastic.
 
@@ -53,6 +57,8 @@ But if you think the only hope for there _being_ a future flows through maintain
 (I remarked to "Thomas" in mid-2022 that DeepMind [changing its Twitter avatar to a rainbow variant of their logo for Pride month](https://web.archive.org/web/20220607123748/https://twitter.com/DeepMind) was a bad sign.)
 
+### Perhaps, if the World Were at Stake
+
 So isn't there a story here where I'm the villain, willfully damaging humanity's chances of survival by picking unimportant culture-war fights in the existential-risk-reduction social sphere, when _I know_ that the sphere needs to keep its nose clean in the eyes of the progressive egregore? _That's_ why Yudkowsky said the arguably-technically-misleading things he said about my Something to Protect: he had to, to keep our collective nose clean. The people paying attention to contemporary politics don't know what I know, and can't usefully be told. Isn't it better for humanity if my meager talents are allocated to making AI go well? Don't I have a responsibility to fall in line and take one for the team—if the world is at stake? As usual, the Yudkowsky of 2009 has me covered.
 
 In his short story ["The Sword of Good"](https://www.yudkowsky.net/other/fiction/the-sword-of-good), our protagonist Hirou wonders why the powerful wizard Dolf lets other party members risk themselves fighting, when Dolf could have protected them:
@@ -85,7 +91,7 @@ Similarly, I think Yudkowsky should stop pretending to be our rationality teache
 I think it's significant that you don't see me picking fights with—say, Paul Christiano, because Paul Christiano doesn't repeatedly take a shit on my Something to Protect, because Paul Christiano isn't trying to be a religious leader. If Paul Christiano has opinions about transgenderism, we don't know about them. If we knew about them and they were correct, I would upvote them, and if we knew about them and they were incorrect, I would criticize them, but in either case, Christiano would not try to cultivate the impression that anyone who disagrees with him is insane. That's not his bag.
 
-------
+### Decision Theory of Political Censorship
 
 Yudkowsky's political cowardice is arguably puzzling in light of his timeless decision theory's recommendations against giving in to extortion.
@@ -160,7 +166,7 @@ So to me, the more damning question is this—
 If in the same position as Yudkowsky, would Sabbatai Zevi also declare that 30% of the ones with penises are actually women?
 
------
+### The Dolphin War (June 2021)
 
 In June 2021, MIRI Executive Director Nate Soares [wrote a Twitter thread arguing that](https://twitter.com/So8res/status/1401670792409014273) "[t]he definitional gymnastics required to believe that dolphins aren't fish are staggering", which [Yudkowsky retweeted](https://archive.is/Ecsca).[^not-endorsements]
@@ -214,7 +220,7 @@ But in a world where more people are reading "... Not Man for the Categories" th
 After I cooled down, I did eventually write up the explanation for why paraphyletic categories are fine, in ["Blood Is Thicker Than Water"](https://www.lesswrong.com/posts/vhp2sW6iBhNJwqcwP/blood-is-thicker-than-water). But I'm not sure that anyone cared.
 
--------
+### Pretender to the Caliphate
 
 I got a chance to talk to Yudkowsky in person at the 2021 Event Horizon[^event-horizon] Fourth of July party. In accordance with the privacy norms I'm adhering to while telling this Whole Dumb Story, I don't think I should elaborate on what was said. (It felt like a private conversation, even if most of it was outdoors at a party. No one joined in, and if anyone was listening, I didn't notice them.)
@@ -252,6 +258,8 @@ An analogy: racist jokes are also just jokes. Irene says, "What's the difference
 Similarly, the "Caliphate" humor only makes sense in the first place in the context of a celebrity culture where deferring to Yudkowsky and Alexander is expected behavior, in a way that deferring to [Julia Galef](https://en.wikipedia.org/wiki/Julia_Galef) or [John S. Wentworth](https://www.lesswrong.com/users/johnswentworth) is not expected behavior.
 
+### Replies to David Xu on Category Cruxes [working § title]
+
 I don't think the motte-and-bailey concern is hypothetical. When I [indignantly protested](https://twitter.com/zackmdavis/status/1435059595228053505) the "we're both always right" remark, one David Xu [commented](https://twitter.com/davidxu90/status/1435106339550740482): "speaking as someone who's read and enjoyed your LW content, I do hope this isn't a sign that you're going full post-rat"—as if my criticism of Yudkowsky's self-serving bluster itself marked me as siding with the "post-rats"!
 
 Concerning my philosophy-of-language grievance, [Xu wrote](https://twitter.com/davidxu90/status/1436007025545125896) (with Yudkowsky ["endors[ing] everything [Xu] just said"](https://twitter.com/ESYudkowsky/status/1436025983522381827)):
@@ -318,7 +326,7 @@ Here again, given the flexibility of natural language and the fact that the 2021
 But _realistically_—how dumb do you think we are? I would expect someone who's not desperately fixated on splitting whatever hairs are necessary to protect the Caliphate's reputation to notice the obvious generalization from "sane individuals shouldn't hide from facts to save themselves psychological pain, because you need the facts to compute plans that achieve outcomes" to "sane societies shouldn't hide from concepts to save their members psychological pain, because we need concepts to compute plans that achieve outcomes." If Xu and Yudkowsky claim not to see it even after I've called their bluff, how dumb should _I_ think _they_ are? Let me know in the comments.
 
------
+### Secrets of the "Vassarites" (October 2021)
 
 In October 2021, Jessica Taylor [published a post about her experiences at MIRI](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe), making analogies between sketchy social pressures she had experienced in the core rationalist community (around short AI timelines, secrecy, deference to community leaders, _&c._) and those reported in [Zoe Curzi's recent account of her time at Leverage Research](https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b).
@@ -374,7 +382,7 @@ I agreed that this was a real concern. (I had been so enamored with Yudkowsky's
 As an example of what I thought treading carefully but getting the goddamned right answer looked like, I was really proud of [my April 2020 review of Charles Murray's _Human Diversity_](/2020/Apr/book-review-human-diversity/). I definitely wasn't saying, Emil Kirkegaard-style, "the black/white IQ gap is genetic, anyone who denies this is a mind-killed idiot." Rather, _first_ I reviewed the Science in the book, and _then_ I talked about the politics surrounding Murray's reputation and the technical reasons for believing that the gap is real and partly genetic, and _then_ I went meta on the problem and explained why it makes sense that political forces make this hard to talk about. I thought this was how one goes about mapping the territory without being a moral monster with respect to one's pre-Dark Enlightenment morality. (And [Emil was satisfied, too](https://twitter.com/KirkegaardEmil/status/1425334398484983813).)
 
------
+### Recovering from the Personality Cult (September 2021–March 2022)
 
 At the end of the September 2021 Twitter altercation, I [said that I was upgrading my "mute" of @ESYudkowsky to a "block"](https://twitter.com/zackmdavis/status/1435468183268331525). Better to just leave, rather than continue to hang around in his mentions trying (consciously [or otherwise](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie)) to pick fights, like a crazy ex-girlfriend. (["I have no underlying issues to address; I'm certifiably cute, and adorably obsessed"](https://www.youtube.com/watch?v=UMHz6FiRzS8) ...)
@@ -434,7 +442,7 @@ Is that ... _not_ evidence of harm to the community? If that's not community-har
 ... or rather, "Reply, motherfucker", is what I fantasized about being able to say, if I hadn't already expressed an intention not to bother him anymore.
 
------
+### The Death With Dignity Era (April 2022)
 
 On 1 April 2022, Yudkowsky published ["MIRI Announces New 'Death With Dignity' Strategy"](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy), a cry of despair in the guise of an April Fool's Day post. MIRI didn't know how to align a superintelligence, no one else did either, but AI capabilities work was continuing apace. With no credible plan to avert almost-certain doom, the most we could do now was to strive to give the human race a more dignified death, as measured in log-odds of survival: an alignment effort that doubled the probability of a valuable future from 0.0001 to 0.0002 was worth one information-theoretic bit of dignity.
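
(The arithmetic checks out: for small probabilities, odds are approximately equal to probabilities, so doubling _p_ gains almost exactly one bit of log-odds. A minimal sketch in Python, illustrative only, assuming base-2 logits as the unit of "dignity":)

```python
import math

def log_odds_bits(p: float) -> float:
    """Log-odds of probability p, in bits (base-2 logits)."""
    return math.log2(p / (1 - p))

# Doubling the probability of a valuable future from 0.0001 to 0.0002
# gains almost exactly one bit of "dignity" in log-odds.
dignity_gained = log_odds_bits(0.0002) - log_odds_bits(0.0001)
print(round(dignity_gained, 4))  # ≈ 1.0001
```

(The excess over exactly 1 bit is log2(0.9999/0.9998), which vanishes as the probabilities shrink.)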