From e3cbb6d35838cfb47e2880031ddf1021b9529d23 Mon Sep 17 00:00:00 2001
From: "M. Taylor Saotome-Westlake"
Date: Fri, 14 Oct 2022 10:40:50 -0700
Subject: [PATCH] check in

---
 ...-hill-of-validity-in-defense-of-meaning.md | 22 +++++++++----------
 .../drafts/book-review-johnny-the-walrus.md   |  4 ++--
 ...unnerby-and-the-shallowness-of-progress.md |  2 +-
 notes/a-hill-of-validity-sections.md          |  9 ++++++--
 notes/post_ideas.txt                          |  2 +-
 5 files changed, 22 insertions(+), 17 deletions(-)

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 522c8d4..4b6a175 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -36,9 +36,7 @@ Of course, a pretty good job of explaining by one niche blogger wasn't going to

... and, really, that _should_ have been the end of the story. Not much of a story at all. If I hadn't been further provoked, I would have still kept up this blog, and I still would have ended up arguing about gender with people occasionally, but this personal obsession of mine wouldn't have been the occasion of a full-on robot-cult religious civil war involving other people who had much more important things to do with their time.

-The _causis belli_ for the religious civil war happened on 28 November 2018. I was at my new dayjob's company offsite event in Austin. Coincidentally, I had already spent much of the afternoon arguing trans issues with other "rationalists" on Discord.
-
-[TODO: review Discord logs; email to Dad suggests that offsite began on the 26th, contrasted to first shots on the 28th]
+The _casus belli_ for the religious civil war happened on 28 November 2018. I was at my new dayjob's company offsite event in Austin. Coincidentally, I had already spent much of the previous two days (since just before the plane to Austin took off) arguing trans issues with other "rationalists" on Discord.

Just that month, I had started a Twitter account in my own name, inspired in an odd way by the suffocating [wokeness of the open-source software scene](/2018/Oct/sticker-prices/) where I [occasionally contributed diagnostics patches to the compiler](https://github.com/rust-lang/rust/commits?author=zackmdavis). My secret plan/fantasy was to get more famous and established in that world (compiler team membership, or a conference talk accepted, preferably both), get some corresponding Twitter followers, and _then_ bust out the [@BlanchardPhD](https://twitter.com/BlanchardPhD) retweets and links to this blog.

In the median case, absolutely nothing would happen (probably because I failed at being famous), but I saw an interesting tail of scenarios in which I'd get to be a test case in [the Code of Conduct wars](https://techcrunch.com/2016/03/05/how-we-may-mesh/).
@@ -579,9 +577,9 @@ _I_ had been hyperfocused on prosecuting my Category War, but the reason Michael

Ben had [previously](http://benjaminrosshoffman.com/givewell-and-partial-funding/) [written](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/) a lot [about](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/) [problems](http://benjaminrosshoffman.com/against-responsibility/) [with](http://benjaminrosshoffman.com/against-neglectedness/) Effective Altruism.
Jessica had had a bad time at MIRI, as she had told me back in March, and would [later](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam) [write](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe) [about](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards). To what extent were my thing, and Ben's thing, and Jessica's thing, manifestations of "the same" underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit?

-I believed that there _was_ a real problem, but didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But _culture_ is the sum of lots and lots of little micro-actions by lots and lots of people. If your _entire culture_ has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or that they don't think anything has changed, or that they notice some changes but think the new way is better. It's not as simple as shouting, "Hey guys, Truth matters!"—any ideologue or religious person would agree with _that_.
+I believed that there _was_ a real problem, but didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But _culture_ is the sum of lots and lots of little micro-actions by lots and lots of people. If your _entire culture_ has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or like they don't think anything has changed, or like they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!"—any ideologue or religious person would agree with _that_.

-Ben called it the Blight, after the rogue superintelligence in _A Fire Upon the Deep_: the problem wasn't that people were getting dumber; it's that there was locally coherent coordination away from clarity and truth and towards coalition-building, which was validated by the official narrative in ways that gave it a huge tactical advantage; people were increasingly making decisions that were better explained by their political incentives rather than acting on coherent beliefs about the world.
+Ben called it the Blight, after the rogue superintelligence in _A Fire Upon the Deep_: the problem wasn't that people were getting dumber; it was that there was locally coherent coordination away from clarity and truth and towards coalition-building, which was validated by the official narrative in ways that gave it a huge tactical advantage. People were increasingly making decisions that were better explained by their political incentives than by coherent beliefs about the world, using and construing claims about facts as moves in a power game, albeit sometimes subject to genre constraints under which only true facts could be used.

When I asked him for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of [MIRI executive director Nate Soares posting that he was "excited to see OpenAI joining the space"](https://intelligence.org/2015/12/11/openai-and-other-news/), despite the fact that [_no one_ who had been following the AI risk discourse](https://slatestarcodex.com/2015/12/17/should-ai-be-open/) [thought that OpenAI as originally announced was a good idea](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/).

Nate had privately clarified to Ben that the word "excited" wasn't necessarily meant positively, and in this case meant something more like "terrified."
@@ -619,7 +617,7 @@ These two datapoints led me to a psychological hypothesis (which was maybe "obvi

----

-I started drafting a "why I've been upset for five months and have lost faith in the so-called 'rationalist' community" personal-narrative Diary-like post. Ben said that the target audience to aim for was people like I was a few years ago, who hadn't yet had the experiences I had—so they wouldn't have to freak out to the point of being imprisoned and demand help from community leaders and not get it; they could just learn from me. That is, the actual sympathetic-but-naïve people could learn. Not the people messing with me. 
+I started drafting a "why I've been upset for five months and have lost faith in the so-called 'rationalist' community" personal-narrative Diary-like post. Ben said that the target audience to aim for was people like I was a few years ago, who hadn't yet had the experiences I had—so they wouldn't have to freak out to the point of being imprisoned and demand help from community leaders and not get it; they could just learn from me. That is, the actual sympathetic-but-naïve people could learn. Not the people messing with me.

I didn't know how to finish it. I was too psychologically constrained; I didn't know how to tell the Whole Dumb Story without (as I perceived it) escalating personal conflicts or leaking info from private conversations.
@@ -880,7 +878,7 @@ If we're talking about overt _gender role enforcement attempts_—things like, "

(There are [historical reasons for the buckets to exist](/2020/Jan/book-review-the-origins-of-unfairness/), but I'm eager to bet on modern Society being rich enough and smart enough to either forgo the buckets, or at least let people opt out of the default buckets, without causing too much trouble.)
-But importantly, my support for people not wanting to be tossed into gender role buckets is predicated on their reasons for not wanting that _having genuine merit_—things like "The fact that I'm a juvenile female human doesn't mean I'll have a husband; I'm actually planning to become a nun", or "The sex difference in Big Five Neuroticism is only _d_ ≈ 0.5; your expectation that I be able to toughen up is not reasonable given the information you have about me in particular, even if most adult human males are tougher than me". I _don't_ think people have a _general_ right to prevent others from using sex categories to make inferences or decisions about them, _because that would be crazy_. If a doctor were to recommend I get a prostate cancer screening on account of my being male and therefore at risk for prostate cancer, it would be _bonkers_ for me to reply that I don't like being tossed into a Male Bucket like that. +But importantly, my support for people not wanting to be tossed into gender role buckets is predicated on their reasons for not wanting that _having genuine merit_—things like "The fact that I'm a juvenile female human doesn't mean I'll have a husband; I'm actually planning to become a nun", or "The sex difference in Big Five Neuroticism is only _d_ ≈ 0.4; your expectation that I be able to toughen up is not reasonable given the information you have about me in particular, even if most adult human males are tougher than me". I _don't_ think people have a _general_ right to prevent others from using sex categories to make inferences or decisions about them, _because that would be crazy_. If a doctor were to recommend I get a prostate cancer screening on account of my being male and therefore at risk for prostate cancer, it would be _bonkers_ for me to reply that I don't like being tossed into a Male Bucket like that. While piously appealing to the feelings of people describing reasons they do not want to be tossed into a Male Bucket or a Female Bucket, Yudkowsky does not seem to be distinguishing between reasons that have merit, and reasons that do not have merit. The post continues (bolding mine): @@ -898,7 +896,9 @@ But ... "plan to get hair surgery"? "Would get hair surgery if it were safer but Is the idea that we want to use the same pronouns for the same person over time, so that if we know someone is going to get hair surgery—they have an appointment with the hair surgeon at this-and-such date—we can go ahead and switch their pronouns in advance? Okay, I can buy that. -But extending that to the "would get hair surgery if it were safer" case is _absurd_. No one treats _conditional plans assuming speculative future advances in medical technology_ the same as actual plans. I don't think this case calls for any complicated nuanced position, and I don't see why Eliezer Yudkowsky would suggest that it would, unless the real motive for insisting on complication and nuance is as an obfuscation tactic—unless, at some level, Eliezer Yudkowsky doesn't expect his followers to deal with facts? +But extending that to the "would get hair surgery if it were safer" case is _absurd_. No one treats _conditional plans assuming speculative future advances in medical technology_ the same as actual plans. 
I don't think this case calls for any complicated nuanced position, and I don't see why Eliezer Yudkowsky would suggest that it would, unless the real motive for insisting on complication and nuance is as an obfuscation tactic—
+
+Unless, at some level, Eliezer Yudkowsky doesn't expect his followers to deal with facts?

Maybe the problem is easier to see in the context of a non-gender example. [My previous hopeless ideological war—before this one—was against the conflation of _schooling_ and _education_](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/): I _hated_ being tossed into the Student Bucket, as it would be assigned by my school course transcript, or perhaps at all.
@@ -1153,9 +1153,9 @@ But I don't, think that everybody knows. And I'm not, giving up that easily. Not

Yudkowsky [defends his behavior](https://twitter.com/ESYudkowsky/status/1356812143849394176):

-> I think that some people model civilization as being in the middle of a great battle in which this tweet, even if true, is giving comfort to the Wrong Side, where I would not have been as willing to tweet a truth helping the Right Side. From my perspective, this battle...
->
-> ...just isn't that close to the top of my priority list. I rated nudging the cognition of the people-I-usually-respect, closer to sanity, as more important; who knows, those people might matter for AGI someday. And the Wrong Side part isn't as clear to me either.
+> I think that some people model civilization as being in the middle of a great battle in which this tweet, even if true, is giving comfort to the Wrong Side, where I would not have been as willing to tweet a truth helping the Right Side. From my perspective, this battle just isn't that close to the top of my priority list. I rated nudging the cognition of the people-I-usually-respect, closer to sanity, as more important; who knows, those people might matter for AGI someday. And the Wrong Side part isn't as clear to me either.
+
+[TODO: first of all, "A Rational Argument" is very explicit about "not have been as willing to Tweet a truth helping the side" meaning you've crossed the line; second of all, it's if anything more plausible that trans women will matter to AGI, as I pointed out in my email]

But the battle that matters—the battle with a Right Side and a Wrong Side—isn't "pro-trans" _vs._ "anti-trans". (The central tendency of the contemporary trans rights movement is firmly on the Wrong Side, but that's not the same thing as all trans people as individuals.) That's why Jessica joined our posse to try to argue with Yudkowsky in early 2019. (She wouldn't have, if my objection had been, "trans is fake; trans people Bad".) That's why Somni—one of the trans women who [infamously protested the 2019 CfAR reunion](https://www.ksro.com/2019/11/18/new-details-in-arrests-of-masked-camp-meeker-protesters/) for (among other things) CfAR allegedly discriminating against trans women—[understands what I've been saying](https://somnilogical.tumblr.com/post/189782657699/legally-blind).
diff --git a/content/drafts/book-review-johnny-the-walrus.md b/content/drafts/book-review-johnny-the-walrus.md
index 9f7e665..bc0f600 100644
--- a/content/drafts/book-review-johnny-the-walrus.md
+++ b/content/drafts/book-review-johnny-the-walrus.md
@@ -4,6 +4,6 @@ Category: commentary
Tags: natalism, review (book)
Status: draft

-This is a terrible children's book that could have been great if the author could have just _pretended to be subtle_. Our protagonist, Johnny, is a kid who loves to play make-believe. 
One day, he pretends to be a walrus, fashioning "tusks" for himself with wooden spoons, and "flippers" from socks. Unfortunately, Johnny's mother takes him literally: she has him put on gray makeup, gives him worms to eat, and takes him to the zoo to be with the "other" walruses.
+This is a terrible children's book that could have been great if the author could have just [_pretended to be subtle_](/tag/deniably-allegorical/). Our protagonist, Johnny, is a kid who loves to play make-believe. One day, he pretends to be a walrus, fashioning "tusks" for himself with wooden spoons, and "flippers" from socks. Unfortunately, Johnny's mother takes him literally: she has him put on gray makeup, gives him worms to eat, and takes him to the zoo to be with the "other" walruses.

Uh-oh! Will Johnny have to live as a "walrus" forever?

-With competent execution, this could be a great children's book! The premise is not realistic—no sane parent would conclude their child is _literally_ a walrus _because he said so_—but it's a kind of non-realism common in children's literature, attributing simple, caricatured motivations to characters in order to tell a silly, memorable story.
+With competent execution, this could be a great children's book! The premise is not realistic—no sane parent would conclude their child is _literally_ a walrus _because he said so_—but it's a kind of non-realism common in children's literature, attributing simple, caricatured motivations to characters in order to tell a silly, memorable story. If there happens to be an obvious analogy between the silly, memorable story and an ideological fad affecting otherwise-sane parents in the current year ...
diff --git a/content/drafts/hrunkner-unnerby-and-the-shallowness-of-progress.md b/content/drafts/hrunkner-unnerby-and-the-shallowness-of-progress.md
index dcd973d..32dfa4f 100644
--- a/content/drafts/hrunkner-unnerby-and-the-shallowness-of-progress.md
+++ b/content/drafts/hrunkner-unnerby-and-the-shallowness-of-progress.md
@@ -4,7 +4,7 @@ Category: commentary
Tags: literary criticism, review (book), deniably allegorical
Status: draft

-(**SPOILERS** for _A Deepness in the Sky_)
+(**SPOILERS** for _A Deepness in the Sky_ by Vernor Vinge)

Apropos of absolutely nothing—and would I lie to you about that?!—I've been thinking a lot lately about Hrunkner Unnerby, one of the characters in the ["B"](https://tvtropes.org/pmwiki/pmwiki.php/Main/PlotThreads) [story](https://tvtropes.org/pmwiki/pmwiki.php/Main/TwoLinesNoWaiting) of Vernor Vinge's _A Deepness in the Sky_.
diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index e771d54..4b10acd 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -1,10 +1,14 @@
-with internet available—
+With internet available—
+_ debate with Benquo and Jessicata
+_ PrudentBot § from "Robust Cooperation" paper
+_ compile Categories references from the Dolphin War Twitter thread
+_ tussle with Ruby on "Causal vs. Social Reality"
_ when did I ask Leon about getting easier tasks?
_ examples of snarky comments about "the rationalists"
_ 13th century word meanings
-_ compile Categories references from the Dolphin War Twitter thread
_ weirdly hostile comments on "... Boundaries?"
_ more examples of Yudkowsky's arrogance
+_ dath ilan conspiracy references

far editing tier—
@@ -1400,3 +1404,4 @@ https://discord.com/channels/401181628015050773/458329253595840522/5167446460349
26 November 14:38 p.m.
> I'm not sure what "it's okay to not pursue any medical transition options while still not identifying with your asab" is supposed to mean if it doesn't cash out to "it's okay to enforce social norms preventing other people from admitting out loud that they have correctly noticed your biological sex" +In contrast to Yudkowsky's claim that you need to have invented something from scratch to make any real progress, this is a case where the people who _did_ invent something can't apply it anymore!! diff --git a/notes/post_ideas.txt b/notes/post_ideas.txt index 06239db..23f5dc5 100644 --- a/notes/post_ideas.txt +++ b/notes/post_ideas.txt @@ -4,6 +4,7 @@ _ Interlude XXII Urgent/needed for healing— _ Reply to Scott Alexander on Autogenderphilia _ Pseudonym Misgivings +_ Hrunkner Unnerby and the Shallowness of Progress _ Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer _ A Hill of Validity in Defense of Meaning @@ -17,7 +18,6 @@ _ Happy Meal _ Elision _vs_. Choice _ Book Review: Johnny the Walrus _ Comments on the Conspiracies of dath ilan -_ Hrunkner Unnerby and the Shallowness of Progress _ Beckett Mariner Is Trans https://www.reddit.com/r/DaystromInstitute/comments/in3g92/was_mariner_a_teenager_on_the_enterprised/ _ Link: "On Transitions, Freedom of Form, [...]" -- 2.17.1