From 94a4ec2323ab0cebd5c7e3648ea46fe29d973713 Mon Sep 17 00:00:00 2001 From: "M. Taylor Saotome-Westlake" Date: Sun, 19 Feb 2023 19:29:55 -0800 Subject: [PATCH] check in --- ...xhibit-generally-rationalist-principles.md | 4 ++-- content/drafts/standing-under-the-same-sky.md | 14 +++++------ notes/memoir-sections.md | 23 +++++++++++-------- 3 files changed, 23 insertions(+), 18 deletions(-) diff --git a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md index 76677bc..afdd763 100644 --- a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md +++ b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md @@ -477,7 +477,7 @@ There are a number of things that could be said to this,[^number-of-things] but [^number-of-things]: Note the striking contrast between ["A Rational Argument"](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument), in which the Yudkowsky of 2007 wrote that a campaign manager "crossed the line [between rationality and rationalization] at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it"; and these 2021 Tweets, in which Yudkowsky seems completely nonchalant about "not have been as willing to tweet a truth helping" one side of a cultural dispute, because "this battle just isn't that close to the top of [his] priority list". Well, sure! Any hired campaign manager could say the same: helping the electorate make an optimally informed decision just isn't that close to the top of their priority list, compared to getting paid. - Yudkowsky's claim to have been focused on nudging people's cognition towards sanity seems incredibly dubious: if you're focused on sanity, you should be spontaneously noticing sanity errors on both political sides. 
(Moreover, if you're living in what you yourself describe as a "half-Stalinist environment", you should expect your social environment to proportionately _more_ errors on the "pro-Stalin" side.) As for the rationale that "those people might matter to AGI someday", judging by local demographics, it seems much more likely to apply to trans women themselves, than their critics! + Yudkowsky's claim to have been focused on nudging people's cognition towards sanity seems incredibly dubious: if you're focused on sanity, you should be spontaneously noticing sanity errors on both political sides. (Moreover, if you're living in what you yourself describe as a "half-Stalinist environment", you should expect your social environment to make proportionately _more_ errors on the "pro-Stalin" side.) As for the rationale that "those people might matter to AGI someday", judging by local demographics, it seems much more likely to apply to trans women themselves, than their critics! The battle that matters—and I've been _very_ explicit about this, for years—is over this proposition eloquently [stated by Scott Alexander in November 2014](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (redacting the irrelevant object-level example): @@ -503,7 +503,7 @@ If Eliezer Yudkowsky gets something wrong when I was trusting him to be right, _ Because, I did, actually, trust him. Back in 'aught-nine when _Less Wrong_ was new, we had a thread of hyperbolic ["Eliezer Yudkowsky Facts"](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) (in the style of [Chuck Norris facts](https://en.wikipedia.org/wiki/Chuck_Norris_facts)). And of course, it was a joke, but the hero-worship that made the joke funny was real. (You wouldn't make those jokes for your community college physics teacher, even if he was a good teacher.)
-["Never go in against Eliezer Yudkowsky when anything is on the line."](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts?commentId=Aq9eWJmK6Liivn8ND), said one of the facts—and back then, I didn't think I would _need_ to. +["Never go in against Eliezer Yudkowsky when anything is on the line"](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts?commentId=Aq9eWJmK6Liivn8ND), said one of the facts—and back then, I didn't think I would _need_ to. [Yudkowsky writes](https://twitter.com/ESYudkowsky/status/1096769579362115584): diff --git a/content/drafts/standing-under-the-same-sky.md b/content/drafts/standing-under-the-same-sky.md index e105949..8164c45 100644 --- a/content/drafts/standing-under-the-same-sky.md +++ b/content/drafts/standing-under-the-same-sky.md @@ -105,7 +105,7 @@ Okay, but then how do I compute this "subjunctive dependence" thing? Presumably I don't know—and if I don't know, I can't say that the relevant subjunctive dependence obviously pertains in the real-life science intellectual _vs._ social justice mob match-up. If the mob has been trained from past experience to predict that their targets will give in, should you defy them now in order to somehow make your current predicament "less real"? Depending on the correct theory of logical counterfactuals, the correct stance might be "We don't negotiate with terrorists, but [we do appease bears](/2019/Dec/political-science-epigrams/) and avoid avalanches" (because neither the bear's nor the avalanche's behavior is calculated based on our response), and the forces of political orthodoxy might be relevantly bear- or avalanche-like. -On the other hand, the relevant subjunctive dependence doesn't obviously _not_ pertain, either! Yudkowsky does seem to endorse commonsense pattern-matching to "extortion" in contexts like nuclear diplomacy. 
Or I remember back in 'aught-nine, Tyler Emerson was caught embezzling funds from the Singularity Institute, and SingInst made it a point of pride to prosecute on decision-theoretic grounds, when a lot of other nonprofits would have quietly and causal-decision-theoretically covered it up to spare themselves the embarrassment. Parsing social justice as an agentic "threat" rather than a non-agentic obstacle like an avalanche, does seem to line up with the fact that people punish heretics (who dissent from an ideological group) more than infidels (who were never part of the group to begin with), _because_ heretics are more extortable—more vulnerable to social punishment from the original group. +On the other hand, the relevant subjunctive dependence doesn't obviously _not_ pertain, either! Yudkowsky does seem to endorse commonsense pattern-matching to "extortion" in contexts [like nuclear diplomacy](https://twitter.com/ESYudkowsky/status/1580278376673120256). Or I remember back in 'aught-nine, Tyler Emerson was caught embezzling funds from the Singularity Institute, and SingInst made it a point of pride to prosecute on decision-theoretic grounds, when a lot of other nonprofits would have quietly and causal-decision-theoretically covered it up to spare themselves the embarrassment. Parsing social justice as an agentic "threat" rather than a non-agentic obstacle like an avalanche, does seem to line up with the fact that people punish heretics (who dissent from an ideological group) more than infidels (who were never part of the group to begin with), _because_ heretics are more extortable—more vulnerable to social punishment from the original group. Which brings me to the second reason the naïve anti-extortion argument might fail: [what counts as "extortion" depends on the relevant "property rights", what the "default" action is](https://www.lesswrong.com/posts/Qjaaux3XnLBwomuNK/countess-and-baron-attempt-to-define-blackmail-fail). 
If having free speech is the default, being excluded from the dominant coalition for defying the orthodoxy could be construed as extortion. But if _being excluded from the coalition_ is the default, maybe toeing the line of orthodoxy is the price you need to pay in order to be included. @@ -130,17 +130,17 @@ I'd had a Twitter exchange with Yudkowsky in January 2020 that revealed some of (The language of the latter being [a reference to Yudkowsky's _Inadequate Equilibria_](https://equilibriabook.com/molochs-toolbox/).) -Yudkowsky quote-Tweet dunked on me: +Yudkowsky [quote-Tweet dunked on me](https://twitter.com/ESYudkowsky/status/1216788984367419392): -> [TODO: well, YES] +> Well, YES. Paying taxes to the organization that runs ICE, or voting for whichever politician runs against Trump, or trading with a doctor benefiting from an occupational licensing regime; these acts would all be great evils if you weren't trapped. -I pointed out the voting case as one where he seemed to be disagreeing with his past self, linking to 2008's "Stop Voting for Nincompoops". What changed his mind? +I pointed out the voting case as one where he seemed to be disagreeing with his past self, linking to 2008's ["Stop Voting for Nincompoops"](https://www.lesswrong.com/posts/k5qPoHFgjyxtvYsm7/stop-voting-for-nincompoops). What changed his mind? "Improved model of the social climate where revolutions are much less startable or controllable by good actors," he said. "Having spent more time chewing on Nash equilibria, and realizing that the trap is _real_ and can't be defied away even if it's very unpleasant." -In response to Sarah Constantin mentioning that there was no personal cost to voting third-party, Yudkowsky pointed out that the problem was the third-party spoiler effect, not personal cost: "People who refused to vote for Hillary didn't pay the price, kids in cages did, but that still makes the action nonbest." 
+In response to Sarah Constantin mentioning that there was no personal cost to voting third-party, Yudkowsky [pointed out that](https://twitter.com/ESYudkowsky/status/1216809977144168448) the problem was the [third-party spoiler effect](https://en.wikipedia.org/wiki/Vote_splitting), not personal cost: "People who refused to vote for Hillary didn't pay the price, kids in cages did, but that still makes the action nonbest." -[TODO: look up the extent to which "kids in cages" were also a thing during the Obama and Biden administrations] +(The "cages" in question—technically, chain-link fence enclosures—were [actually](https://www.usatoday.com/story/news/factcheck/2020/08/26/fact-check-obama-administration-built-migrant-cages-meme-true/3413683001/) [built](https://apnews.com/article/election-2020-democratic-national-convention-ap-fact-check-immigration-politics-2663c84832a13cdd7a8233becfc7a5f3) during the Obama administration, but that doesn't seem important.) I asked what was wrong with the disjunction from "Stop Voting for Nincompoops", where the earlier Yudkowsky had written that it's hard to see who should accept the argument to vote for the lesser of two evils, but refuse to accept the argument against voting because it won't make a difference. Unilaterally voting for Clinton doesn't save the kids! @@ -148,7 +148,7 @@ I asked what was wrong with the disjunction from "Stop Voting for Nincompoops", "How do I compute whether I'm in a large enough decision-theoretic cohort?" I asked. Did we know that, or was that still on the open problems list? -Yudkowsky said that he traded his vote for a Clinton swing state vote, partially hoping that that would scale, "but maybe to a larger degree because [he] anticipated being asked in the future if [he'd] acted against Trump". 
+Yudkowsky said that he [traded his vote for a Clinton swing state vote](https://en.wikipedia.org/wiki/Vote_pairing_in_the_2016_United_States_presidential_election), partially hoping that that would scale, "but maybe to a larger degree because [he] anticipated being asked in the future if [he'd] acted against Trump". The reputational argument seems in line with Yudkowsky's [pathological obsession with not-technically-lying](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). People asking if you acted against Trump are looking for a signal of coalitional loyalty. By telling them he traded his vote, Yudkowsky can pass their test without lying. diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md index 7277b24..77b1896 100644 --- a/notes/memoir-sections.md +++ b/notes/memoir-sections.md @@ -1,5 +1,4 @@ marked TODO blocks— -- social justice and defying threats [pt. 6] - Boston [pt. 6] - Jessica's experience at MIRI and CfAR [pt. 6] _ last email and not bothering him [pt. 6] @@ -49,15 +48,8 @@ _ the story of my Feb./Apr. 2017 recent madness [pt. 
2] it was actually "wander onto the AGI mailing list wanting to build a really big semantic net" (https://www.lesswrong.com/posts/9HGR5qatMGoz4GhKj/above-average-ai-scientists) With internet available— -_ standard name/link for 3rd party spoiler effect -_ kids in cages -_ "Stop Voting for Nincompoops" -_ "well, YES" Nash quote-dunk and reply links, and archive.is -_ archive.is https://somenuanceplease.substack.com/p/actually-i-was-just-crazy-the-whole _ screenshot other Facebook comments -_ nuclear diplomacy Twitter link _ footnote "said that he wishes he'd never published" -_ Astronomical Waste _ hate-warp tag _ replace "Oh man oh jeez" Rick & Morty link _ Nevada bona fides @@ -90,6 +82,9 @@ _ Anna's claim that Scott was a target specifically because he was good, my coun _ Yudkowsky's LW moderation policy far editing tier— +_ Yudkowsky's "Is someone trolling?" comment +_ "typographical attack surface" isn't clear +_ voting reputation section is weak, needs revision _ edit "still published under a pseudonym" remark in "A Hill" _ incorporate "downvote Eliezer in their head" remark from Jessica's memoir _ explain the "if the world were at stake" Sword of Good reference better @@ -2137,7 +2132,6 @@ An analogy between my grievance against Yudkowsky and Duncan's grievance against https://equilibriabook.com/molochs-toolbox/ - > All of her fellow employees are vigorously maintaining to anybody outside the hospital itself, should the question arise, that Merrin has always cosplayed as a Sparashki while on duty, in fact nobody's ever seen her out of costume; sure it's a little odd, but lots of people are a little odd. > > (This is not considered a lie, in that it would be universally understood and expected that no one in this social circumstance would tell the truth.) @@ -2147,3 +2141,14 @@ I still had Sasha's sleep mask "Wilhelm" and Steven Kaas aren't Jewish, I think I agree that Earth is mired in random junk that caught on (like p-values), but
so are the rats + +I'm https://www.lesswrong.com/posts/XvN2QQpKTuEzgkZHY/?commentId=f8Gour23gShoSyg8g at gender and categorization + +picking cherries from a cherry tree + +http://benjaminrosshoffman.com/honesty-and-perjury/#Intent_to_inform + +https://astralcodexten.substack.com/p/trying-again-on-fideism +> I come back to this example less often, because it could get me in trouble, but when people do formal anonymous surveys of IQ scientists, they find that most of them believe different races have different IQs and that a substantial portion of the difference is genetic. I don’t think most New York Times readers would identify this as the scientific consensus. So either the surveys - which are pretty official and published in peer-reviewed journals - have managed to compellingly misrepresent expert consensus, or the impressions people get from the media have, or "expert consensus" is extremely variable and complicated and can’t be reflected by a single number or position. + +https://nickbostrom.com/astronomical/waste \ No newline at end of file -- 2.17.1