From 0079fb40a65633e7e2ad192e4f7667f915a5da33 Mon Sep 17 00:00:00 2001
From: "M. Taylor Saotome-Westlake"
Date: Sun, 25 Sep 2022 22:41:18 -0700
Subject: [PATCH] check in

---
 ...-hill-of-validity-in-defense-of-meaning.md |  4 +--
 notes/a-hill-email-review.md                  | 25 +++++++++++++++++++
 notes/a-hill-of-validity-sections.md          |  9 ++++---
 3 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index cf3e103..03cd742 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -519,7 +519,7 @@ Sarah asked if the math wasn't a bit overkill: were the calculations really nece
 
 "... Boundaries?" explains all this in the form of discourse with a hypothetical interlocutor arguing for the I-can-define-a-word-any-way-I-want position. In the hypothetical interlocutor's parts, I wove in verbatim quotes (without attribution) from Alexander ("an alternative categorization system is not an error, and borders are not objectively true or false") and Yudkowsky ("You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning", "Using language in a way _you_ dislike is not lying. The propositions you claim false [...] is not what the [...] is meant to convey, and this is known to everyone involved; it is not a secret"), and Bensinger ("doesn't unambiguously refer to the thing you're trying to point at").
 
-My thinking here was that the posse's previous email campaigns had been doomed to failure by being too closely linked to the politically-contentious object-level topic which reputable people had strong incentives not to touch with a ten-foot pole. So if I wrote this post _just_ explaining what was wrong with the claims Yudkowsky and Alexander had made about the philosophy of language, with perfectly innocent examples about dolphins and job titles, that would remove the political barrier and [leave a line of retreat](https://www.lesswrong.com/posts/3XgYbghWruBMrPTAL/leave-a-line-of-retreat) for Yudkowsky to correct the philosophy of language error. And then if someone with a threatening social-justicey aura were to say, "Wait, doesn't this contradict what you said about trans people earlier?", stonewall them. (Stonewall _them_ and not _me_!)
+My thinking here was that the posse's previous email campaigns had been doomed to failure by being too closely linked to the politically-contentious object-level topic which reputable people had strong incentives not to touch with a ten-foot pole. So if I wrote this post _just_ explaining what was wrong with the claims Yudkowsky and Alexander had made about the philosophy of language, with perfectly innocent examples about dolphins and job titles, that would remove the political barrier and [leave a line of retreat](https://www.lesswrong.com/posts/3XgYbghWruBMrPTAL/leave-a-line-of-retreat) for Yudkowsky to correct the philosophy of language error. Then if someone with a threatening social-justicey aura were to say, "Wait, doesn't this contradict what you said about trans people earlier?", stonewall them. (Stonewall _them_ and not _me_!)
 I could see a case that it was unfair of me to include subtext and then expect people to engage with the text, but if we weren't going to get into full-on gender politics on _Less Wrong_ (which seemed like a bad idea), and yet gender politics _was_ motivating an epistemology error, I wasn't sure what else I was supposed to do! I was pretty constrained here!
@@ -1089,7 +1089,7 @@ The Yudkowsky of 2007 starts by quoting a passage from George Orwell's _1984_, i
 
 The Yudkowsky of 2007 continues—it's again worth quoting at length—
 
-> What if self-deception helps us be happy? What if just running out and overcoming bias will make us—gasp!—_unhappy?_ Surely, _true_ wisdom would be _second-order_ rationality, choosing when to be rational. That way you can decide which cognitive biases should govern you, to maximize your happiness. 
+> What if self-deception helps us be happy? What if just running out and overcoming bias will make us—gasp!—_unhappy?_ Surely, _true_ wisdom would be _second-order_ rationality, choosing when to be rational. That way you can decide which cognitive biases should govern you, to maximize your happiness.
 >
 > Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.
 >
diff --git a/notes/a-hill-email-review.md b/notes/a-hill-email-review.md
index b4e9cee..cd5894e 100644
--- a/notes/a-hill-email-review.md
+++ b/notes/a-hill-email-review.md
@@ -1272,3 +1272,28 @@ I don't think 'people who'd get surgery to have the ideal female body' cuts anyt
 
 Elena— "the original thing that already exists without having to try" sounds fake to me
+
+
+I agree that the "SneerClub et al. hate us because they're evil bullies" hypothesis has a grain of truth to it, but stopping the analysis there seems ... _incredibly shallow and transparently self-serving_?
+
+------
+
+https://www.facebook.com/yudkowsky/posts/pfbid0WA95Anng7UZZEjqYv2aWC4LxZJU7KPvcRnxkTdmNJpH4PoQQgEFtqszPbCiCnqfil?comment_id=10159410429909228
+
+If you listen to why _they_ say they hate us, it's because we're racist, sexist, transphobic fascists. The party-line response seems to trend towards, "That's obviously false (Scott voted for Warren, look at all the social democrats on the LW/SSC surveys, &c.); they're just using that as a convenient smear because they like bullying nerds." (Fair paraphrase?)
+
+But ... the smears have a grain of truth to them, right? If "sexism" means "it's an empirical question whether innate statistical psychological sex differences of some magnitude exist, it empirically looks like they do, and this has implications about our social world", the "SSC et al. are crypto-sexists" charge is ABSOLUTELY CORRECT (e.g. https://slatestarcodex.com/.../contra-grant-on.../). (Crypto-racist, crypto-fascist, &c. are left as an exercise for the reader.)
+
+You could plead, "That's a bad definition of sexism", but that's only convincing if you've _already_ been trained in the "use empiricism and open discussion to discover policies with utilitarian-desirable outcomes" tradition; the people with a California-public-school-social-studies-plus-Tumblr education don't already _know_ that. (Source: I didn't know this at age 18 back in 'aught-six, and we didn't have Tumblr then.)
+
+In that light ... can you see why someone might find "blow the whistle on people who are claiming to be innocent but are actually guilty (of thinking bad thoughts)" to be a more compelling ethical consideration than "respect confidentiality requests"?
The "debate ideas, not people" thing is a specific meta-ideological innovation, not baseline human morality! + +If our _actual_ problem is "Genuinely consistent rationalism is realistically always going to be an enemy of the state, because the map that fully reflects the territory is going to include facts that powerful coalitions would prefer to censor, no matter what specific ideology happens to be on top in a particular place and time (https://www.lesswrong.com/.../heads-i-win-tails-never...)", but we _think_ our problem is "We need to figure out how to exclude evil bullies", then we're in trouble!! + + +> We also have an inevitable Kolmogorov Option issue but that should not be confused with the inevitable Evil Bullies issue, even if bullies attack through Kolmogorov Option issues. + +Being transparent about the game theory I see: intuitively, it seems like I have a selfish incentive to "support" the bullies (by publicly pointing out that they have a point, as above) insofar as I'm directly personally harmed by my social network following a Kolmogorov Option strategy rather than an open-dissidence Free Speech for Shared Maps strategy, and more bullying might cause the network to switch strategies on "may as well be hung for a sheep as a lamb" grounds? Maybe I should explain this so people have a chance to talk me out of it? +... hm, acutally, when I try to formalize this with the simplest possible toy model, it doesn't work (the "may as well be hung ..." effect doesn't happen given the modeling assumptions I just made up). I was going to say: our team chooses a self-censorship parameter c from 0 to 10, and faces a bullying level b from 0 to 10. b is actually b(c, p), a function of self-censorship and publicity p (also from 0 to 10). The team leaders' utility function is U(c, b) := -(c + b) (bullying and self-censorship are both bad). Suppose the bullying level is b := 10 - c + p (self-censorship decreases bullying, and publicity increases it). +My thought was: a disgruntled team-member might want to increase p in order to induce the leaders to choose a smaller value of c. But when I do the algebra, -(c + b) = -(c + (10 - c + p)) = -c - 10 + c - p = -10 - p. (Which doesn't depend on c, seemingly implying that more publicity is just bad for the leaders without changing their choice of c? But I should really be doing my dayjob now instead of figuring out if I made a mistake in this Facebook comment.) diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md index 554a8d7..22b9429 100644 --- a/notes/a-hill-of-validity-sections.md +++ b/notes/a-hill-of-validity-sections.md @@ -9,11 +9,11 @@ _ weirdly hostile comments on "... Boundaries?" 
diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index 554a8d7..22b9429 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -9,11 +9,11 @@ _ weirdly hostile comments on "... Boundaries?"
 far editing tier—
+_ conversation with Ben about physical injuries (this is important because it explains where the "cut my dick off" rhetoric came from)
 _ rephrase "gamete size" discussion to make it clearer that Yudkowsky's proposal also implicitly requires people to agree about the clustering thing
 _ smoother transition between "deliberately ambiguous" and "was playing dumb"; I'm not being paranoid for attributing political motives to him, because he told us that he's doing it
 _ I'm sure Eliezer Yudkowsky could think of some relevant differences
 _ clarify why Michael thought Scott was "gaslighting" me, include "beseech bowels of Christ"
-_ conversation with Ben about physical injuries (this is important because it explains where the "cut my dick off" rhetoric came from)
 _ address the "maybe it's good to be called names" point from "Hill" thread
 _ explain "court ruling" earlier
 _ 2019 Discord discourse with Alicorner
@@ -550,6 +550,7 @@ I feel I've outlived myself https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4166378
 
 Ben on Discursive Warfare and Faction Formation: https://docs.google.com/document/d/1dou43_aX_h1lP7-wqU_5jJq62PuhotQaybe5H2HUmWc/edit
 
 > What's not wrong on purpose is persuasive but does not become a factional identity. What becomes a factional identity is wrong on purpose.
+https://www.lesswrong.com/posts/PG8i7ZiqLxthACaBi/do-fandoms-need-awfulness
 
 > Applying this to LessWrong: Plenty of people read the Sequences, improved their self-models and epistemic standards, and went on to do interesting things not particularly identified with LessWrong. Also, people formed an identity around Eliezer, the Sequences, and MIRI, which means that the community clustered around LessWrong is—aside from a few very confused people who until recently still thought it was about applying the lessons of the Sequences—committed not to Eliezer's insights but to exaggerated versions of his blind spots.
@@ -720,7 +721,7 @@ https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-
 
 > If those people went around lying to others and paternalistically deceiving them—well, mostly, I don't think they'll have really been the types to live inside reality themselves. But even imagining the contrary, good luck suddenly unwinding all those deceptions and getting other people to live inside reality with you, to coordinate on whatever suddenly needs to be done when hope appears, after you drove them outside reality before that point. Why should they believe anything you say?
 
 the Extropians post _explicitly_ says "may be a common sexual fantasy"
-> So spending a week as a member of the opposite sex may be a common sexual fantasy, but I wouldn't count on being able to do this six seconds after the Singularity. I would not be surprised to find that it took three subjective centuries before anyone had grown far enough to attempt a gender switch. 
+> So spending a week as a member of the opposite sex may be a common sexual fantasy, but I wouldn't count on being able to do this six seconds after the Singularity. I would not be surprised to find that it took three subjective centuries before anyone had grown far enough to attempt a gender switch.
 
 ------
 
@@ -736,7 +737,7 @@ If you listen to the sorts of things the guy says lately, it looks like he's jus
 >
 > [...too many people think](https://twitter.com/ESYudkowsky/status/1509944888376188929) it's unvirtuous to shut up and listen to me, and they might fall for it.
 > I'd wish that I'd never spoken on the topic, and just told them to vote in elections for reasons they'd understand when they're older. That said, enjoy your $1 in Ultimatum games.
 
-Notwithstanding that there are reasons for him to be traumatized over how some people have misinterpreted timeless decision theory—what a _profoundly_ anti-intellectual statement! I calim that this is just not something you would ever say if you cared about having a rationality community that could process arguments and correct errors, rather than a robot cult to suck you off.
+Notwithstanding that there are reasons for him to be traumatized over how some people have misinterpreted timeless decision theory—what a _profoundly_ anti-intellectual statement! I claim that this is just not something you would ever say if you cared about having a rationality community that could process arguments and correct errors, rather than a robot cult to suck you off.
 
 To be clear, there _is_ such a thing as legitimately trusting an authority who knows better than you. For example, [the Sequences tell of how Yudkowsky once wrote to Judea Pearl](https://www.lesswrong.com/posts/tKa9Lebyebf6a7P2o/the-rhythm-of-disagreement) to correct an apparent error in _Causality: Models, Reasoning, and Inference_. Pearl agreed that there was an error, but said that Yudkowsky's proposed correction was also wrong, and provided the real correction. Yudkowsky didn't understand the real correction, but trusted that Pearl was right, because Pearl was the authority who had invented the subject matter—it didn't seem likely that he would get it wrong _again_ after the original error had been brought to his attention.
@@ -838,7 +839,7 @@ and Keltham tells Carissa (null action pg 39) to keep the Light alive as long as
 
 > It, it—the fe—it, flame—flames. Flames—on the side of my face. Breathing—breathl—heaving breaths, heaving—
 
-like a crazy ex-girlfriend (I have no underlying issues to address; I'm certifiably cute, and adorably obsessed)
+like a crazy ex-girlfriend (["I have no underlying issues to address / I'm certifiably cute, and adorably obsessed"](https://www.youtube.com/watch?v=UMHz6FiRzS8))
 
 But he is willing to go to bat for killing babies, not for "Biological Sex is Actually Real Even If That Hurts Your Feelings" https://mobile.twitter.com/AuronMacintyre/status/1547562974927134732
-- 
2.17.1