diff --git a/content/drafts/comments-on-the-conspiracies-of-dath-ilan.md b/content/drafts/comments-on-the-conspiracies-of-dath-ilan.md
index aac38ad..cb2a17b 100644
--- a/content/drafts/comments-on-the-conspiracies-of-dath-ilan.md
+++ b/content/drafts/comments-on-the-conspiracies-of-dath-ilan.md
@@ -4,22 +4,16 @@ Category: commentary
 Tags: Eliezer Yudkowsky, worldbuilding
 Status: draft
 
-If we believe that IQ research validates the "Jews are clever" stereotype, I wonder if there's a distinct (albeit probably correlated) "enjoying deception" trait that validates the "Jews are sneaky" stereotype?
-
-"Natural History of Ashkenazi Intelligence"
-https://web.mit.edu/fustflum/documents/papers/AshkenaziIQ.jbiosocsci.pdf
 
 (I was tempted to tag that as "epistemic status: low-confidence speculation", but that's _frequentist_ thinking—as if "Jews and gentiles are equally sneaky" were a "null hypothesis" that could only be rejected by data that would be sufficiently unlikely assuming the null was true. Ha ha, that would be _crazy!_ Obviously, I should have a _prior_ on the effect-size difference between the Jew and gentile sneakiness distributions, which can be updated as sneakiness data comes in. I think the mean of my prior distribution is at, like, _d_ ≈ 0.1? So it's not "low confidence"; it's "low confidence of the effect size being large enough to be of much practical significance".)
 
-Anyway, if dath ilan is very high in the sneakiness trait (relative to Earth), that would help explain all the conspiracies!
-
-Not-actually-plausible conspiracies that everyone is in on (like "Sparashki are real") are a superstimulus like zero-calorie sweetener: engineered to let everyone enjoy the thrill of lying, without doing any real damage to shared maps.
 
 For context on why I have no sense of humor about this: on Earth (which _actually exists_, unlike dath ilan), when someone says "it's not lying, because no one _expected_ me to tell the truth in that situation", what's usually going on (as Zvi Mowshowitz explains) is that conspirators benefit from deceiving outsiders, and the claim that "everyone knows" is them lying to _themselves_ about the fact that they're lying. (If _you_ got hurt by not knowing, well, it's not like anyone got hurt, because if you didn't know, then you weren't anyone.)
 
-It's very striking to me that one of the corrupt executives in _Moral Mazes_ uses very similar language to the narrator of the Merrin thread: "We lie all the time, but if everyone knows that we're lying, is a lie really a lie?"
+
+Okay, but if it were _actually true_ that everyone knew, what would be the _function_ of saying the false thing?
 
 On dath ilan (if not in Earth boardrooms), I suppose the answer is "Because it's fun"?
 
 Okay, but what is the function of your brain giving out a "fun" reward in this context? It seems like at _some_ point, there has to be the expectation of _some_ cognitive system (although possibly not an entire "person") taking the signals literally.
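(Editorial note on the Bayesian framing in the parenthetical above, to make it concrete: instead of asking whether the data would be "sufficiently unlikely under the null", one puts a normal prior on the effect size _d_ and updates it as observations arrive. A minimal sketch follows; the prior width and the study numbers are illustrative assumptions, not from the post or any real dataset.)

```python
# Minimal sketch of a Bayesian update on an effect size d, assuming a
# normal prior and a normally distributed estimate with known standard
# error. All numbers below are illustrative, not from any real study.

def update_normal_prior(prior_mean, prior_var, d_hat, se_squared):
    """Conjugate normal-normal update: combine a Normal(prior_mean, prior_var)
    prior with an estimate d_hat whose sampling variance is se_squared."""
    posterior_var = 1.0 / (1.0 / prior_var + 1.0 / se_squared)
    posterior_mean = posterior_var * (prior_mean / prior_var + d_hat / se_squared)
    return posterior_mean, posterior_var

# Prior centered at d = 0.1, wide enough to allow a large effect either way.
mean, var = 0.1, 0.3 ** 2
# One hypothetical study reporting d_hat = 0.05 with standard error 0.1:
mean, var = update_normal_prior(mean, var, d_hat=0.05, se_squared=0.1 ** 2)
print(f"posterior: d ≈ {mean:.3f} (sd {var ** 0.5:.3f})")
```

(Nothing gets "rejected" in this picture; the posterior just concentrates wherever the data pull it, which is what "low confidence of the effect size being large enough to matter" cashes out to.)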
@@ -27,6 +21,7 @@ That's why, when I _notice_ myself misrepresenting my actual beliefs or motivati
 
 But maybe dath ilan is sufficiently good at achieving common knowledge in large groups that they _can_ pull off a zero-calorie "everyone knows" conspiracy without damaging shared maps??
 
+
 I'm still skeptical, especially given that we see them narratizing it as "not lying" (in the same words that corrupt executives on Earth use!), rather than _explicitly_ laying out the evopsych logic of sneakiness superstimuli, and the case that they know how to pull it off in a zero-calorie (trivial damage to shared maps) way.
 
 In general, I think that "it's not lying because no one expected the truth" is something you would say as part of an attempted nearest-unblocked-strategy end run around a deontological constraint against "lying" (https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly); I don't think it's something you would say if you _actually cared_ about shared maps being accurate.