From f7a7301e525fddfe09472d879ab4a8a00e7a2167 Mon Sep 17 00:00:00 2001
From: "M. Taylor Saotome-Westlake"
Date: Sat, 7 Dec 2019 08:52:04 -0800
Subject: [PATCH] "I Tell Myself" 7 December drafting session 1: blegg networks

---
 ...r-proton-things-tend-to-come-in-varieties.md | 4 ++--
 notes/i-tell-myself-notes.txt | 6 ++++--
 notes/i-tell-myself-sections.md | 17 +++++++++--------
 3 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/content/drafts/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties.md b/content/drafts/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties.md
index 9bb9df8..57e61a0 100644
--- a/content/drafts/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties.md
+++ b/content/drafts/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties.md
@@ -22,7 +22,7 @@ There's a ["zero–one–infinity"](https://en.wikipedia.org/wiki/Zero_one_infin

The one comes to us and says, "Everything more complicated than protons tends to come in varieties. Tentacular brachitis involves more than one proton and will probably have varieties."

-This, in itself, doesn't tell us anything useful about what those varieties might be ... but suppose we do some more research and indeed find that patients' tentacles have a distinct [cluster structure](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace). Not only is there [covariance](https://en.wikipedia.org/wiki/Covariance) between different tentacle features (perhaps tentacles that are a darker shade of blue also tend to be slimier), but the joint color–sliminess distribution is starkly bimodal: modeling the tentacles as coming from two distinct "dark-blue/slimy" and "light-blue/less-slimy" taxa is a better statistical fit than positing a linear darkness/sliminesss continuum. So, congratulating ourselves on a scientific job-well-done, we speciate our diagnosis into two: "Tentacular brachitis A" and "Tentacular brachitis B".
+This, in itself, doesn't tell us anything useful about what those varieties might be ... but suppose we do some more research and indeed find that patients' tentacles have a distinct [cluster structure](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace). Not only is there [covariance](https://en.wikipedia.org/wiki/Covariance) between different tentacle features (perhaps tentacles that are a darker shade of blue also tend to be slimier), but the color–sliminess joint distribution is starkly bimodal: modeling the tentacles as coming from two distinct "dark-blue/slimy" and "light-blue/less-slimy" taxa is a better statistical fit than positing a linear darkness/sliminess continuum. So, congratulating ourselves on a scientific job well done, we speciate our diagnosis into two: "Tentacular brachitis A" and "Tentacular brachitis B".

The one comes back to us and says, "Everything more complicated than protons tends to come in varieties. Tentacular brachitis A involves more than one proton and will probably have varieties."

@@ -30,7 +30,7 @@ You see the problem. We have an infinite regress: the argument that the original

So isn't "Gender dysphoria involves more than one proton[; therefore, it] will probably have varieties" a [fake explanation](https://www.lesswrong.com/posts/fysgqk4CjAwhBgNYT/fake-explanations)?
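(To make the "two distinct taxa fit better than one continuum" step above concrete: a minimal sketch of the model comparison, with made-up tentacle measurements and parameters, assuming NumPy and scikit-learn are available.)

```python
# Sketch: is a two-cluster model a better fit than a one-cluster model
# for (hypothetical) tentacle darkness/sliminess measurements?
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
cov = [[0.01, 0.004], [0.004, 0.01]]  # darker tentacles tend to be slimier
dark_slimy = rng.multivariate_normal([0.8, 0.7], cov, size=500)
light_less_slimy = rng.multivariate_normal([0.3, 0.2], cov, size=500)
X = np.vstack([dark_slimy, light_less_slimy])  # starkly bimodal joint distribution

# Compare fits with the Bayesian information criterion (lower is better):
# BIC rewards likelihood but penalizes the extra parameters of extra taxa.
for k in (1, 2):
    bic = GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
    print(f"{k}-taxon model BIC: {bic:.1f}")
```

If the joint distribution really is bimodal, the two-component model's higher likelihood swamps the complexity penalty; for tentacles drawn from a single unimodal continuum it wouldn't, and we'd have no business speciating our diagnosis.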
The phrase "gender dysphoria" was worth inventing as a [shorter code](http://yudkowsky.net/rational/technical/) for the not-vanishingly-rare observation of "humans wanting to change sex", but unless and until you have specific observations indicating that there are meaningfully different ways dysphoria can manifest, you shouldn't posit that there are "probably" multiple varieties, because in a ["nearby" Everett branch](https://www.lesswrong.com/posts/9cgBF6BQ2TRB3Hy4E/and-the-winner-is-many-worlds) where human evolution happened slightly differently, there probably _aren't_: brain-intersex conditions have a kind of _a priori_ plausibility to them, but whatever weird quirk leads to autogynephilia probably wouldn't happen with every roll of the evolutionary dice if you rewound far enough, and the memeplex driving Littman's ROGD observations was invented recently. -So I think a better moral than "Things larger than protons will probably have varieties" would be "Beware [fallacies of compression](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression)." The advice to be alert to the _possibility_ that your initial category should be split into multiple subspecies is correct and important and well-taken, but the _reason_ [... TODO bridge] not _because things are made of atoms_. +So I think a better moral than "Things larger than protons will probably have varieties" would be "Beware [fallacies of compression](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression)." The advice to be alert to the _possibility_ that your initial category should be split into multiple subspecies is correct and important and well-taken, but the _reason_ [... TODO bridge] not _because things are made of atoms_! At this point, some readers might be thinking, "Wait a minute, M. Taylor! Didn't you notice that part about 'There's an allegation that people are reluctant to speciate more than one kind of gender dysphoria'? That's _your_ hobbyhorse! Even if Yudkowsky doesn't know you exist, by publicly offering a _general_ argument that there are multiple types of dyphoria, he's effectively _doing your cause a favor_—and here you are _criticizing_ him for it! Isn't that disloyal and ungrateful of you?" diff --git a/notes/i-tell-myself-notes.txt b/notes/i-tell-myself-notes.txt index d53ddf6..ad7aae9 100644 --- a/notes/i-tell-myself-notes.txt +++ b/notes/i-tell-myself-notes.txt @@ -142,8 +142,6 @@ From my perspective, Professor, I'm just doing what you taught me (carve reality I _have_ to keep escalating, because I don't have a choice -I'm trying to keep the rheotrical emphasis on "tale of personal heartbreak, plus careful analysis of the sociopolitical causes of said heartbreak" rather than "attacking my friends and heros" - It shouldn't be a political demand; people should actually process my arguments because they're good arguments "Actually, we're Ashkenazi supremacists" @@ -543,3 +541,7 @@ This situation is _fucked_. I don't care whose "fault" it is. I don't want to "b Katie Herzog and Jesse Singal Heinlein + +https://twitter.com/ESYudkowsky/status/1096769579362115584 + +> When an epistemic hero seems to believe something crazy, you are often better off questioning "seems to believe" before questioning "crazy", and both should be questioned before shaking your head sadly about the mortal frailty of your heroes. 
diff --git a/notes/i-tell-myself-sections.md b/notes/i-tell-myself-sections.md
index 5a0ec21..7d99dcc 100644
--- a/notes/i-tell-myself-sections.md
+++ b/notes/i-tell-myself-sections.md
@@ -44,7 +44,7 @@ I can be polite in most circumstances, as the price of keeping the peace in Soci

-----

-If we _actually had_ the magical sex change technology described in "Changing Emotions", no one would even be _tempted_ to invent these category-gerrymandering mind games.
+If we _actually had_ the magical sex change technology described in "Changing Emotions", no one would even be _tempted_ to invent these category-gerrymandering mind games! People who wanted to change sex would just _do it_, and everyone would use corresponding language because it straightforwardly _described reality_—not as a political favor, or because of some exceedingly clever philosophy argument.

If it cost $200K, I would just take out a bank loan and _do it_.

@@ -97,7 +97,7 @@ The Popular Author once wrote about how [motivated selective attention paid to w

Some readers who aren't part of my robot cult—and some who are—might be puzzled at why I was so emotionally disturbed by people being wrong about philosophy. And for almost anyone else in the world, I would just shrug and [set the bozo bit](https://en.wikipedia.org/wiki/Bozo_bit#Dismissing_a_person_as_not_worth_listening_to).

-Even people who aren't religious still have the same [species-typical psychological mechanisms](https://www.lesswrong.com/posts/Cyj6wQLW6SeF6aGLy/the-psychological-unity-of-humankind) that make religions work. The systematically-correct-reasoning community had come to fill a [similar niche in my psychology as a religious community](https://www.lesswrong.com/posts/p5DmraxDmhvMoZx8J/church-vs-taskforce). I knew this, but the _hope_ was that this wouldn't come with the pathologies of a religion, because our pseudo-religion was _about_ the rules of systematically correct reasoning. The system is _supposed_ to be self-correcting: if people are _obviously_, demonstratably wrong, all you have to do is show them the argument that they're wrong, and then they'll understand the obvious argument and change their minds.
+Even people who aren't religious still have the same [species-typical psychological mechanisms](https://www.lesswrong.com/posts/Cyj6wQLW6SeF6aGLy/the-psychological-unity-of-humankind) that make religions work. The systematically-correct-reasoning community had come to fill a [similar niche in my psychology as a religious community](https://www.lesswrong.com/posts/p5DmraxDmhvMoZx8J/church-vs-taskforce). I knew this, but the _hope_ was that this wouldn't come with the pathologies of a religion, because our pseudo-religion was _about_ the rules of systematically correct reasoning. The system is _supposed_ to be self-correcting: if people are obviously, _demonstrably_ wrong, all you have to do is show them the argument that they're wrong, and then they'll understand the obvious argument and change their minds.

So to get a sense of the emotional impact here, imagine a devout Catholic hearing their local priest deliver a sermon that says "Sin is good"—or one that will predictably be interpreted as saying that.

[...]

@@ -147,11 +147,11 @@ The game theorist Thomas Schelling once wrote about the use of clever excuses to

[^schelling]: _Strategy of Conflict_, Ch. 2, "An Essay on Bargaining"

-This is sort of what I was trying to do when soliciting—begging for—engagement-or-endorsement of "Where to Draw the Boundaries?" I thought that it ought to be politically feasible to _just_ get public consensus from Very Important People on the _general_ philosophy-of-language issue, stripped of the politicized context that inspired my interest in it, and complete with math and examples about dolphins and job titles. That _should_ be completely safe. If some would-be troublemaker says, "Hey, doesn't this contradict what you said about trans people earlier?", stonewall them. Stonewall _them_, and not _me_. Thus, the public record about philosophy is corrected without the VIPs having to suffer a social-justice scandal. Everyone wins, right?
+This is sort of what I was trying to do when soliciting—begging for—engagement-or-endorsement of "Where to Draw the Boundaries?" I thought that it ought to be politically feasible to _just_ get public consensus from Very Important People on the _general_ philosophy-of-language issue, stripped of the politicized context that inspired my interest in it, and complete with math and examples about dolphins and job titles. That _should_ be completely safe. If some would-be troublemaker says, "Hey, doesn't this contradict what you said about trans people earlier?", stonewall them. (Stonewall _them_ and not _me_!) Thus, the public record about philosophy is corrected without the VIPs having to suffer a social-justice scandal. Everyone wins, right?

But I guess that's not how politics works. Somehow, the mob-punishment mechanisms that aren't smart enough to understand the concept of "bad argument for a true conclusion", _are_ smart enough to connect the dots between my broader agenda and my (correct) abstract philosophy argument, such that VIPs don't think they can endorse my _correct_ philosophy argument, without it being _construed as_ an endorsement of me and my detailed heresies, even though (a) that's _retarded_ (it's possible to agree with someone about a particular philosophy argument, while disagreeing with them about how the philosophy argument applies to a particular object-level case), and (b) I would have _hoped_ that explaining the abstract philosophy problem in the context of dolphins would provide enough plausible deniability to defend against _retarded people_ who want to make everything about politics.

-The situation I'm describing is already pretty fucked, but it would be just barely tolerable if the actually-smart people were good enough at coordinating to _privately_ settle philosophy arguments. If someone says to me, "You're right, but I can't admit this in public because it would be too politically-expensive for me," I can't say I'm not _disappointed_, but I can respect that.
+The situation I'm describing is already pretty fucked, but it would be just barely tolerable if the actually-smart people were good enough at coordinating to _privately_ settle philosophy arguments. If someone says to me, "You're right, but I can't admit this in public because it would be too politically-expensive for me," I can't say I'm not _disappointed_, but I can respect that they labor under constraints.

[people can't trust me to stably keep secrets]

@@ -179,14 +179,15 @@ The Popular Author

[lightning post assumes invincibility]

-----

+The Popular Author definitely isn't trying to be a cult leader. He just

-The "national borders" metaphor is particularly galling if—[unlike the popular author](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/)—you _actually know the math_.
+----

-If I have a "blegg" concept for blue egg-shaped objects—uh, this is [our](https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-queries) [standard](https://www.lesswrong.com/posts/yFDKvfN6D87Tf5J9f/neural-categories) [example](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside), just [roll with it](http://unremediatedgender.space/2018/Feb/blegg-mode/)—what that _means_ is that (at some appropriate level of abstraction) there's a little [Bayesian network](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models) in my head with "blueness" and "eggness" observation nodes hooked up to a central "blegg" category-membership node, such that if I see a black-and-white photograph of an egg-shaped object,

+The "national borders" metaphor is particularly galling if—[unlike](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) [the](https://slatestarcodex.com/2013/06/30/the-lottery-of-fascinations/) Popular Author—you _actually know the math_.

-"Category boundaries" were just a _visual metaphor_ for the math: the set of things I'll classify as a blegg with probability greater than _p_ is conveniently _visualized_ as an area with a boundary in blueness–eggness space.

+If I have a "blegg" concept for blue egg-shaped objects—uh, this is [our](https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-queries) [standard](https://www.lesswrong.com/posts/yFDKvfN6D87Tf5J9f/neural-categories) [example](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside), just [roll with it](http://unremediatedgender.space/2018/Feb/blegg-mode/)—what that _means_ is that (at some appropriate level of abstraction) there's a little [Bayesian network](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models) in my head with "blueness" and "eggness" observation nodes hooked up to a central "blegg" category-membership node, such that if I see a black-and-white photograph of an egg-shaped object, I can use the observation of its shape to update my beliefs about its blegg-category-membership, and then use my beliefs about category-membership to update my beliefs about its blueness. This cognitive algorithm is useful if we live in a world of objects that have the appropriate structure—if the joint distribution P(blegg, blueness, eggness) approximately factorizes as P(blegg)·P(blueness|blegg)·P(eggness|blegg).

+"Category boundaries" are just a _visual metaphor_ for the math: the set of things I'll classify as a blegg with probability greater than _p_ is conveniently _visualized_ as an area with a boundary in blueness–eggness space.

[wireheading and war are the only two reasons to]
--
2.17.1
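(To spell out the math in the blegg paragraph above: a minimal sketch of the network with binary nodes and made-up conditional probabilities; the numbers are illustrative, not from any of the linked posts.)

```python
# The little "blegg" network: a central category node with conditionally
# independent "blueness" and "eggness" observation nodes (made-up numbers).
P_BLEGG = 0.5                            # prior P(blegg)
P_BLUE_GIVEN = {True: 0.9, False: 0.1}   # P(blueness | blegg)
P_EGG_GIVEN = {True: 0.9, False: 0.1}    # P(eggness | blegg)

def posterior_blegg(egg_shaped: bool) -> float:
    """P(blegg | eggness observation), by Bayes's theorem."""
    def likelihood(is_blegg: bool) -> float:
        p = P_EGG_GIVEN[is_blegg]
        return p if egg_shaped else 1 - p
    joint_blegg = P_BLEGG * likelihood(True)
    joint_not_blegg = (1 - P_BLEGG) * likelihood(False)
    return joint_blegg / (joint_blegg + joint_not_blegg)

# A black-and-white photograph of an egg-shaped object: the shape
# observation flows "up" to the category node ...
p_blegg = posterior_blegg(egg_shaped=True)  # 0.9
# ... and category-membership flows back "down" to predict blueness.
p_blue = p_blegg * P_BLUE_GIVEN[True] + (1 - p_blegg) * P_BLUE_GIVEN[False]  # 0.82

# The "category boundary" is just the region of observation-space where
# the posterior clears some threshold p.
print(p_blegg, p_blue, p_blegg > 0.5)
```

The factorization P(blegg)·P(blueness|blegg)·P(eggness|blegg) is what licenses routing the blueness prediction through the category node; the "boundary" is just where `posterior_blegg` crosses the threshold _p_.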