From: M. Taylor Saotome-Westlake Date: Mon, 16 Sep 2019 05:44:57 +0000 (-0700) Subject: I tell myself to let the story end X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=32c4be31dfbb8c0eea54a2306865f69e20c9bdbb;p=Ultimately_Untrue_Thought.git I tell myself to let the story end --- diff --git a/content/drafts/i-tell-myself-to-let-the-story-end-or-a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/i-tell-myself-to-let-the-story-end-or-a-hill-of-validity-in-defense-of-meaning.md index d8c302e..556ee9e 100644 --- a/content/drafts/i-tell-myself-to-let-the-story-end-or-a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/i-tell-myself-to-let-the-story-end-or-a-hill-of-validity-in-defense-of-meaning.md @@ -16,7 +16,7 @@ Status: draft I mostly haven't been doing so well for the past ten months or so. I mean, I've always been a high-neuroticism person, but this has probably been a below-average year even by my standards, with _many_ hours of lost sleep, occasional crying bouts, _many, many_ hours of obsessive ruminating-while-pacing instead of doing my dayjob, and too long with a Sara Bareilles song on loop to numb the pain. I've been reluctant to write about it in too much detail for poorly-understood psychological reasons. Maybe it would feel too much like attacking my friends? -But this blog is not about _not_ attacking my friends. This blog is about the truth. For my own sanity, for my own emotional closure, I need to tell the story as best I can. If it's an _incredibly boring and petty_ story about people getting _unreasonably angry_ about philosophy-of-language minutiæ, well, you've been warned. If the story makes me look bad in the reader's eyes (because you think I'm crazy for getting so unreasonably angry about philosophy-of-language minutiæ), then I shall be happy to look bad for _what I actually am_. 
(If _telling the truth_ about what I've been obsessively preoccupied with lately makes you dislike me, then you probably _should_ dislike me. If you were to approve of me on the basis of _factually inaccurate beliefs_, then the thing of which you approve, wouldn't be _me_.) +But this blog is not about _not_ attacking my friends. This blog is about the truth. For my own sanity, for my own emotional closure, I need to tell the story as best I can. If it's an _incredibly boring and petty_ story about me getting _unreasonably angry_ about philosophy-of-language minutiæ, well, you've been warned. If the story makes me look bad in the reader's eyes (because you think I'm crazy for getting so unreasonably angry about philosophy-of-language minutiæ), then I shall be happy to look bad for _what I actually am_. (If _telling the truth_ about what I've been obsessively preoccupied with all year makes you dislike me, then you probably _should_ dislike me. If you were to approve of me on the basis of _factually inaccurate beliefs_, then the thing of which you approve, wouldn't be _me_.) So, I've spent basically my entire adult life in this insular little intellectual subculture that was founded in the late 'aughts on an ideal of _systematically correct reasoning_. Sure, anyone will _say_ that their beliefs are true, but you can tell most people aren't being very serious about it. _We_ were going to be serious: starting with the shared canon of knowledge of cognitive biases, reflectivity, and Bayesian probability theory bequeathed to us by our founder, _we_ were going to make serious [collective](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) [intellectual progress](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible) in a way that had [never been done before](https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/). 
@@ -28,9 +28,7 @@ I guess I feel pretty naïve now, but—I _actually believed our own propaganda_ [...] -(I'm avoiding naming anyone in this post even when linking to their public writings, in order to try to keep the _rhetorical emphasis_ on "true tale of personal heartbreak, coupled with sober analysis of the sociopolitical factors leading thereto." This isn't supposed to be character/reputation attack on my friends and intellectual heroes; I just I don't _know how_ to tell the true tale of personal heartbreak without expressing some degree of disappointment in some people's characters. It is written that ["almost no one is evil; almost everything is broken."](https://blog.jaibot.com/). And the _first_ step towards fixing that which is broken, is _describing the problem_.) - - +(I'm avoiding naming anyone in this post even when linking to their public writings, in order to try to keep the _rhetorical emphasis_ on "true tale of personal heartbreak, coupled with sober analysis of the sociopolitical factors leading thereto." This isn't supposed to be a character/reputation attack on my friends and intellectual heroes—or the closest analogues to "friends" or "heroes" I've got. I just don't _know how_ to tell the true tale of personal heartbreak without expressing some degree of disappointment in some people's characters. It is written that ["almost no one is evil; almost everything is broken."](https://blog.jaibot.com/). And [the _first_ step](https://www.lesswrong.com/posts/uHYYA32CKgKT3FagE/hold-off-on-proposing-solutions) towards fixing that which is broken, is _describing the problem_.) [...] @@ -39,7 +37,7 @@ So, I think this is a bad argument. But specifically, it's a bad argument for _c In 2008, the Great Teacher had this really amazing series of posts explaining the hidden probability-theoretic structure of language and cognition. Essentially, explaining _natural language as an AI capability_. What your brain is doing when you [see a tiger and say, "Yikes! 
A tiger!"](https://www.lesswrong.com/posts/dMCFk2n2ur8n62hqB/feel-the-meaning) is governed by the [simple math](https://www.lesswrong.com/posts/HnPEpu5eQWkbyAJCT/the-simple-math-of-everything) by which intelligent systems make observations, use those observations to assign category-membership, and use category-membership to make predictions about properties which have not yet been observed. _Words_, language, are an information-theoretically efficient _code_ for such systems to share cognitive content. -And these posts hammered home the point over and over and over and _over_ again—culminating in the 37-part grand moral—that word and category definitions are _not_ arbitrary—there are optimality criteria that make some definitions _perform better_ than others as "cognitive technology"— +And these posts hammered home the point over and over and over and _over_ again—culminating in [the 37-part grand moral](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)—that word and category definitions are _not_ arbitrary, because there are optimality criteria that make some definitions _perform better_ than others as "cognitive technology"— > ["It is a common misconception that you can define a word any way you like."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences) @@ -47,29 +45,28 @@ And these posts hammered home the point over and over and over and _over_ again > ["When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."](https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions) -> [There's an idea, which you may have noticed I hate, that "you can define a word any way you like".](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels) +> ["There's an idea, which you may have 
noticed I hate, that 'you can define a word any way you like'."](https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels) -> [And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying "I can define a word any way I like".](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression) +> ["And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying 'I can define a word any way I like'."](https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression) -> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences) +> ["Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."](https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences) > ["And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations) -> [And this suggests another—yes, yet another—reason to be suspicious of the claim that "you can define a word any way you like". 
When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power.](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words) +> ["And this suggests another—yes, yet another—reason to be suspicious of the claim that 'you can define a word any way you like'. When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power."](https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words) -> [I say all this, because the idea that "You can X any way you like" is a huge obstacle to learning how to X wisely. "It's a free country; I have a right to my own opinion" obstructs the art of finding truth. "I can define a word any way I like" obstructs the art of carving reality at its joints. And even the sensible-sounding "The labels we attach to words are arbitrary" obstructs awareness of compactness.](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes) +> ["I say all this, because the idea that 'You can X any way you like' is a huge obstacle to learning how to X wisely. 'It's a free country; I have a right to my own opinion' obstructs the art of finding truth. 'I can define a word any way I like' obstructs the art of carving reality at its joints. And even the sensible-sounding 'The labels we attach to words are arbitrary' obstructs awareness of compactness."](https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes) > ["One may even consider the act of defining a word as a promise to \[the\] effect [...] 
\[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace) +Similarly, the Popular Author himself has written extensively about [the noncentral fallacy](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world), which he called _the worst argument in the world_, so +[...] - - +You see the problem. This is _basic shit_. As we say locally, this is _basic Sequences shit_. - - we did not realize that _whether I should cut my dick off_ would become a politicized issue. Now, it's not obvious that I _shouldn't_ cut my dick off! A lot of people seem to be doing it nowadays, and a lot of them seem to be pretty happy with their decision! But in order to _decide_ whether it's a good idea, I need _accurate information_. I need an _honest_ accounting of the costs and benefits of transition, so that I can cut my dick off in the possible worlds where that's a good idea, and not cut my dick off in the possible worlds where that's not a good idea. @@ -78,14 +75,6 @@ Now, it's not obvious that I _shouldn't_ cut my dick off! A lot of people seem t actively manufacture _fake rationality lessons_ that have been optimized to confuse me into cutting my dick off _independently_ of whether or not we live in a world - - - The "I can define the word 'woman' any way I want" argument is bullshit. All the actually-smart people know that it's bullshit at _some_ level, perhaps semi-consciously buried under a lot of cognitive dissonance. But it's _socially load-bearing_ bullshit that _not only_ does almost no one have an incentive to correct— - - - But no one has the incentive to correct the mistake in public. 
- - diff --git a/notes/i-tell-myself-notes.txt b/notes/i-tell-myself-notes.txt index 9147f23..1420adb 100644 --- a/notes/i-tell-myself-notes.txt +++ b/notes/i-tell-myself-notes.txt @@ -21,6 +21,9 @@ OUTLINE * if part of the resistance to an honest cost/benefit analysis is +* reasonable vs. unreasonable misunderstandings + + * what did I expect, taking on an egregore so much bigger than me? * if I agree that people should be allowed to transition, why am I freaking out? Because I _actually care about getting the theory correct_ * culture matters: if you're surrounded by crazy people @@ -184,7 +187,7 @@ analogy to school -https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong + @@ -246,4 +249,33 @@ _out of cards_ chapter and verse +I will try to be less silly about "my thing is actually important for the world" claims, when what I really mean is that I'm just not a consequentialist about speech + +"Outside the Laboratory" dumbness could be selection rather than causal + +I don't expect anyone to take a stand for a taboo topic that they don't care about +I would have expected Scott and/or Eliezer to help clarify the philosophy-of-language mistake, because that's core sequences material +but I was wrong + +I was imagining that it should be safe to endorse my "... Boundaries?" post, because the post is about philosophy, and surely it should be possible to endorse a specific article without that being taken as an endorsement of the author +but ... I guess that's not how politics works + +They can't _trust_ me not to leverage consensus on the categories-aren't-arbitrary for my object-level thing, in a way that would go on RationalWiki and SneerClub + +The counterargument that Dark Side Epistemology isn't that recursive + +----- + +I would _never_ write someone off for disagreeing with me about a complicated empirical question in psychology. 
Psychology is _really complicated_—and psychology questions that impinge on hot-button culture war issues are subject to additional biasing pressures. In this domain, no matter how "obvious" I think something is, I have to [take the Outside View seriously](http://www.overcomingbias.com/2007/07/beware-the-insi.html) and + +https://slatestarcodex.com/2015/08/15/my-id-on-defensiveness/ +https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/ +https://slatestarcodex.com/2019/02/22/rip-culture-war-thread/ Popular Author is very tired +https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/ + +Savvy people have an incentive to stonewall me until I give up and go away, and on any other subject where I didn't have Something to Protect, it would have _worked_ + +Lying or gerrymandering an individual object-level category boundary is forgivable; constructing a clever philosophical argument that lying is OK, is not + +Another kind of asymmetric weapon: whether this narrative "looks worse" for me or my subjects depends on the audience (you can read it as a tale of betrayal of sacred principles, or a tale of personal mental illness)