From: M. Taylor Saotome-Westlake
Date: Sat, 23 Jul 2022 18:37:01 +0000 (-0700)
Subject: long confrontation 5
X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=fcd5ff589813c38925abda33020346c06a22b09d;p=Ultimately_Untrue_Thought.git

long confrontation 5

The last box on the bottom row of the scratcher is "KEY".
---

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 9f3d5b8..1a2da3e 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -139,6 +139,8 @@ Sarah shying away, my rallying cry—

[TODO: categories clarification from EY—victory?!]

+[TODO: I didn't put this together until looking at my email just now, but based on the timing, the Feb. 2021 pronouns post was likely causally downstream of me being temporarily more salient to EY because of my highly-Liked response to his "anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside" from 17 February; it wasn't gratuitously out of the blue]
+
+[TODO: "simplest and best" pronoun proposal, sometimes personally prudent; support from Oli]
+
[TODO: the dolphin war, our thoughts about dolphins are literally downstream from Scott's political incentives in 2014; this is a sign that we're a cult]
diff --git a/notes/a-hill-twitter-reply.md b/notes/a-hill-twitter-reply.md
index 226126f..372e562 100644
--- a/notes/a-hill-twitter-reply.md
+++ b/notes/a-hill-twitter-reply.md
@@ -8,11 +8,11 @@ So, now having a Twitter account, I was browsing Twitter in the bedroom at the r
>
> I will never stand against those who stand against lies. But changing your name, asking people to address you by a different pronoun, and getting sex reassignment surgery, Is. Not. Lying. You are _ontologically_ confused if you think those acts are false assertions.

-Some of the replies tried explain the problem—and Yudkowsky kept refusing to understand:
+Some of the replies tried to explain the problem—and Yudkowsky kept refusing to understand—

> Using language in a way _you_ dislike, openly and explicitly and with public focus on the language and its meaning, is not lying. The proposition you claim false (chromosomes?) is not what the speech is meant to convey—and this is known to everyone involved, it is not a secret.

-Repeatedly:
+—repeatedly:

> You're mistaken about what the word means to you, I demonstrate thus: https://en.wikipedia.org/wiki/XX_male_syndrome
>
@@ -42,7 +42,7 @@ But this seems pretty unsatifying in the context of his claim to ["not [be] taki

If the extension of common words like 'woman' and 'man' is an issue of epistemic importance that rationalists should care about, then presumably so is Twitter's anti-misgendering policy—and if it _isn't_ (because you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning) then I'm not sure what's _left_ of the "Human's Guide to Words" sequence if the [37-part grand moral](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) needs to be retracted.

-I think it _is_ standing in defense of truth if have an _argument_ for why my preferred word usage does a better job at "carving reality at the joints", and the one bringing my usage explicitly into question doesn't have such an argument. 
As such, I didn't see the _practical_ difference between "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning", and "I can define a word any way I want." About which, a previous Eliezer Yudkowsky had written:
+I think it _is_ standing in defense of truth if I have an _argument_ for why my preferred word usage does a better job at "carving reality at the joints", and the one bringing my usage explicitly into question doesn't have such an argument. As such, I didn't see the _practical_ difference between "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning", and "I can define a word any way I want." About which, again, a previous Eliezer Yudkowsky had written:

> ["It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences)
>
@@ -64,6 +64,32 @@ I think it _is_ standing in defense of truth if have an _argument_ for why my pr
>
> ["One may even consider the act of defining a word as a promise to \[the\] effect [...] \[that the definition\] will somehow help you make inferences / shorten your messages."](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace)

+One could argue that this is unfairly interpreting Yudkowsky's Tweets as having a broader scope than was intended—that Yudkowsky _only_ meant to slap down the specific false claim that using 'he' for someone with a Y chromosome is lying, without intending any broader implications about trans issues or the philosophy of language. It wouldn't be realistic or fair to expect every public figure to host a truly exhaustive debate on all related issues every time a fallacy they encounter in the wild annoys them enough for them to Tweet about that specific fallacy.
+
+However, I don't think this "narrow" reading is the most natural one. Yudkowsky had previously written of what he called [the fourth virtue of evenness](http://yudkowsky.net/rational/virtues/): "If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider." He had likewise written [of reversed stupidity](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence) (bolding mine):
+
+> **To argue against an idea honestly, you should argue against the best arguments of the strongest advocates**. Arguing against weaker advocates proves _nothing_, because even the strongest idea will attract weak advocates.
+
+[TODO: Weak Men Are Superweapons]
+
+To be sure, it imposes a cost on speakers to not be able to Tweet about one specific annoying fallacy and then move on with their lives without the need for [endless disclaimers](http://www.overcomingbias.com/2008/06/against-disclai.html) about related but stronger arguments that they're _not_ addressing. 
But the fact that [Yudkowsky disclaimed that](https://twitter.com/ESYudkowsky/status/1067185907843756032) he wasn't taking a stand for or against Twitter's anti-misgendering policy demonstrates that he _didn't_ have an aversion to spending a few extra words to prevent the most common misunderstandings.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
@@ -135,23 +161,36 @@ Because of my hero worship, "he's being intellectually dishonest in response to

-(because we can define a word any way we like),

-From the text of your Tweets and your subsequent email clarification, I understand that you say your intent was specifically *only* to slap down the "Using 'he' for someone with a Y chromosome is lying" fallacy, and not to imply anything further. And I understand that it wouldn't be realistic or fair to expect every public figure to host a truly exhaustive debate on all related issues every time a fallacy they encounter in the wild annoys them enough for them to Tweet about that specific fallacy.

-Although it's important to check whether you've _actually_ encountered the annoying fallacy in the wild, rather than your [cached](https://www.lesswrong.com/posts/2MD3NMLBPCqPfnfre/cached-thoughts) misconception of what other people are saying. Your email of 29 November seems to imply that you were trying to subtweet Eric Weinstein—presumably his [7-tweet thread](https://twitter.com/EricRWeinstein/status/1067112939482574848) earlier the same day as your Tweets at issue? If so, I think you're strawmanning him pretty badly! If you reread _his_ Tweets (["This banning of 'deadnaming' is preposterous"](https://twitter.com/EricRWeinstein/status/1067112940686299136), ["treating Trans M/F *exactly* the same as born M/F would be medical malpractice"](https://twitter.com/EricRWeinstein/status/1067112943186145280)), I don't think he's _saying_ anything that could be summarized as "Pronouns are facts/lies!", _especially_ given that [he _explicitly says_](https://twitter.com/EricRWeinstein/status/1067112948529684480), "almost all of us who fight this issue in the #IDW voluntarily use people’s preferred pronouns".

-In any case, I hope you can see why I would be concerned about the *effects* of your Tweets on your readers, whatever your private intent? It is written of [the fourth virtue](http://yudkowsky.net/rational/virtues/): "If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider." It is likewise written [of reversed stupidity](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence):

-> **To argue against an idea honestly, you should argue against the best arguments of the strongest advocates**. \[bolding mine—ZMD\] Arguing against weaker advocates proves *nothing*, because even the strongest idea will attract weak advocates. 
-As _geniunely inconvenient_ as it is for people who just want to Tweet about one annoying fallacy and then move on with their lives without the need for [endless disclaimers](http://www.overcomingbias.com/2008/06/against-disclai.html), I fear that if you *just* slap down a [weak man](http://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/) from the "anti-trans" political coalition and then *don't say anything else* in a similarly prominent location, then you're knowably, predictably making your *readers* that much stupider, which has negative consequences for your "advancing the art of human rationality" project, even if you haven't said anything false—particularly because people look up to you as the one who taught them to aspire to a _[higher](https://www.lesswrong.com/posts/DoLQN5ryZ9XkZjq5h/tsuyoku-naritai-i-want-to-become-stronger) [standard](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible)_ [than](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) merely not-lying.

-The fact that [you disclaimed that](https://twitter.com/ESYudkowsky/status/1067185907843756032) you weren't taking a stand for or against Twitter's anti-misgendering policy demonstrates that you don't have a Hansonian aversion to spending a few extra words to prevent the most common misunderstandings. You've also previously written about a speaker's duty to teach the virtue of evenness [in "Knowing About Biases Can Hurt People"](https://www.lesswrong.com/posts/AdYdLP2sRqPMoe8fb/knowing-about-biases-can-hurt-people) (which I think should go the same for "Knowing About Philosophy of Language Can Hurt People"):

-> Whether I do it on paper, or in speech, I now try to never mention calibration and overconfidence unless I have first talked about disconfirmation bias, motivated skepticism, sophisticated arguers, and dysrationalia in the mentally agile. First, do no harm!

+As _genuinely inconvenient_ as it is for people who just want
+
+
+
+
+
+
+
+
+If you *just* slap down a [weak man](http://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/) from the "anti-trans" political coalition and then _don't say anything else_ in a similarly prominent location
+
+
+
+, then you're knowably, predictably making your _readers_ that much stupider.
+
+
+, which has negative consequences for his "advancing the art of human rationality" project, even if he hasn't said anything false—particularly because people look up to him as the one who taught them to aspire to a _[higher](https://www.lesswrong.com/posts/DoLQN5ryZ9XkZjq5h/tsuyoku-naritai-i-want-to-become-stronger) [standard](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible)_ [than](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) [merely not-lying](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly).
+
+
+
+
+
-A decade ago, back when you were [trying to explain your metaethics](https://wiki.lesswrong.com/wiki/Metaethics_sequence), I was part of the crowd saying, "Wait, how is this any different from anti-realism? You agree that there's no morality in the laws of physics themselves." 
And my present-day paraphrase of your reply—which still might not match what you actually believe, but is much more than I understood then—would be: "I agree that there's no morality written in the laws of physics, no [ontological XML tag](https://www.lesswrong.com/posts/PoDAyQMWEXBBBEJ5P/magical-categories) of Intrinsic Goodness, but you're not *done* [dissolving the question](https://www.lesswrong.com/posts/Mc6QcrsbH5NRXbCRX/dissolving-the-question) just by making that observation. People seem to be doing *cognitive work* when they argue about morality. What is the nature of that cognitive work? *That* is the question I am attempting to answer."

Similarly with categories in general, and sex (or "gender") categorization in particular. It's true that the same word can be used in many ways depending on context. But you're _not done_ dissolving the question just by making that observation. And the one who triumphantly shouts in the public square, "And *therefore*, people who object to my preferred use of language are ontologically confused!" is *ignoring the interesting part of the problem*. Gender identity's [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) mostly functions as a way to [avoid the belief's real weak points](https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points).
@@ -220,7 +259,3 @@ If you were Alice, and a _solid supermajority_ of your incredibly smart, incredi

With respect to transgender issues, this certainly _can_ go both ways: somewhere on Twitter, there are members of the "anti-trans" political coalition insisting, "No, that's _really_ a man because chromosomes" even though they know that that's not what members of the "pro-trans" coalition mean—although as stated earlier, I don't think Eric Weinstein is guilty of this. But given the likely distribution of your Twitter followers and what they need to hear, I'm very worried about the _consequences_ (again, remaining agnostic about your private intent) of slapping down the "But, but, chromosomes" idiocy and then not engaging with the _drop-dead obvious_ "But, but, clusters in high-dimensional configuration space that [aren't actually changeable with contemporary technology](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions)" steelman.

It makes sense that (I speculate) you might perceive political constraints on what you want to say in public. (I still work under a pseudonym myself; it would be wildly hypocritical of me to accuse anyone else of cowardice!) But I suspect that if you want to not get into a distracting political fight about topic X, then maybe the responsible thing to do is just not say anything about topic X, rather than engaging with the _stupid_ version of anti-X, and then [stonewalling](https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters) with "That's a policy question" when people [try to point out the problem](https://twitter.com/samsaragon/status/1067238063816945664)?

Having already done so, is it at all possible that you might want to provide your readers a clarification that [category boundaries are not arbitrary](http://lesswrong.com/lw/o0/where_to_draw_the_boundary/) and that [actually changing sex is a very hard technical problem](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions)? 
Or if you don't want to do that publicly, maybe an internal ["Halt, Melt, and Catch Fire event"](https://www.lesswrong.com/rationality/fighting-a-rearguard-action-against-the-truth) is in order on whatever processes led to the present absurd situation (where I feel like I need to explain "A Human's Guide to Words" to you, and Michael, Sarah, and Ben all seem to think I have a point)?

Alternatively, if _I'm_ philosophically in the wrong here, is there a particular argument I'm overlooking? I should be overjoyed to be corrected if I am in error, but "The rest of human civilization is a trash fire so whatever" is _not a counterargument!_
-
-
-
-