From 69d93bfea0c3b82d848a0ab30b2d7c6457c29bfe Mon Sep 17 00:00:00 2001 From: "Zack M. Davis" Date: Fri, 14 Jul 2023 22:08:18 -0700 Subject: [PATCH] memoir: pt. 2 homestretch tweaks Is the paragraph about my history with Ziz too many words? I think it adds "color" to mention how my Dumb Story overlaps with another community drama case. --- ...-hill-of-validity-in-defense-of-meaning.md | 42 ++++++++++--------- notes/memoir-sections.md | 14 +++---- 2 files changed, 28 insertions(+), 28 deletions(-) diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index c173c48..d296711 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -25,7 +25,7 @@ This is wrong because categories exist in our model of the world _in order to_ c In the case of Alexander's bogus argument about gender categories, the relevant principle ([#30](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) on [the list of 37](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong)) is that if you group things together in your map that aren't actually similar in the territory, you're going to make bad inferences. -Crucially, this is a general point about how language itself works that has _nothing to do with gender_. No matter what you believe about politically-controversial empirical questions, intellectually honest people should be able to agree that "I ought to accept an unexpected [_X_] or two deep inside the conceptual boundaries of what would normally be considered [_Y_] if [positive consequence]" is not the correct philosophy of language, _independently of the particular values of X and Y_. +Crucially, this is a general point about how language itself works that has _nothing to do with gender_. 
No matter what you believe about controversial empirical questions, intellectually honest people should be able to agree that "I ought to accept an unexpected [_X_] or two deep inside the conceptual boundaries of what would normally be considered [_Y_] if [positive consequence]" is not the correct philosophy of language, _independently of the particular values of X and Y_. This wasn't even what I was trying to talk to people about. _I_ thought I was trying to talk about autogynephilia as an empirical theory of psychology of late-onset gender dysphoria in males, the truth or falsity of which cannot be altered by changing the meanings of words. But at this point, I still trusted people in my robot cult to be basically intellectually honest, rather than slaves to their political incentives, so I endeavored to respond to the category-boundary argument under the assumption that it was an intellectually serious argument that someone could honestly be confused about. @@ -117,7 +117,7 @@ Relatedly, Scott Alexander had written about how ["weak men are superweapons"](h To be sure, it imposes a cost on speakers to not be able to Tweet about one specific annoying fallacy and then move on with their lives without the need for [endless disclaimers](http://www.overcomingbias.com/2008/06/against-disclai.html) about related but stronger arguments that they're not addressing. But the fact that [Yudkowsky disclaimed that](https://twitter.com/ESYudkowsky/status/1067185907843756032) he wasn't taking a stand for or against Twitter's anti-misgendering policy demonstrates that he _didn't_ have an aversion to spending a few extra words to prevent the most common misunderstandings. -Given that, it's hard to read the Tweets Yudkowsky published as anything other than an attempt to intimidate and delegitimize people who want to use language to reason about sex rather than gender identity. 
For example, deeper in the thread, [Yudkowsky wrote](https://twitter.com/ESYudkowsky/status/1067490362225156096): +Given that, it's hard to read the Tweets Yudkowsky published as anything other than an attempt to intimidate and delegitimize people who want to use language to reason about sex rather than gender identity. It's just not plausible that Yudkowsky was simultaneously savvy enough to choose to make these particular points while also being naïve enough to not understand the political context. Deeper in the thread, [he wrote](https://twitter.com/ESYudkowsky/status/1067490362225156096): > The more technology advances, the further we can move people towards where they say they want to be in sexspace. Having said this we've said all the facts. Who competes in sports segregated around an Aristotelian binary is a policy question (that I personally find very humorous). @@ -203,23 +203,23 @@ Again, I realize this must seem weird and cultish to any normal people reading t Anna didn't reply, but I apparently did interest Michael, who chimed in on the email thread to Yudkowsky. We had a long phone conversation the next day lamenting how the "rationalists" were dead as an intellectual community. -As for the attempt to intervene on Yudkowsky—here I need to make a digression about the constraints I'm facing in telling this Whole Dumb Story. _I_ would prefer to just tell this Whole Dumb Story as I would to my long-neglected Diary—trying my best at the difficult task of explaining what actually happened during a very important part of my life, without thought of concealing anything. +As for the attempt to intervene on Yudkowsky—here I need to make a digression about the constraints I'm facing in telling this Whole Dumb Story. _I_ would prefer to just tell this Whole Dumb Story as I would to my long-neglected Diary—trying my best at the difficult task of explaining what actually happened during an important part of my life, without thought of concealing anything. 
(If you are silent about your pain, _they'll kill you and say you enjoyed it_.) -Unfortunately, a lot of other people seem to have strong intuitions about "privacy", which bizarrely impose constraints on what _I'm_ allowed to say about my own life: in particular, it's considered unacceptable to publicly quote or summarize someone's emails from a conversation that they had reason to expect to be private. I feel obligated to comply with these widely held privacy norms, even if _I_ think they're paranoid and [anti-social](http://benjaminrosshoffman.com/blackmailers-are-privateers-in-the-war-on-hypocrisy/). (This secrecy-hating trait probably correlates with the autogynephilia blogging; someone otherwise like me who believed in privacy wouldn't be telling you this Whole Dumb Story.) +Unfortunately, a lot of other people seem to have strong intuitions about "privacy", which bizarrely impose constraints on what _I'm_ allowed to say about my own life: in particular, it's considered unacceptable to publicly quote or summarize someone's emails from a conversation that they had reason to expect to be private. I feel obligated to comply with these widely-held privacy norms, even if _I_ think they're paranoid and [anti-social](http://benjaminrosshoffman.com/blackmailers-are-privateers-in-the-war-on-hypocrisy/). (This secrecy-hating trait probably correlates with the autogynephilia blogging; someone otherwise like me who believed in privacy wouldn't be telling you this Whole Dumb Story.) So I would _think_ that while telling this Whole Dumb Story, I obviously have an inalienable right to blog about _my own_ actions, but I'm not allowed to directly refer to private conversations with named individuals in cases where I don't think I'd be able to get the consent of the other party. 
(I don't think I'm required to go through the ritual of asking for consent in cases where the revealed information couldn't reasonably be considered "sensitive", or if I know the person doesn't have hangups about this weird "privacy" thing.) In this case, I'm allowed to talk about emailing Yudkowsky (because that was _my_ action), but I'm not allowed to talk about anything he might have said in reply, or whether he did. Unfortunately, there's a potentially serious loophole in the commonsense rule: what if some of my actions (which I would have hoped to have an inalienable right to blog about) _depend on_ content from private conversations? You can't, in general, only reveal one side of a conversation. -Suppose Alice messages Bob at 5 _p.m._, "Can you come to the party?", and also, separately, that Alice messages Bob at 6 _p.m._, "Gout isn't contagious." Should Alice be allowed to blog about the messages she sent at 5 _p.m._ and 6 _p.m._, because she's only describing her own messages and not confirming or denying whether Bob replied at all, let alone quoting him? +Suppose Carol messages Dave at 5 _p.m._, "Can you come to the party?", and also, separately, that Carol messages Dave at 6 _p.m._, "Gout isn't contagious." Should Carol be allowed to blog about the messages she sent at 5 _p.m._ and 6 _p.m._, because she's only describing her own messages and not confirming or denying whether Dave replied at all, let alone quoting him? -I think commonsense privacy-norm-adherence intuitions actually say _No_ here: the text of Alice's messages makes it too easy to guess that sometime between 5 and 6, Bob probably said that he couldn't come to the party because he has gout. It would seem that Alice's right to talk about her own actions in her own life _does_ need to take into account some commonsense judgement of whether that leaks "sensitive" information about Bob. 
+I think commonsense privacy-norm-adherence intuitions actually say _No_ here: the text of Carol's messages makes it too easy to guess that sometime between 5 and 6, Dave probably said that he couldn't come to the party because he has gout. It would seem that Carol's right to talk about her own actions in her own life _does_ need to take into account some commonsense judgment of whether that leaks "sensitive" information about Dave.

In the substory (of my Whole Dumb Story) that follows, I'm going to describe several times that I and others emailed Yudkowsky to argue with what he said in public, without saying anything about whether Yudkowsky replied or what he might have said if he did reply. I maintain that I'm within my rights here, because I think commonsense judgment will agree that me talking about the arguments _I_ made does not leak any sensitive information about the other side of a conversation that may or may not have happened. I think the story comes off relevantly the same whether Yudkowsky didn't reply at all (_e.g._, because he was too busy with more existentially important things to check his email), or whether he replied in a way that I found sufficiently unsatisfying as to occasion the further emails with followup arguments that I describe. (Talking about later emails _does_ rule out the possible world where Yudkowsky had said, "Please stop emailing me," because I would have respected that, but the fact that he didn't say that isn't "sensitive".)

-It seems particularly important to lay out these judgments about privacy norms in connection to my attempts to contact Yudkowsky, because part of what I'm trying to accomplish in telling this Whole Dumb Story is to deal reputational damage to Yudkowsky, which I claim is deserved. (We want reputations to track reality. 
If you see Carol exhibiting a pattern of intellectual dishonesty, and she keeps doing it even after you talk to her about it privately, you might want to write a blog post describing the pattern in detail—not to _hurt_ Carol, particularly, but so that everyone else can make higher-quality decisions about whether they should believe the things that Carol says.) Given that motivation of mine, it seems important that I only try to hang Yudkowsky with the rope of what he said in public, where you can click the links and read the context for yourself. In the substory that follows, I also describe correspondence with Scott Alexander, but that doesn't seem sensitive in the same way, because I'm not particularly trying to deal reputational damage to Alexander. (Not because Scott performed well, but because one wouldn't really have expected him to in this situation; Alexander's reputation isn't so direly in need of correction.) +It seems particularly important to lay out these judgments about privacy norms in connection to my attempts to contact Yudkowsky, because part of what I'm trying to accomplish in telling this Whole Dumb Story is to deal reputational damage to Yudkowsky, which I claim is deserved. (We want reputations to track reality. If you see Erin exhibiting a pattern of intellectual dishonesty, and she keeps doing it even after you talk to her about it privately, you might want to write a blog post describing the pattern in detail—not to _hurt_ Erin, particularly, but so that everyone else can make higher-quality decisions about whether they should believe the things that Erin says.) Given that motivation of mine, it seems important that I only try to hang Yudkowsky with the rope of what he said in public, where you can click the links and read the context for yourself: I'm attacking him, but not betraying him. 
In the substory that follows, I also describe correspondence with Scott Alexander, but that doesn't seem sensitive in the same way, because I'm not particularly trying to deal reputational damage to Alexander. (Not because Scott performed well, but because one wouldn't really have expected him to in this situation; Alexander's reputation isn't so direly in need of correction.) Thus, I don't think I should say whether Yudkowsky replied to Michael's and my emails, nor (again) whether he accepted the cheerful-price money, because any conversation that may or may not have occurred would have been private. But what I _can_ say, because it was public, is that we saw [this addition to the Twitter thread](https://twitter.com/ESYudkowsky/status/1068071036732694529): @@ -253,7 +253,7 @@ The reason to write this as a desperate email plea to Scott Alexander instead of Back in 2010, the rationalist community had a shared understanding that the function of language is to describe reality. Now, we didn't. If Scott didn't want to cite my creepy blog about my creepy fetish, that was fine; I liked getting credit, but the important thing was that this "No, the Emperor isn't naked—oh, well, we're not claiming that he's wearing any garments—it would be pretty weird if we were claiming _that!_—it's just that utilitarianism implies that the _social_ property of clothedness should be defined this way because to do otherwise would be really mean to people who don't have anything to wear" maneuver needed to _die_, and he alone could kill it. -Scott didn't get it. We agreed that gender categories based on self-identity, natal sex, and passing each had their own pros and cons, and that it's uninteresting to focus on whether something "really" belongs to a category rather than on communicating what you mean. 
Scott took this to mean that what convention to use is a pragmatic choice we can make on utilitarian grounds, and that being nice to trans people was worth a little bit of clunkiness—that the mental health benefits to trans people were obviously enough to tip the first-order uilitarian calculus. +Scott didn't get it. We agreed that gender categories based on self-identity, natal sex, and passing each had their own pros and cons, and that it's uninteresting to focus on whether something "really" belongs to a category rather than on communicating what you mean. Scott took this to mean that what convention to use is a pragmatic choice we can make on utilitarian grounds, and that being nice to trans people was worth a little bit of clunkiness—that the mental health benefits to trans people were obviously enough to tip the first-order utilitarian calculus. I didn't think anything about "mental health benefits to trans people" was obvious. More importantly, I considered myself to be prosecuting not the object-level question of which gender categories to use but the meta-level question of what normative principles govern the use of categories. For this, "whatever, it's a pragmatic choice, just be nice" wasn't an answer, because the normative principles exclude "just be nice" from being a relevant consideration. @@ -295,13 +295,15 @@ It was also around this time that our posse picked up a new member, whom I'll ca ----- -On 5 January 2019, I met with Michael and his associate Aurora Quinn-Elmore in San Francisco to attempt mediated discourse with [Ziz](https://sinceriously.fyi/) and [Gwen](https://everythingtosaveit.how/), who were considering suing the [Center for Applied Rationality](https://rationality.org/) (CfAR)[^what-is-cfar] for discriminating against trans women. Michael hoped to dissuade them from a lawsuit—not because he approved of CfAR's behavior, but because lawyers make everything worse. 
+On 5 January 2019, I met with Michael and his associate Aurora Quinn-Elmore in San Francisco to attempt mediated discourse with [Ziz](https://web.archive.org/web/20230601015012/https://sinceriously.fyi/) and [Gwen](https://web.archive.org/web/20230308021910/https://everythingtosaveit.how/), who were considering suing the [Center for Applied Rationality](https://rationality.org/) (CfAR)[^what-is-cfar] for discriminating against trans women. Michael hoped to dissuade them from a lawsuit—not because he approved of CfAR's behavior, but because lawyers make everything worse. [^what-is-cfar]: CfAR had been spun off from MIRI in 2012 as a dedicated organization for teaching rationality. -Ziz recounted [her](/2019/Oct/self-identity-is-a-schelling-point/) story of how Anna Salamon (in her capacity as President of CfAR and community leader) allegedly engaged in [conceptual warfare](https://sinceriously.fyi/intersex-brains-and-conceptual-warfare/) to falsely portray Ziz as a predatory male. I was unimpressed: in my worldview, I didn't think Ziz had the right to say "I'm not a man," and expect people to just believe that. ([I remember that](https://twitter.com/zackmdavis/status/1081952880649596928) at one point, Ziz answered a question with, "Because I don't run off masochistic self-doubt like you." I replied, "That's fair.") But I did respect that Ziz actually believed in an intersex brain theory: in Ziz and Gwen's worldview, people's genders were a _fact_ of the matter, not a manipulation of consensus categories to make people happy. +Despite our personality and worldview differences, I had had a number of cooperative interactions with Ziz a couple years before. We had argued about the etiology of transsexualism in late 2016. When I sent her some delusional PMs during my February 2017 psychotic break, she came over to my apartment with chocolate ("allegedly good against dementors"), although I wasn't there. 
I had awarded her $1200 as part of a [credit-assignment ritual](http://zackmdavis.net/blog/2017/03/friends-can-change-the-world-or-request-for-social-technology-credit-assignment-rituals/) to compensate the twenty-one people who were most responsible for me successfully navigating my psychological crises of February and April 2017. (The fact that she had been up to _argue_ about trans etiology meant a lot to me.) I had accepted some packages for her at my apartment in mid-2017 when she was preparing to live on a boat and didn't have a mailing address. -Probably the most ultimately consequential part of this meeting was Michael verbally confirming to Ziz that MIRI had settled with a disgruntled former employee, Louie Helm, who had put up [a website slandering them](https://archive.ph/Kvfus). (I don't know the details of the alleged settlement. I'm working off of [Ziz's notes](https://sinceriously.fyi/intersex-brains-and-conceptual-warfare/) rather than remembering that part of the conversation clearly myself; I don't know what Michael knew.) What was significant was that if MIRI _had_ paid Helm as part of an agreement to get the slanderous website taken down, then (whatever the nonprofit best-practice books might have said about whether this was a wise thing to do when facing a dispute from a former employee) that would decision-theoretically amount to a blackmail payout, which seemed to contradict MIRI's advocacy of timeless decision theories (according to which you [shouldn't be the kind of agent that yields to extortion](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/)). +At this meeting, Ziz recounted [her](/2019/Oct/self-identity-is-a-schelling-point/) story of how Anna Salamon (in her capacity as President of CfAR and community leader) allegedly engaged in [conceptual warfare](https://web.archive.org/web/20230601044116/https://sinceriously.fyi/intersex-brains-and-conceptual-warfare/) to falsely portray Ziz as a predatory male. 
I was unimpressed: in my worldview, I didn't think Ziz had the right to say "I'm not a man," and expect people to just believe that. ([I remember that](https://twitter.com/zackmdavis/status/1081952880649596928) at one point, Ziz answered a question with, "Because I don't run off masochistic self-doubt like you." I replied, "That's fair.") But I did respect that Ziz actually believed in an intersex brain theory: in Ziz and Gwen's worldview, people's genders were a _fact_ of the matter, not a manipulation of consensus categories to make people happy. + +Probably the most ultimately consequential part of this meeting was Michael verbally confirming to Ziz that MIRI had settled with a disgruntled former employee, Louie Helm, who had put up [a website slandering them](https://archive.ph/Kvfus). (I don't know the details of the alleged settlement. I'm working off of [Ziz's notes](https://web.archive.org/web/20230601044116/https://sinceriously.fyi/intersex-brains-and-conceptual-warfare/) rather than remembering that part of the conversation clearly myself; I don't know what Michael knew.) What was significant was that if MIRI _had_ paid Helm as part of an agreement to get the slanderous website taken down, then (whatever the nonprofit best-practice books might have said about whether this was a wise thing to do when facing a dispute from a former employee) that would decision-theoretically amount to a blackmail payout, which seemed to contradict MIRI's advocacy of timeless decision theories (according to which you [shouldn't be the kind of agent that yields to extortion](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/)). ---- @@ -367,7 +369,7 @@ It made sense for Anna to not like Michael anymore because of his personal condu ----- -I wasn't the only one whose life was being disrupted by political drama in early 2019. 
On 22 February, Scott Alexander [posted that the /r/slatestarcodex Culture War Thread was being moved](https://slatestarcodex.com/2019/02/22/rip-culture-war-thread/) to a new non–_Slate Star Codex_–branded subreddit in the hopes that would curb some of the harrassment he had been receiving. Alexander claimed that according to poll data and his own impressions, the Culture War Thread featured a variety of ideologically diverse voices but had nevertheless acquired a reputation as being a hive of right-wing scum and villainy.
+I wasn't the only one whose life was being disrupted by political drama in early 2019. On 22 February, Scott Alexander [posted that the /r/slatestarcodex Culture War Thread was being moved](https://slatestarcodex.com/2019/02/22/rip-culture-war-thread/) to a new non–_Slate Star Codex_–branded subreddit in the hopes that this would curb some of the harassment he had been receiving. Alexander claimed that according to poll data and his own impressions, the Culture War Thread featured a variety of ideologically diverse voices but had nevertheless acquired a reputation as being a hive of right-wing scum and villainy.

[Yudkowsky Tweeted](https://twitter.com/ESYudkowsky/status/1099134795131478017):

@@ -381,7 +383,7 @@ How would Yudkowsky react if someone said that? _My model_ of the Sequences-era 

But I had no idea what the real Yudkowsky of 2019 would say. If the moral of the "hill of meaning in defense of validity" thread had been that the word "lie" should be reserved for _per se_ direct falsehoods, well, what direct falsehood was being asserted by Scott's detractors? I didn't think anyone was claiming that, say, Scott _identified_ as alt-right, any more than anyone was claiming that trans women have two X chromosomes. 
Commenters on /r/SneerClub had been pretty explicit in [their](https://old.reddit.com/r/SneerClub/comments/atgejh/rssc_holds_a_funeral_for_the_defunct_culture_war/eh0xlgx/) [criticism](https://old.reddit.com/r/SneerClub/comments/atgejh/rssc_holds_a_funeral_for_the_defunct_culture_war/eh3jrth/) that the Culture War thread harbored racists (_&c._) and possibly that Scott himself was a secret racist, with respect to a definition of racism that included the belief that there exist genetically mediated population differences in the distribution of socially relevant traits and that this probably had decision-relevant consequences that should be discussable somewhere. -And this was _correct_. For example, Alexander's ["The Atomic Bomb Considered As Hungarian High School Science Fair Project"](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/) favorably cites Cochran _et al._'s genetic theory of Ashkenazi achievement as "really compelling." Scott was almost certainly "guilty" of the category membership that the speech was meant to convey—it's just that Sneer Club got to choose the category. If a machine-learning classifer returns positive on both Scott Alexander and Richard Spencer, the correct response is not that the classifier is "lying" (what would that even mean?) but that the classifier is not very useful for understanding Scott Alexander's effects on the world. +And this was _correct_. For example, Alexander's ["The Atomic Bomb Considered As Hungarian High School Science Fair Project"](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/) favorably cites Cochran _et al._'s genetic theory of Ashkenazi achievement as "really compelling." Scott was almost certainly "guilty" of the category membership that the speech was meant to convey—it's just that Sneer Club got to choose the category. 
If a machine-learning classifier returns positive on both Scott Alexander and Richard Spencer, the correct response is not that the classifier is "lying" (what would that even mean?) but that the classifier is not very useful for understanding Scott Alexander's effects on the world. Of course, Scott is great, and it was right that we should defend him from the bastards trying to ruin his reputation, and it was plausible that the most politically convenient way to do that was to pound the table and call them lying sociopaths rather than engaging with the substance of their claims—much as how someone being tried under an unjust law might plead "Not guilty" to save their own skin rather than tell the whole truth and hope for [jury nullification](https://en.wikipedia.org/wiki/Jury_nullification). @@ -389,9 +391,9 @@ But, I argued, political convenience came at a dire cost to [our common interest Similarly, once someone is known to [vary](https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) the epistemic standards of their public statements for political convenience—if they say categorizations can be lies when that happens to help their friends, but seemingly deny the possibility when that happens to make them look good politically ... -Well, you're still better off listening to them than the whistling of the wind, because the wind in various possible worlds is presumably uncorrelated with most of the things you want to know about, whereas [clever arguers](https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence) who [don't tell explicit lies](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/) are constrained in how much they can mislead you. But it seems plausible that you might as well listen to any other arbitrary smart person with a blue check and 20K Twitter followers. 
It might be a useful exercise, for Yudkowsky to think of what he would _actually say_ if someone with social power _actually did this to him_ when he was trying to use language to reason about Something he had to Protect?
+Well, you're still better off listening to them than the whistling of the wind, because the wind in various possible worlds is presumably uncorrelated with most of the things you want to know about, whereas [clever arguers](https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence) who [don't tell explicit lies](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases) are constrained in how much they can mislead you. But it seems plausible that you might as well listen to any other arbitrary smart person with a blue check and 20K Twitter followers. It might be a useful exercise for Yudkowsky to think of what he would _actually say_ if someone with social power _actually did this to him_ when he was trying to use language to reason about Something he had to Protect.

-(Note, my claim here is _not_ that "Pronouns aren't lies" and "Scott Alexander is not a racist" are similarly misinformative. Rather, I'm saying that whether "You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning" makes sense _as a response to_ "X isn't a Y" shouldn't depend on the specific values of X and Y. Yudkowsky's behavior the other month had made it look like he thought that "You're not standing in defense of truth if ..." _was_ a valid response when, say, X = "[Caitlyn Jenner](https://en.wikipedia.org/wiki/Caitlyn_Jenner)" and Y = "woman." 
I was saying that whether or not it's a valid response, we should, as a matter of [local validity](https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization), apply the _same_ standard when X = "Scott Alexander" and Y = "racist.") +(Note, my claim here is _not_ that "Pronouns aren't lies" and "Scott Alexander is not a racist" are similarly misinformative. Rather, I'm saying that whether "You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning" makes sense _as a response to_ "_X_ isn't a _Y_" shouldn't depend on the specific values of _X_ and _Y_. Yudkowsky's behavior the other month had made it look like he thought that "You're not standing in defense of truth if ..." _was_ a valid response when, say, _X_ = "[Caitlyn Jenner](https://en.wikipedia.org/wiki/Caitlyn_Jenner)" and _Y_ = "woman." I was saying that whether or not it's a valid response, we should, as a matter of [local validity](https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization), apply the _same_ standard when _X_ = "Scott Alexander" and _Y_ = "racist.") Without disclosing any specific content from private conversations that may or may not have happened, I can say that our posse did not get the kind of engagement from Yudkowsky that we were hoping for. @@ -407,7 +409,7 @@ Meanwhile, my email thread with Scott started up again. I expressed regret that One of Alexander's [most popular _Less Wrong_ posts ever had been about the noncentral fallacy, which Alexander called "the worst argument in the world"](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world): those who (for example) crow that abortion is _murder_ (because murder is the killing of a human being), or that Martin Luther King, Jr. 
was a _criminal_ (because he defied the segregation laws of the South), are engaging in a dishonest rhetorical maneuver in which they're trying to trick their audience into assigning attributes of the typical "murder" or "criminal" to what are very noncentral members of those categories. -Even if you're opposed to abortion, or have negative views about the historical legacy of Dr. King, this isn't the right way to argue. If you call Diane a _murderer_, that causes me to form a whole bunch of implicit probabilistic expectations on the basis of what the typical "murder" is like—expectations about Diane's moral character, about the suffering of a victim whose hopes and dreams were cut short, about Diane's relationship with the law, _&c._—most of which get violated when you reveal that the murder victim was an embryo. +Even if you're opposed to abortion, or have negative views about the historical legacy of Dr. King, this isn't the right way to argue. If you call Fiona a _murderer_, that causes me to form a whole bunch of implicit probabilistic expectations on the basis of what the typical "murder" is like—expectations about Fiona's moral character, about the suffering of a victim whose hopes and dreams were cut short, about Fiona's relationship with the law, _&c._—most of which get violated when you reveal that the murder victim was an embryo. In the form of a series of short parables, I tried to point out that Alexander's own "The Worst Argument in the World" is complaining about the _same_ category-gerrymandering move that his "... Not Man for the Categories" comes out in favor of. We would not let someone get away with declaring, "I ought to accept an unexpected abortion or two deep inside the conceptual boundaries of what would normally not be considered murder if it'll save someone's life." 
Maybe abortion _is_ wrong and relevantly similar to the central sense of "murder", but you need to make that case _on the empirical merits_, not by linguistic fiat (Subject: "twelve short stories about language"). @@ -445,7 +447,7 @@ Also, Scott had asked me if it wouldn't be embarrassing if the community solved Also-also, Scott had proposed a super–[Outside View](https://www.lesswrong.com/tag/inside-outside-view) of the culture war as an evolutionary process that produces memes optimized to trigger PTSD syndromes and suggested that I think of _that_ as what was happening to me. But, depending on how much credence Scott put in social proof, mightn't the fact that I managed to round up this whole posse to help me repeatedly argue with (or harass) Yudkowsky shift his estimate over whether my concerns had some objective merit that other people could see, too? It could simultaneously be the case that I had culture-war PTSD _and_ my concerns had merit. -Michael replied at 5:58 _a.m._, saying that everyone's first priority should be making sure that I could sleep—that given that I was failing to adhere to my commitments to sleep almost immediately after making them, I should be interpreted as urgently needing help, and that Scott had comparative advantage in helping, given that my distress was most centrally over Scott gaslighting me. +Michael replied at 5:58 _a.m._, saying that everyone's first priority should be making sure that I could sleep—that given that I was failing to adhere to my commitments to sleep almost immediately after making them, I should be interpreted as urgently needing help, and that Scott had comparative advantage in helping, given that my distress was most centrally over Scott gaslighting me, asking me to consider the possibility that I was wrong while visibly not considering the same possibility regarding himself. That seemed a little harsh on Scott to me. 
At 6:14 _a.m._ and 6:21 _a.m._, I wrote a couple emails to everyone that my plan was to get a train back to my own apartment to sleep, that I was sorry for making such a fuss despite being incentivizable while emotionally distressed, that I should be punished in accordance with the moral law for sending too many hysterical emails because I thought I could get away with it, that I didn't need Scott's help, and that I thought Michael was being a little aggressive about that, but that I guessed that's also kind of Michael's style. @@ -457,9 +459,9 @@ Anyway, I did get to my apartment and sleep for a few hours. One of the other fr At some level, I wanted Scott to know how frustrated I was about his use of "mental health for trans people" as an Absolute Denial Macro. But when Michael started advocating on my behalf, I started to minimize my claims because I had a generalized attitude of not wanting to sell myself as a victim. Ben pointed out that [making oneself mentally ill in order to extract political concessions](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) only works if you have a lot of people doing it in a visibly coordinated way—and even if it did work, getting into a dysphoria contest with trans people didn't seem like it led anywhere good. -I supposed that in Michael's worldview, aggression is more honest than passive-aggression. That seemed true, but I was psychologically limited in how much overt aggression I was willing to deploy against my friends. (And particularly Yudkowsky, whom I still hero-worshipped.) But clearly, the tension between "I don't want to do too much social aggression" and "Losing the Category War within the rationalist community is _absolutely unacceptable_" was causing me to make wildly inconsistent decisions. (Emailing Scott at 4 _a.m._ and then calling Michael "aggressive" when he came to defend me was just crazy: either one of those things could make sense, but not both.) 
+I supposed that in Michael's worldview, aggression is more honest than passive-aggression. That seemed true, but I was psychologically limited in how much overt aggression I was willing to deploy against my friends. (And particularly Yudkowsky, whom I still hero-worshiped.) But clearly, the tension between "I don't want to do too much social aggression" and "Losing the Category War within the rationalist community is _absolutely unacceptable_" was causing me to make wildly inconsistent decisions. (Emailing Scott at 4 _a.m._ and then calling Michael "aggressive" when he came to defend me was just crazy: either one of those things could make sense, but not both.)

-Did I just need to accept that was no such a thing as a "rationalist community"? (Sarah had told me as much two years ago while tripsitting me during my psychosis relapse, but I hadn't made the corresponing mental adjustments.)
+Did I just need to accept that there was no such thing as a "rationalist community"? (Sarah had told me as much two years ago while tripsitting me during my psychosis relapse, but I hadn't made the corresponding mental adjustments.)

On the other hand, a possible reason to be attached to the "rationalist" brand name and social identity that wasn't just me being stupid was that _the way I talk_ had been trained really hard on this subculture for _ten years_. Most of my emails during this whole campaign had contained multiple Sequences or _Slate Star Codex_ links that I could expect the recipients to have read. I could use [the phrase "Absolute Denial Macro"](https://www.lesswrong.com/posts/t2NN6JwMFaqANuLqH/the-strangest-thing-an-ai-could-tell-you) in conversation and expect to be understood. If I gave up on the "rationalists" being a thing, and went out into the world to make friends with _Quillette_ readers or arbitrary University of Chicago graduates, then I would lose all that accumulated capital.
Here, I had a massive home territory advantage because I could appeal to Yudkowsky's writings about the philosophy of language from ten years ago and people couldn't say, "Eliezer _who?_ He's probably a Bad Man." @@ -509,7 +511,7 @@ Concerning what others were thinking: on Discord in January, Kelsey Piper had to I [didn't want to bring it up at the time because](https://twitter.com/zackmdavis/status/1088459797962215429) I was so overjoyed that the discussion was actually making progress on the core philosophy-of-language issue, but Scott _did_ seem to be pretty explicit that his position was about happiness rather than usability? If Kelsey _thought_ she agreed with Scott, but actually didn't, that sham consensus was a bad sign for our collective sanity, wasn't it? -As for the parable about orcs, I thought it was significant that Scott chose to tell the story from the standpoint of non-orcs deciding what [verbal behaviors](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) to perform while orcs are around, rather than the standpoint of the orcs themselves. For one thing, how do you _know_ that serving evil-Melkor is a life of constant torture? Is it at all possible, in the bowels of Christ, that someone has given you misleading information about that? +As for the parable about orcs, I thought it was significant that Scott chose to tell the story from the standpoint of non-orcs deciding what [verbal behaviors](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) to perform while orcs are around, rather than the standpoint of the orcs themselves. For one thing, how do you _know_ that serving evil-Melkor is a life of constant torture? Is it at all possible that someone has given you misleading information about that? Moreover, you _can't_ just give an orc a clever misinterpretation of an oath and have them believe it. 
First you have to [cripple their _general_ ability](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) to correctly interpret oaths, for the same reason that you can't get someone to believe that 2+2=5 without crippling their general ability to do arithmetic. We weren't talking about a little "white lie" that the listener will never get to see falsified (like telling someone their dead dog is in heaven); the orcs already know the text of the oath, and you have to break their ability to _understand_ it. Are you willing to permanently damage an orc's ability to reason in order to save them pain? For some sufficiently large amount of pain, surely. But this isn't a choice to make lightly—and the choices people make to satisfy their own consciences don't always line up with the volition of their alleged beneficiaries. We think we can lie to save others from pain, without wanting to be lied to ourselves. But behind the veil of ignorance, it's the same choice! diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md index 1bdb8f8..6980c57 100644 --- a/notes/memoir-sections.md +++ b/notes/memoir-sections.md @@ -1,12 +1,10 @@ pt. 
2 near editing tier—
-_ alphabet bump
-_ Ziz and Gwen archive
-_ Ziz chocolate anecdote
-_ explain Michael's gaslighting charge, using the "bowels of Christ" language
-_ the function of privacy norms is to protect you from people who want to selectively reveal information to hurt you, so it makes sense that I'm particularly careful about Yudkowsky's privacy and not Scott's, because I totally am trying to hurt Yudkowsky (this also protects me from the charge that by granting more privacy to Yudkowsky than Scott, I'm implying that Yudkowsky said something more incriminating; the difference in treatment is about _me_ and my expectations, rather than what they may or may not have said when I tried emailing them); I want it to be clear that I'm attacking him but not betraying him
-_ mention my "trembling hand" history with secrets, not just that I don't like it
-_ Eric Weinstein, who was not making this mistake
-_ I claim that I'm not doing much psychologizing because implausible to be simultaenously savvy enough to say this, and naive enough to not be doing so knowingly
+✓ alphabet bump
+✓ Ziz and Gwen archive
+✓ explain Michael's gaslighting charge
+✓ attacking him but not betraying him
+✓ Ziz chocolate anecdote/history
+✓ I claim that I'm not doing much psychologizing because implausible to be simultaneously savvy enough to say this, and naive enough to not be doing so knowingly
------
-- 
2.17.1