From: M. Taylor Saotome-Westlake Date: Mon, 22 Aug 2022 01:12:14 +0000 (-0700) Subject: memoir: aborting Michael's intervention, proton concession X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=f0735b6d188a59d63003a16efc7540ee05d4259c;p=Ultimately_Untrue_Thought.git memoir: aborting Michael's intervention, proton concession --- diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index f444644..3d27a9c 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -164,7 +164,7 @@ Of course, such speech restrictions aren't necessarily "irrational", depending o In contrast, by claiming to be "not taking a stand for or against any Twitter policies" while insinuating that people who oppose the policy are ontologically confused, Yudkowsky was being either (somewhat implausibly) stupid or (more plausibly) intellectually dishonest: of _course_ the point of speech codes is suppress ideas! Given that the distinction between facts and policies is so obviously _not anyone's crux_—the smarter people in the "anti-trans" faction already know that, and the dumber people in the faction wouldn't change their alignment if they were taught—it's hard to see what the _point_ of harping on the fact/policy distiction would be, _except_ to be seen as implicitly taking a stand for the "pro-trans" faction, while [putting on a show of being politically "neutral."](https://www.lesswrong.com/posts/jeyvzALDbjdjjv5RW/pretending-to-be-wise) -It makes sense that Yudkowsky might perceive political constraints on what he might want to say in public—especially when you look at what happened to the _other_ Harry Potter author. (Despite my misgivings—and the fact that at this point it's more of a genre convention or a running joke, rather than any attempt at all to conceal my identity—this blog _is_ still published under a pseudonym; it would be hypocritical of me to accuse someone of cowardice about what they're willing to attach their real name to.) +It makes sense that Yudkowsky might perceive political constraints on what he might want to say in public—especially when you look at [what happened to the _other_ Harry Potter author](https://en.wikipedia.org/wiki/Politics_of_J._K._Rowling#Transgender_people). (Despite my misgivings—and the fact that at this point it's more of a genre convention or a running joke, rather than any attempt at all to conceal my identity—this blog _is_ still published under a pseudonym; it would be hypocritical of me to accuse someone of cowardice about what they're willing to attach their real name to.) But if Yudkowsky didn't want to get into a distracting political fight about a topic, then maybe the responsible thing to do would have been to just not say anything about the topic, rather than engaging with the _stupid_ version of the opposition and stonewalling with "That's a policy question" when people tried to point out the problem?! @@ -192,7 +192,7 @@ As for the attempt to intervene on Yudkowsky—well, [again](/2022/TODO/blanchar > I was sent this (by a third party) as a possible example of the sort of argument I was looking to read: [http://unremediatedgender.space/2018/Feb/the-categories-were-made-for-man-to-make-predictions/](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/). Without yet judging its empirical content, I agree that it is not ontologically confused. 
It's not going "But this is a MAN so using 'she' is LYING." -Look at that! The great Eliezer Yudkowsky said that my position is not ontologically confused. That's _probably_ high praise coming from him! You might think that should be the end of the matter. Yudkowsky denounced a particular philosophical confusion, I already had a related objection written up, and he acknowledged my objection as not being the confusion he was trying to police. I _should_ be satisfied, right? +Look at that! The great Eliezer Yudkowsky said that my position is not ontologically confused. That's _probably_ high praise coming from him! You might think that should have been the end of the matter. Yudkowsky denounced a particular philosophical confusion, I already had a related objection written up, and he acknowledged my objection as not being the confusion he was trying to police. I _should_ be satisfied, right? I wasn't, in fact, satisfied. This little "not ontologically confused" clarification buried in the replies was _much less visible_ than the bombastic, arrogant top level pronouncement insinuating that resistance to gender-identity claims _was_ confused. (1 Like on this reply, _vs._ 140 Likes/21 Retweets on start of thread.) I expected that the typical reader who had gotten the impression from the initial thread that Yudkowsky thought that gender-identity skeptics didn't have a leg to stand on, would not, actually, be disabused of this impression by the existence of this little follow-up. Was it greedy of me to want something _louder_? @@ -244,13 +244,13 @@ My _hope_ was that it was possible to apply just enough "What kind of rationalis There's a view that assumes that as long as everyone is being cordial, our truthseeking public discussion must be basically on-track: if no one overtly gets huffily offended and calls to burn the heretic, then the discussion isn't being warped by the fear of heresy. -I do not hold this view. I think there's a _subtler_ failure mode where people know what the politically-favored bottom line is, and collude to ignore, nitpick, or just be targetedly _uninterested_ in any fact or line of argument that doesn't fit the party line. I want to distinguish between direct ideological conformity enforcement attempts, and "people not living up to their usual epistemic standards in response to ideological conformity enforcement in the general culture they're embedded in." +I do not hold this view. I think there's a _subtler_ failure mode where people know what the politically-favored bottom line is, and collude to ignore, nitpick, or just be targetedly _uninterested_ in any fact or line of argument that doesn't fit the party line. I want to distinguish between direct ideological conformity enforcement attempts, and "people not living up to their usual epistemic standards in response to ideological conformity enforcement in the general culture they're embedded in." -Especially compared to normal Berkeley, I had to give the Berkeley "rationalists" credit for being _very good_ at free speech norms. (I'm not sure I would be saying this in the world where Scott Alexander didn't have a traumatizing experience with social justice in college, causing him to dump a ton of anti-social-justice, pro-argumentative-charity antibodies in the "rationalist" collective "water supply" after he became our subculture's premier writer. But it was true in _our_ world.) 
I didn't want to fall into the [bravery-debate](http://slatestarcodex.com/2013/05/18/against-bravery-debates/) trap of, "Look at me, I'm so heroically persecuted, therefore I'm right (therefore you should have sex with me)". I wasn't angry at the "rationalists" for being silenced or shouted down (which I wasn't); I was angry at them for _making bad arguments_ and systematically refusing to engage with the obvious counterarguments when they're made.
+Especially compared to normal Berkeley, I had to give the Berkeley "rationalists" credit for being _very good_ at free speech norms. (I'm not sure I would be saying this in the world where Scott Alexander didn't have a [traumatizing experience with social justice in college](https://slatestarcodex.com/2014/01/12/a-response-to-apophemi-on-triggers/), causing him to dump a ton of anti-social-justice, pro-argumentative-charity antibodies in the "rationalist" collective "water supply" after he became our subculture's premier writer. But it was true in _our_ world.) I didn't want to fall into the [bravery-debate](http://slatestarcodex.com/2013/05/18/against-bravery-debates/) trap of, "Look at me, I'm so heroically persecuted, therefore I'm right (therefore you should have sex with me)". I wasn't angry at the "rationalists" for being silenced or shouted down (which I wasn't); I was angry at them for _making bad arguments_ and systematically refusing to engage with the obvious counterarguments when they're made.
Ben thought I was wrong to think of this as non-ostracizing. The deluge of motivated nitpicking _is_ an implied marginalization threat, he explained: the game people are playing when they do that is to force me to choose between doing arbitrarily large amounts of interpretive labor, or being cast as never having answered these construed-as-reasonable objections, and therefore over time losing standing to make the claim, being thought of as unreasonable, not getting invited to events, _&c._
-I saw the dynamic he was pointing at, but as a matter of personality, I was more inclined to respond, "Welp, I guess I need to write faster and more clearly", rather than to say "You're dishonestly demanding arbitrarily large amounts of interpretive labor from me." I thought Ben was far too quick to give up on people who he modeled as trying not to understand, whereas I continued to have faith in the possibility of _making_ them understand if I just never gave up. Not to be _so_ much of a scrub as to play chess with a pigeon (which shits on the board and then struts around like it's won), or wrestle with a pig (which gets you both dirty, and the pig likes it), or dispute what the Tortoise said to Achilles—but to hold out hope that people in "the community" could only be _boundedly_ motivatedly dense, and anyway that giving up wouldn't make me a stronger writer.
+I saw the dynamic he was pointing at, but as a matter of personality, I was more inclined to respond, "Welp, I guess I need to write faster and more clearly", rather than to say, "You're dishonestly demanding arbitrarily large amounts of interpretive labor from me." I thought Ben was far too quick to give up on people who he modeled as trying not to understand, whereas I continued to have faith in the possibility of _making_ them understand if I just never gave up.
Not to be _so_ much of a scrub as to play chess with a pigeon (which shits on the board and then struts around like it's won), or wrestle with a pig (which gets you both dirty, and the pig likes it), or dispute what the Tortoise said to Achilles—but to hold out hope that people in "the community" could only be _boundedly_ motivatedly dense, and anyway that giving up wouldn't make me a stronger writer.
(Picture me playing Hermione Granger in a post-Singularity [holonovel](https://memory-alpha.fandom.com/wiki/Holo-novel_program) adaptation of _Harry Potter and the Methods of Rationality_ (Emma Watson having charged me [the standard licensing fee](/2019/Dec/comp/) to use a copy of her body for the occasion): "[We can do anything if we](https://www.hpmor.com/chapter/30) exert arbitrarily large amounts of interpretive labor!")
@@ -258,21 +258,21 @@ Ben thought that making them understand was hopeless and that becoming a stronge
(I guess I'm only now, after spending an additional three years exhausting every possible line of argument, taking Ben's advice on this by writing this memoir. Sorry, Ben—and thanks.)
-One thing I regret about my behavior during this period was the extent to which I was emotionally dependent on my posse, and in some ways particularly Michael, for validation. I remembered Michael as a high-status community elder back in the _Overcoming Bias_ era (to the extent that there was a "community" in those early days). I had been somewhat skeptical of him, then: the guy makes a lot of stridently "out there" assertions by the standards of ordinary social reality, in a way that makes you assume he must be speaking metaphorically. (He always insists that he's being completely literal.) But he had social proof as the President of the Singularity Institute—the "people person" of our world-saving effort, to complement Yudkowsky's anti-social mad scientist personality—so I had been inclined to take his "crazy"-sounding assertions more charitably than I would have in the absence of that social proof. +One thing I regret about my behavior during this period was the extent to which I was emotionally dependent on my posse, and in some ways particularly Michael, for validation. I remembered Michael as a high-status community elder back in the _Overcoming Bias_ era (to the extent that there was a "community" in those early days). I had been somewhat skeptical of him, then: the guy makes a lot of stridently "out there" assertions by the standards of ordinary social reality, in a way that makes you assume he must be speaking metaphorically. (He always insists that he's being completely literal.) But he had social proof as the President of the Singularity Institute—the "people person" of our world-saving effort, to complement Yudkowsky's anti-social mad scientist personality—which inclined me to take his "crazy"-sounding assertions more charitably than I otherwise would have.
-Now, the memory of that social proof was a lifeline. Dear reader, if you've never been in the position of disagreeing with the entire weight of Society's educated opinion, _including_ your idiosyncratic subculture that tells itself a story about being smarter than the surrounding Society—well, it's stressful.
[There was a comment on /r/slatestarcodex around this time](https://old.reddit.com/r/slatestarcodex/comments/anvwr8/experts_in_any_given_field_how_would_you_say_the/eg1ga9a/) that cited Yudkowsky, Alexander, Ozy, _The Unit of Caring_, and Rob Bensinger as leaders of the "rationalist" community—just an arbitrary Reddit comment of no significance whatsoever—but it was a salient indicator of the _Zeitgeist_ to me, because _[every](https://twitter.com/ESYudkowsky/status/1067183500216811521) [single](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) [one](https://thingofthings.wordpress.com/2018/06/18/man-should-allocate-some-more-categories/) of [those](https://theunitofcaring.tumblr.com/post/171986501376/your-post-on-definition-of-gender-and-woman-and) [people](https://www.facebook.com/robbensinger/posts/10158073223040447?comment_id=10158073685825447&reply_comment_id=10158074093570447)_ had tried to get away with some variant on the "categories are subjective, therefore you have no grounds to object to the claim that trans women are women" _mind game_.
+Now, the memory of that social proof was a lifeline. Dear reader, if you've never been in the position of disagreeing with the entire weight of Society's educated opinion, _including_ your idiosyncratic subculture that tells itself a story about being smarter than the surrounding Society—well, it's stressful. [There was a comment on /r/slatestarcodex around this time](https://old.reddit.com/r/slatestarcodex/comments/anvwr8/experts_in_any_given_field_how_would_you_say_the/eg1ga9a/) that cited Yudkowsky, Alexander, Ozy, _The Unit of Caring_, and Rob Bensinger as leaders of the "rationalist" community—just an arbitrary Reddit comment of no significance whatsoever—but it was a salient indicator of the _Zeitgeist_ to me, because _[every](https://twitter.com/ESYudkowsky/status/1067183500216811521) [single](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) [one](https://thingofthings.wordpress.com/2018/06/18/man-should-allocate-some-more-categories/) of [those](https://theunitofcaring.tumblr.com/post/171986501376/your-post-on-definition-of-gender-and-woman-and) [people](https://www.facebook.com/robbensinger/posts/10158073223040447?comment_id=10158073685825447&reply_comment_id=10158074093570447)_ had tried to get away with some variant on the "word usage is subjective, therefore you have no grounds to object to the claim that trans women are women" _mind game_.
-In the face of that juggernaut of received opinion, I was already feeling pretty gaslighted. ("We ...
we had a whole Sequence about this. Didn't we? And, and ... [_you_ were there](https://tvtropes.org/pmwiki/pmwiki.php/Main/AndYouWereThere), and _you_ were there ... It—really happened, right? I didn't just imagine it? The [hyperlinks](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) [still](https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary) [work](https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace) ...") I don't know how I would have held up intact if I were just facing it alone; it's hard to imagine what I would have done in that case. I definitely wouldn't have had the impudence to pester Scott and Eliezer the way I did—especially Eliezer—if it was just me alone against everyone else. -But _Michael thought I was in the right_—not just intellectually on the philosophy issue, but morally in the right to be _prosecuting_ the philosophy issue, and not accepting stonewalling as an answer. That social proof gave me a lot of social bravery that I otherwise wouldn't have been able to muster up—even though it would have been better if I could have propagated the implications of the observation that my dependence on him was self-undermining, because Michael himself said that the thing that made me valuable was my ability to think independently. +But _Michael thought I was in the right_—not just intellectually on the philosophy issue, but morally in the right to be _prosecuting_ the philosophy issue with our leaders, and not accepting stonewalling as an answer. That social proof gave me a lot of bravery that I otherwise wouldn't have been able to muster up—even though it would have been better if I could have propagated the implications of the observation that my dependence on him was self-undermining, because Michael himself said that the thing that made me valuable was my ability to think independently. -The social proof was probably more effective in my own head, than it was with anyone we were arguing with. _I remembered_ Michael as a high-status community elder back in the _Overcoming Bias_ era, but that was a long time ago. (Luke Muelhauser had taken over leadership of the Singularity Institute in 2011; some sort of rift between Michael and Eliezer had widened in recent years, the details of which had never been explained to me.) Michael's status in "the community" of 2019 was much more mixed. He was intensely critical of the rise of Effective Altruism. (I remember at a party in 2015, on asking Michael what else I should spend my San Francisco software engineer money on if not the EA charities I was considering, being surprised that his answer was, "You.") +The social proof was probably more effective in my own head, than it was with anyone we were arguing with. _I remembered_ Michael as a high-status community elder back in the _Overcoming Bias_ era, but that was a long time ago. (Luke Muelhauser had taken over leadership of the Singularity Institute in 2011; some sort of rift between Michael and Eliezer had widened in recent years, the details of which had never been explained to me.) Michael's status in "the community" of 2019 was much more mixed. He was intensely critical of the rise of Effective Altruism, which he saw as preying on the energies of the smartest and most scrupulous people with bogus claims about how to do good in the world. 
(I remember at a party in 2015, on asking Michael what else I should spend my San Francisco software engineer money on, if not the EA charities I was considering, being surprised that his answer was, "You.")
Another blow to Michael's "community" reputation was dealt on 27 February, when Anna [published a comment badmouthing Michael and suggesting that talking to him was harmful](https://www.lesswrong.com/posts/u8GMcpEN9Z6aQiCvp/rule-thinkers-in-not-out?commentId=JLpyLwR2afav2xsyD), which I found pretty disappointing—more so as I began to realize the implications. I agreed with her point about how "ridicule of obviously-fallacious reasoning plays an important role in discerning which thinkers can (or can't) help fill these functions." That's why I was so heartbroken about the "categories are arbitrary, therefore trans women are women" thing, which deserved to be _laughed out the room_. Why was she trying to ostracize the guy who was one of the very few to back me up on this incredibly obvious thing!? The reasons to discredit Michael given in the comment seemed incredibly weak. (He ... flatters people? He ... _didn't_ tell people to abandon their careers? What?) And the evidence against Michael she offered in private didn't seem much more compelling (_e.g._, at a CfAR event, he had been insistent on continuing to talk to someone who Anna thought was looking sleep-deprived and needed a break).
-It made sense for Anna to not like Michael, because of his personal conduct, or because he didn't like EA. (Expecting all of my friends to be friends with _each other_ would be [Geek Social Fallacy #4](http://www.plausiblydeniable.com/opinion/gsf.html).) If she didn't want to invite him to CfAR stuff, fine; that's her business not to invite him. But what did she gain from _escalating_ to publicly denouncing him as someone whose "lies/manipulations can sometimes disrupt [people's] thinking for long and costly periods of time"?! +It made sense for Anna to not like Michael, because of his personal conduct, or because he didn't like EA. (Expecting all of my friends to be friends with _each other_ would be [Geek Social Fallacy #4](http://www.plausiblydeniable.com/opinion/gsf.html).) If she didn't want to invite him to CfAR stuff, fine; that's her business not to invite him. But what did she gain from _escalating_ to publicly denouncing him as someone whose "lies/manipulations can sometimes disrupt [people's] thinking for long and costly periods of time"?! She said that she was trying to undo the effects of her previous endorsements of him, and that the comment seemed like it ought to be okay by Michael's standards (which didn't include an expectation that people should collude to protect each other's reputation).
-----
@@ -284,17 +284,15 @@ Anyway, I wasn't the only one whose life was being disrupted by political drama
I found Yudkowsky's use of the word "lie" here interesting given his earlier eagerness to police the use of the word "lie" by gender-identity skeptics. With the support of my posse, I wrote to him again, a third time (Subject: "on defending against 'alt-right' categorization").
-Imagine if someone were to reply: "Using language in a way _you_ dislike, openly and explicitly and with public focus on the language and its meaning, is not lying. The proposition you claim false (explicit advocacy of a white ethnostate?) is not what the speech is meant to convey—and this is known to everyone involved, it is not a secret.
You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. Now, maybe as a matter of policy, you want to make a case for language like 'alt-right' being used a certain way. Well, that's a separate debate then. But you're not making a stand for Truth in doing so, and your opponents aren't tricking anyone or trying to." +Imagine if one of Alexander's critics were to reply: "Using language in a way _you_ dislike, openly and explicitly and with public focus on the language and its meaning, is not lying. The proposition you claim false (explicit advocacy of a white ethnostate?) is not what the speech is meant to convey—and this is known to everyone involved, it is not a secret. You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. Now, maybe as a matter of policy, you want to make a case for language like 'alt-right' being used a certain way. Well, that's a separate debate then. But you're not making a stand for Truth in doing so, and your opponents aren't tricking anyone or trying to." How would Yudkowsky react, if someone said that? _My model_ of Sequences-era 2009!Yudkowsky would say, "This is an incredibly intellectually dishonest attempt to [sneak in connotations](https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations) by performing a categorization and trying to avoid the burden of having to justify it with an [appeal-to-arbitrariness conversation-halter](https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters); go read ['A Human's Guide to Words.'](https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb)" -But I had no idea what 2019!Yudkowsky would say. If the moral of the "hill of meaning in defense of validity" thread had been that the word "lie" was reserved for _per se_ direct falsehoods, well, what direct falsehood was being asserted by Scott's detractors? I didn't think anyone is claiming that, say, Scott _identifies_ as alt-right (not even privately), any more than anyone is claiming that trans women have two X chromosomes. - -Commenters on /r/SneerClub had been pretty explicit in [their](https://old.reddit.com/r/SneerClub/comments/atgejh/rssc_holds_a_funeral_for_the_defunct_culture_war/eh0xlgx/) [criticism](https://old.reddit.com/r/SneerClub/comments/atgejh/rssc_holds_a_funeral_for_the_defunct_culture_war/eh3jrth/) that the Culture War thread harbored racists (_&c._) and possibly that Scott himself was a secret racist, _with respect to_ a definition of racism that includeed the belief that there are genetically-mediated population differences in the distribution of socially-relevant traits and that this probably has decision-relevant consequences that should be discussable somewhere. +But I had no idea what 2019!Yudkowsky would say. If the moral of the "hill of meaning in defense of validity" thread had been that the word "lie" should be reserved for _per se_ direct falsehoods, well, what direct falsehood was being asserted by Scott's detractors? I didn't think anyone was _claiming_ that, say, Scott _identifies_ as alt-right (not even privately), any more than anyone was claiming that trans women have two X chromosomes. 
Commenters on /r/SneerClub had been pretty explicit in [their](https://old.reddit.com/r/SneerClub/comments/atgejh/rssc_holds_a_funeral_for_the_defunct_culture_war/eh0xlgx/) [criticism](https://old.reddit.com/r/SneerClub/comments/atgejh/rssc_holds_a_funeral_for_the_defunct_culture_war/eh3jrth/) that the Culture War thread harbored racists (_&c._) and possibly that Scott himself was a secret racist, _with respect to_ a definition of racism that included the belief that there exist genetically-mediated population differences in the distribution of socially-relevant traits and that this probably has decision-relevant consequences that should be discussable somewhere.
And this was just _correct_. For example, Alexander's ["The Atomic Bomb Considered As Hungarian High School Science Fair Project"](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/) favorably cites Cochran _et al._'s genetic theory of Ashkenazi achievement as "really compelling." Scott was almost certainly "guilty" of the category-membership that the speech against him was meant to convey—it's just that Sneer Club got to choose the category. The correct response to the existence of a machine-learning classifier that returns positive on both Scott Alexander and Richard Spencer is not that the classifier is "lying" (what would that even mean?), but that the classifier is not very useful for understanding Scott Alexander's effects on the world.
-Of course, Scott was great, and we should defend him from the bastards trying to ruin his reputation, and it's plausible that the most politically convenient way to do that was to pound the table and call them lying sociopaths rather than engaging with the substance of their claims, much as how someone being tried under an unjust law might dishonestly plead "Not guilty" to save their own skin rather than tell the whole truth and hope for jury nullification. +Of course, Scott is great, and we should defend him from the bastards trying to ruin his reputation, and it was plausible that the most politically convenient way to do that was to pound the table and call them lying sociopaths rather than engaging with the substance of their claims—much as how someone being tried under an unjust law might dishonestly plead "Not guilty" to save their own skin rather than tell the whole truth and hope for jury nullification.
But, I argued, political convenience came at a dire cost to [our common interest](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes). There was a proverb Yudkowsky [had once failed to Google](https://www.lesswrong.com/posts/K2c3dkKErsqFd28Dh/prices-or-bindings), which ran something like, "Once someone is known to be a liar, you might as well listen to the whistling of the wind."
@@ -302,13 +300,13 @@ Similarly, once someone is known to [vary](https://slatestarcodex.com/2014/08/14
Well, you're still _somewhat_ better off listening to them than the whistling of the wind, because the wind in various possible worlds is presumably uncorrelated with most of the things you want to know about, whereas [clever arguers](https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence) who [don't tell explicit lies](https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/) are constrained in how much they can mislead you. But it seems plausible that you might as well listen to any other arbitrary smart person with a bluecheck and 20K followers.
I know you're very busy; I know your work's important—but it might be a useful exercise, for Yudkowsky to think of what he would _actually say_ if someone with social power _actually did this to him_ when he was trying to use language to reason about Something he had to Protect? -(Note, my claim here is _not_ that "Pronouns aren't lies" and "Scott Alexander is not a racist" are similarly misinformative. Rather, I'm saying that, as a matter of [local validity](https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization), whether "You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning" makes sense _as a response to_ "X isn't a Y" shouldn't depend on the specific values of X and Y. Yudkowsky's behavior the other month made it look like he thought that "You're not standing in defense of truth if ..." _was_ a valid response when, say, X = "Caitlyn Jenner" and Y = "woman." I was saying that, whether or not it's a valid response, we should, as a matter of local validity, apply the _same_ standard when X = "Scott Alexander" and Y = "racist.") +(Note, my claim here is _not_ that "Pronouns aren't lies" and "Scott Alexander is not a racist" are similarly misinformative. Rather, I'm saying that whether "You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning" makes sense _as a response to_ "X isn't a Y" shouldn't depend on the specific values of X and Y. Yudkowsky's behavior the other month made it look like he thought that "You're not standing in defense of truth if ..." _was_ a valid response when, say, X = "Caitlyn Jenner" and Y = "woman." I was saying that, whether or not it's a valid response, we should, as a matter of [local validity](https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization), apply the _same_ standard when X = "Scott Alexander" and Y = "racist.") Anyway, without disclosing any _specific content_ from private conversations with Yudkowsky that may or may not have happened, I think I _am_ allowed to say that our posse did not get the kind of engagement from Yudkowsky that we were hoping for. (That is, I'm Glomarizing over whether Yudkowsky just didn't reply, or whether he did reply and our posse was not satisfied with the response.) Michael said that it seemed important that, if we thought Yudkowsky wasn't interested, we should have common knowledge among ourselves that we consider him to be choosing to be a cult leader. -Meanwhile, my email thread with Scott got started back up again, although I wasn't expecting anything to come out of it. I expressed some regret that all the times I had emailed him over the past couple years had been when I was upset about something (like psych hospitals, or—something else) and wanted something from him, which was bad, because it was treating him as a means rather than an end—and then, despite that regret, continued prosecuting the argument. +Meanwhile, my email thread with Scott got started back up again, although I wasn't expecting anything to come out of it. 
I expressed some regret that all the times I had emailed him over the past couple years had been when I was upset about something (like psych hospitals, or—something else) and wanted something from him, which was bad, because it was treating him as a means rather than an end—and then, despite that regret, continued prosecuting the argument (Subject: "twelve short stories about language"). One of Alexander's [most popular _Less Wrong_ posts ever had been about the noncentral fallacy, which Alexander called "the worst argument in the world"](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world): for example, those who crow that abortion is _murder_ (because murder is the killing of a human being), or that Martin Luther King, Jr. was a _criminal_ (because he defied the segregation laws of the South), are engaging in a dishonest rhetorical maneuver in which they're trying to trick their audience into attributing attributes of the typical "murder" or "criminal" onto what are very noncentral members of those categories. @@ -320,13 +318,13 @@ Thus, we see that Alexander's own "The Worst Argument in the World" is really co ... Scott didn't want to meet. At this point, I considered resorting to the tool of cheerful prices again, which I hadn't yet used against Scott—to say, "That's totally understandable! Would a financial incentive change your decision? For a two-hour meeting, I'd be happy to pay up to $4000 to you or your preferred charity. If you don't want the money, then sure, yes, let's table this. I hope you're having a good day." But that seemed sufficiently psychologically coercive and socially weird that I wasn't sure I wanted to go there. I emailed my posse asking what they thought—and then added that maybe they shouldn't reply until Friday, because it was Monday, and I really needed to focus on my dayjob that week. -This is the part where I began to ... overheat. I tried ("tried") to focus on my dayjob, but I was just _so angry_. Did Scott _really_ not understand the rationality-relevant distinction between "value-dependent categories as a result of only running your clustering algorithm on the subspace of the configuration space spanned by the variables that are relevant to your decisions" (as explained by the _dagim_/water-dwellers _vs._ fish example) and "value-dependent categories _in order to not make my friends sad_"? I thought I was pretty explicit about this? Was Scott _really_ that dumb?? Or is it that he was only verbal-smart and this is the sort of thing that only makes sense if you've ever been good at linear algebra?? Did I need to write a post explaining just that one point in mathematical detail? (With executable code and a worked example with entropy calculations.) +This is the part where I began to ... overheat. I tried ("tried") to focus on my dayjob, but I was just _so angry_. Did Scott _really_ not understand the rationality-relevant distinction between "value-dependent categories as a result of caring about predicting different variables" (as explained by the _dagim_/water-dwellers _vs._ fish example) and "value-dependent categories _in order to not make my friends sad_"? I thought I was pretty explicit about this? Was Scott _really_ that dumb?? Or is it that he was only verbal-smart and this is the sort of thing that only makes sense if you've ever been good at linear algebra?? 
(Such that the language of "only running your clustering algorithm on the subspace of the configuration space spanned by the variables that are relevant to your decisions" would come naturally.) Did I need to write a post explaining just that one point in mathematical detail? (With executable code and a worked example with entropy calculations.)
My dayjob boss made it clear that he was expecting me to have code for my current Jira tickets by noon the next day, so I resigned myself to stay at the office late to finish that. But I was just in so much (psychological) pain. Or at least—as I noted in one of a series of emails to my posse that night—I felt motivated to type the sentence, "I'm in so much (psychological) pain." I'm never sure how to interpret my own self-reports, because even when I'm really emotionally trashed (crying, shaking, randomly yelling, _&c_.), I think I'm still noticeably _incentivizable_: if someone were to present a credible threat (like slapping me and telling me to snap out of it), then I would be able to calm down: there's some sort of game-theory algorithm in the brain that subjectively feels genuine distress (like crying or sending people too many hysterical emails) but only when it can predict that it will be either rewarded with sympathy or at least tolerated. (Kevin Simler: [tears are a discount on friendship](https://meltingasphalt.com/tears/).)
-I [tweeted a Sequences quote](https://twitter.com/zackmdavis/status/1107874587822297089) to summarize how I felt (the mention of @ESYudkowsky being to attribute credit, I told myself; I figured Yudkowsky had enough followers that he probably wouldn't see a notification): +I [tweeted a Sequences quote](https://twitter.com/zackmdavis/status/1107874587822297089) (the mention of @ESYudkowsky being to attribute credit, I told myself; I figured Yudkowsky had enough followers that he probably wouldn't see a notification):
> "—and if you still have something to protect, so that you MUST keep going, and CANNOT resign and wisely acknowledge the limitations of rationality— [1/3]
>
@@ -340,20 +338,21 @@ I did, eventually, get some dayjob work done that night, but I didn't finish the
I sent an email explaining this to Scott and my posse and two other friends (Subject: "predictably bad ideas").
-Lying down didn't work. So at 5:26 _a.m._, I sent an email to Scott cc my posse plus Anna about why I was so mad (both senses). I had a better draft sitting on my desktop at home, but since I was here and couldn't sleep, I might as well type this version (Subject: "five impulsive points, hastily written because I just can't even (was: Re: predictably bad ideas)"). Scott had been continuing to insist that it's OK to gerrymander category boundaries for trans people's mental health, but there were a few things I didn't understand. If creatively reinterpreting the meanings of words because the natural interpretation would make people sad is OK ... why doesn't that just generalize to an argument in favor of _outright lying_ when the truth would make people sad? The mind games seemed much crueler to me than a simple lie. Also, if "mental health benefits for trans people" matter so much, then, why didn't _my_ mental health matter? Wasn't I trans, sort of? Getting shut down by appeal-to-utilitarianism (!?!?) when I was trying to use reason to make sense of the world was observably really bad for my sanity! Did that matter at all?
Also, Scott had asked me if it wouldn't be embarrassing, if the community solved Friendly AI and went down in history as the people who created Utopia forever, and I had rejected it because of gender stuff? But the _original reason_ it had ever seemed _remotely_ plausible that we would create Utopia forever wasn't "because we're us, the self-designated world-saving good guys", but because we were going to perfect an art of _systematically correct reasoning_. If we're not going to do systematically correct reasoning because that would make people sad, then that undermines the _reason_ that it was plausible that we would create Utopia forever; you can't just forfeit the mandate of Heaven like that and still expect to rule China. Also, Scott had proposed a super-Outside View of the culture war as an evolutionary process that produces memes optimized to trigger PTSD syndromes in people, and suggested that I think of _that_ was what was happening to me. But, depending on how much credence Scott put in social proof, mightn't the fact that I managed to round up this whole posse to help me repeatedly argue with (or harass) Yudkowsky shift his estimate over whether my concerns had some objective merit that other people could see, too? It could simultaneously be the case that I had the culture-war PTSD that he proposed, _and_ that my concerns have merit.
+Lying down didn't work. So at 5:26 _a.m._, I sent an email to Scott cc my posse plus Anna about why I was so mad (both senses). I had a better draft sitting on my desktop at home, but since I was here and couldn't sleep, I might as well type this version (Subject: "five impulsive points, hastily written because I just can't even (was: Re: predictably bad ideas)"). Scott had been continuing to insist that it's okay to gerrymander category boundaries for trans people's mental health, but there were a few things I didn't understand. If creatively reinterpreting the meanings of words because the natural interpretation would make people sad is okay ... why doesn't that just generalize to an argument in favor of _outright lying_ when the truth would make people sad? The mind games seemed much crueler to me than a simple lie. Also, if "mental health benefits for trans people" matter so much, then, why didn't _my_ mental health matter? Wasn't I trans, sort of? Getting shut down by appeal-to-utilitarianism (!?!?) when I was trying to use reason to make sense of the world was observably really bad for my sanity! Did that matter at all? Also, Scott had asked me if it wouldn't be embarrassing, if the community solved Friendly AI and went down in history as the people who created Utopia forever, and I had rejected it because of gender stuff? But the _original reason_ it had ever seemed _remotely_ plausible that we would create Utopia forever wasn't "because we're us, the self-designated world-saving good guys", but because we were going to perfect an art of _systematically correct reasoning_. If we're not going to do systematically correct reasoning because that would make people sad, then that undermines the _reason_ that it was plausible that we would create Utopia forever; you can't just forfeit the mandate of Heaven like that and still expect to rule China. Also, Scott had proposed a super-Outside View of the culture war as an evolutionary process that produces memes optimized to trigger PTSD syndromes in people, and suggested that I think of _that_ as what was happening to me.
But, depending on how much credence Scott put in social proof, mightn't the fact that I managed to round up this whole posse to help me repeatedly argue with (or harass) Yudkowsky shift his estimate over whether my concerns had some objective merit that other people could see, too? It could simultaneously be the case that I had the culture-war PTSD that he proposed, _and_ that my concerns had merit.
-[TODO: Michael jumps in to help, I rebuff him, Michael says WTF and calls me, I take a train home, Alicorn visits
+Michael replied at 5:58 _a.m._, saying that everyone's first priority should be making sure that I could sleep—that given that I was failing to adhere to my commitments to sleep almost immediately after making them, I should be interpreted as immediately needing help, and that Scott had comparative advantage in helping, given that my distress was most centrally over Scott gaslighting me.
-One of the other friends I had cc'd on some of the emails came to visit me with her young son—I mean, her son at the time.
-]
+That seemed a little harsh on Scott to me. At 6:14 _a.m._ and 6:21 _a.m._, I wrote a couple emails to everyone that my plan was to get a train to get back to my own apartment to sleep, that I was sorry for making such a fuss despite being incentivizable while emotionally distressed, that I should be punished in accordance with the moral law for sending too many hysterical emails because I thought I could get away with it, that I didn't need Scott's help and that I thought Michael was being a little aggressive about that, but I guessed that's also kind of Michael's style?
-(Incidentally, the code that I wrote intermittently between 11 _p.m._ and 4 _a.m._ was a horrible bug-prone mess, and the company has been paying for it ever since, every time someone needs to modify that function and finds it harder to make sense of than it would be if I had been less emotionally overwhelmed in March 2019 and written something sane instead.)
+Michael was _furious_ with me, and he emailed and called me to say so. He seemed to have a theory that people who are behaving badly, as Scott was, will only change when they see a victim who is being harmed. Me escalating and then deescalating just after he came to help was undermining the attempt to force an honest confrontation, such that we could _get_ to the point of having a Society with morality or punishment.
-I think at some level, I wanted Scott to know how frustrated I was about his use of "mental health for trans people" as an Absolute Denial Macro. But then when Michael started advocating on my behalf, I started to minimize my claims because I had a generalized attitude of not wanting to sell myself as a victim. (Michael seemed to have a theory that people will only change their bad behavior when they see a victim who is being harmed.)
+Anyway, I did successfully get to my apartment and get a few hours of sleep. One of the other friends I had cc'd on some of the emails came to visit me later that morning with her young son—I mean, her son at the time.
-I supposed that, in Michael's worldview, aggression is more honest than passive-aggression. That seemed obviously true, but I was psychologically limited in how much aggression I was willing to deploy against my friends. (And particularly Yudkowsky, who I still hero-worshipped.)
But clearly, the tension between "I don't want to do too much social aggression" and "losing the Category War within the rationalist community is _absolutely unacceptable_" was causing me to make wildly inconsistent decisions. (Emailing Scott at 4 a.m., and then calling Michael "aggressive" when he came to defend me was just crazy.) +(Incidentally, the code that I wrote intermittently between 11 _p.m._ and 4 _a.m._ was a horrible bug-prone mess, and the company has been paying for it ever since, every time someone needs to modify that function and finds it harder to make sense of than it would be if I had been less emotionally overwhelmed in March 2019 and written something sane instead.) -Ben pointed out that [making oneself mentally ill in order to extract political concessions](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) only works if you have a lot of people doing it in a visibly coordinated way. And even if it did work, getting into a dysphoria contest with trans people didn't seem like it led anywhere good. +I think at some level, I wanted Scott to know how frustrated I was about his use of "mental health for trans people" as an Absolute Denial Macro. But then when Michael started advocating on my behalf, I started to minimize my claims because I had a generalized attitude of not wanting to sell myself as a victim. Ben pointed out that [making oneself mentally ill in order to extract political concessions](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) only works if you have a lot of people doing it in a visibly coordinated way. And even if it did work, getting into a dysphoria contest with trans people didn't seem like it led anywhere good. + +I supposed that, in Michael's worldview, aggression is more honest than passive-aggression. That seemed obviously true, but I was psychologically limited in how much overt aggression I was willing to deploy against my friends. (And particularly Yudkowsky, who I still hero-worshipped.) But clearly, the tension between "I don't want to do too much social aggression" and "losing the Category War within the rationalist community is _absolutely unacceptable_" was causing me to make wildly inconsistent decisions. (Emailing Scott at 4 a.m., and then calling Michael "aggressive" when he came to defend me was just crazy: either one of those things could make sense, but not _both_.) Was the answer just that I needed to accept that there wasn't such a thing in the world as a "rationalist community"? (Sarah had told me as much two years ago, at BABSCon, and I just hadn't made the corresponing mental adjustments.) @@ -363,8 +362,29 @@ The language I spoke was _mostly_ educated American English, but I relied on sub Maybe that's why I felt like I had to stand my ground and fight for the world I was made in, even though the contradiction between the war effort and my general submissiveness was having me making crazy decisions. -[TODO SECTION: proton concession - * as it happened, the next day, Wednesday, we got this: https://twitter.com/ESYudkowsky/status/1108277090577600512 (Why now? maybe he saw the "tools have shattered in their hand"; maybe the Quillette article just happened to be timely) +As it happened, the next day, Wednesday, we saw these Tweets from @ESYudkowsky: + +> [Everything more complicated than](https://twitter.com/ESYudkowsky/status/1108277090577600512) protons tends to come in varieties. Hydrogen, for example, has isotopes. Gender dysphoria involves more than one proton and will probably have varieties. 
+ +> [To be clear, I don't](https://twitter.com/ESYudkowsky/status/1108280619014905857) know much about gender dysphoria. There's an allegation that people are reluctant to speciate more than one kind of gender dysphoria. To the extent that's not a strawman, I would say only in a generic way that GD seems liable to have more than one species. + +(Why now? Maybe he saw the tag in my "tools have shattered" Tweet on Monday, or maybe the _Quillette_ article was just timely?) + +The most obvious reading of this is as a "concession" to my agenda. The two-type taxonomy of MtF was the thing I was _originally_ trying to talk about, back in 2016–2017, before getting derailed onto the present philosophy-of-language war, and here Yudkowsky was backing up "my side" on that by publicly offering an argument that there's probably a more-than-one-type typology. + +At this point, some people might think that should have been the end of the matter, that I should have been satisfied. I had started the recent drama flare-up because Yudkowsky had Tweeted something unfavorable to my agenda. Now, Yudkowsky was Tweeting something favorable to my agenda—a major concession! Wouldn't it be _greedy_ and _ungrateful_ for me to keep criticizing him about the pronouns and language thing, given that he'd thrown me a bone here? + +That's not how it works. The entire concept of there being "sides" to which one can make "concessions" is an artifact of human coalitional instincts; it's not something that _actually makes sense_. My posse and I were trying to get a clarification about a philosophy of language claim Yudkowsky had made a few months prior ("you're not standing in defense of truth if [...]"). + +[TODO bookmark: finish section explaining why concessions are not the Way ...] + +This thing about transgender typology was _not the thing we were trying to clarify!_ + +Moreover, + +["On the Argumentative Form 'Super-Proton Things Tend to Come in Varieties'"](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/) + +[TODO proton concession * A concession! In the war frame, you'd think this would make me happy * "I did you a favor by Tweeting something obliquely favorable to your object-level crusade, and you repay me by criticizing me? How dare you?!" My model of Sequences-era Eliezer-2009 would never do that, because the species-typical arguments-as-social-exchange * do you think Eliezer is thinking, "Fine, if I tweet something obliquely favorable towards Zack's object-level agenda, maybe Michael's gang will leave me alone now" @@ -373,14 +393,6 @@ Maybe that's why I felt like I had to stand my ground and fight for the world I * We need to figure out how to win against bad faith arguments -> [Everything more complicated than](https://twitter.com/ESYudkowsky/status/1108277090577600512) protons tends to come in varieties. Hydrogen, for example, has isotopes. Gender dysphoria involves more than one proton and will probably have varieties. - -> [To be clear, I don't](https://twitter.com/ESYudkowsky/status/1108280619014905857) know much about gender dysphoria. There's an allegation that people are reluctant to speciate more than one kind of gender dysphoria. To the extent that's not a strawman, I would say only in a generic way that GD seems liable to have more than one species. - -There's a sense in which this could be read as a "concession" to my agenda. 
The two-type taxonomy of MtF _was_ the thing I was originally trying to talk about, before the philosophy-of-language derailing, and here Yudkowsky is backing up "my side" on that by publicly offering an argument that there's probably a more-than-one-type typology. So there's an intuition that I should be grateful for and satisfied with this concession—that it would be _greedy_ for me to keep criticizing him about the pronouns and language thing, given that he's throwing me a bone here. - -But that intuition is _wrong_. The perception that there are "sides" to which one can make "concessions" is an _illusion_ of the human cognitive architecture; it's not something that any sane cognitive process would think in the course of constructing a map that reflects the territory. - As I explained in ["On the Argumentative Form 'Super-Proton Things Tend to Come In Varieties'"](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/), this argument that "gender dysphoria involves more than one proton and will probably have varieties" is actually _wrong_. The _reason_ I believe in the two-type taxonomy of MtF is because of [the _empirical_ case that androphilic and non-exclusively-androphilic MtF transsexualism actually look like different things](https://sillyolme.wordpress.com/faq-on-the-science/), enough so for the two-type clustering to [pay the rent](https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences) [for its complexity](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length). But Yudkowsky can't afford to acknowledge the empirical case for the two-type taxonomy—that really _would_ get him in trouble with progressives. So in order to throw me a bone while maintaining his above-it-all [pretending to be wise](https://www.lesswrong.com/posts/jeyvzALDbjdjjv5RW/pretending-to-be-wise) centerist pose, he needs to come up with some other excuse that "exhibit[s] generally rationalist principles". @@ -389,6 +401,7 @@ The lesson here that I wish Yudkowsky would understand is that when you invent r If you "project" my work into the "subspace" of contemporary political conflicts, it usually _codes as_ favoring "anti-trans" faction more often than not, but [that's really not what I'm trying to do](/2021/Sep/i-dont-do-policy/). From my perspective, it's just that the "pro-trans" faction happens to be very wrong about a lot of stuff that I care about. But being wrong about a lot of stuff isn't the same thing as being wrong about everything; it's _important_ that I spontaneously invent and publish pieces like ["On the Argumentative Form"](/2019/Dec/on-the-argumentative-form-super-proton-things-tend-to-come-in-varieties/) and ["Self-Identity Is a Schelling Point"](/2019/Oct/self-identity-is-a-schelling-point/) that "favor" the "pro-trans" faction. That's how you know (and how I know) that I'm not a _partisan hack_. + ] [TODO: Jessica joins the coalition; she tell me about her time at MIRI (link to Zoe-piggyback and Occupational Infohazards); @@ -423,7 +436,7 @@ But _selectively_ creating clarity down but not up power gradients just reinforc Somewhat apologetically, I replied that the distinction between truthfully, publicly criticizing group identities and _named individuals_ still seemed very significant to me? 
I would be way more comfortable writing [a scathing blog post about the behavior of "rationalists"](/2017/Jan/im-sick-of-being-lied-to/), than about a specific person not adhering to good discourse norms in an email conversation that they had good reason to expect to be private. I thought I was consistent about this: contrast my writing to the way that some anti-trans writers name-and-shame particular individuals. (The closest I had come was [mentioning Danielle Muscato as someone who doesn't pass](/2018/Dec/untitled-metablogging-26-december-2018/#photo-of-danielle-muscato)—and even there, I admitted it was "unclassy" and done in desperation of other ways to make the point having failed.) I had to acknowledge that criticism of non-exclusively-androphilic trans women in general _implied_ criticism of Jessica, and criticism of "rationalists" in general _implied_ criticism of Yudkowsky and Alexander and me, but the extra inferential step and "fog of probability" seemed useful for making the speech act less of an attack? Was I wrong?
-Michael said this was importantly backwards: less precise targeting is more violent. If someone said, "Michael Vassar is a terrible person", he would try to be curious, but if they don't have an argument, he would tend to worry more "for" them and less "about" them, whereas if someone said, "The Jews are terrible people", he saw that as a more serious threat to his safety. (And rationalists and trans women are exact sort of people that get targeted by the same people to target Jews.) +Michael said this was importantly backwards: less precise targeting is more violent. If someone said, "Michael Vassar is a terrible person", he would try to be curious, but if they don't have an argument, he would tend to worry more "for" them and less "about" them, whereas if someone said, "The Jews are terrible people", he saw that as a more serious threat to his safety. (And rationalists and trans women are exactly the sort of people that get targeted by the same people who target Jews.)
]
diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md
index be4f7bd..d2fc412 100644
--- a/notes/a-hill-of-validity-sections.md
+++ b/notes/a-hill-of-validity-sections.md
@@ -1,8 +1,6 @@
-near editing tier—
-_ Anna thought badmouthing Michael was OK by Michael's standards, trying to undo
-
 with internet available—
+_ Quillette link placement in proton concession
+_ did that Reddit comment cite "Kelsey" or "TUOC"?
_ quote from "Kolmogorov complicity" about everything being connected _ citation/explanation for saying "Peace be unto him" _ link "other Harry Potter author" @@ -10,10 +8,15 @@ _ address the "maybe it's good to be called names" point from "Hill" thread _ quote part of the "Hill" thread emphasizing "it's a policy decision", not just "it's not lying", if there is one besides the "Aristotelian binary" Tweet _ quote "maybe as a matter of policy" secondary Tweet earlier before quote _ 2019 Discord discourse with Alicorner +_ screenshot Rob's Facebook comment which I link far editing tier— +_ edit discussion of "anti-trans" side given that I later emphasize that "sides" shouldn't be a thing +_ clarify why Michael thought Scott was "gaslighting" me, include "beeseech bowels of Christ" _ the right way to explain how I'm respecting Yudkowsky's privacy +_ explain the adversarial pressure on privacy norms +_ first EY contact was asking for public clarification or "I am being silenced" (so Glomarizing over "unsatisfying response" or no response isn't leaking anything Yudkowksy cares about) _ Nov. 2018 continues thread from Oct. 2016 conversation _ better explanation of posse formation _ maybe quote Michael's Nov 2018 texts? @@ -21,14 +24,12 @@ _ clarify sequence of outreach attempts _ clarify existence of a shadow posse member _ mention Nov. 2018 conversation with Ian somehow _ Said on Yudkowsky's retreat to Facebook being bad for him -_ screenshot Rob's Facebook comment which I link _ explain first use of "rationalist" _ explain first use of Center for Applied Rationality _ erasing agency of Michael's friends, construed as a pawn _ chat with "Wilhelm" during March 2019 minor psych episode -_ explain the adversarial pressure on privacy norms -_ first EY contact was asking for public clarification or "I am being silenced" (so Glomarizing over "unsatisfying response" or no response isn't leaking anything Yudkowksy cares about) _ mention the fact that Anna had always taken a "What You Can't Say" strategy +_ when to use first _vs. last names people to consult before publishing, for feedback or right of objection—