From 94a9bd4848c3b3b1ae298979442347d57bd43c89 Mon Sep 17 00:00:00 2001
From: "Zack M. Davis"
Date: Tue, 14 Nov 2023 21:54:39 -0800
Subject: [PATCH] memoir: pt. 3 red team edits

---
 .../2023/a-hill-of-validity-in-defense-of-meaning.md |  2 +-
 .../drafts/if-clarity-seems-like-death-to-them.md    | 12 ++++++------
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/content/2023/a-hill-of-validity-in-defense-of-meaning.md b/content/2023/a-hill-of-validity-in-defense-of-meaning.md
index 106cc45..17eed17 100644
--- a/content/2023/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/2023/a-hill-of-validity-in-defense-of-meaning.md
@@ -328,7 +328,7 @@ Especially compared to normal Berkeley, I had to give the Berkeley "rationalists
 
 As an illustrative example, in an argument on Discord in January 2019, I said, "I need the phrase 'actual women' in my expressive vocabulary to talk about the phenomenon where, if transition technology were to improve, then the people we call 'trans women' would want to make use of that technology; I need language that _asymmetrically_ distinguishes between the original thing that already exists without having to try, and the artificial thing that's trying to imitate it to the limits of available technology".
 
-Kelsey Piper replied, "the people getting surgery to have bodies that do 'women' more the way they want are mostly cis women [...] I don't think 'people who'd get surgery to have the ideal female body' cuts anything at the joints."
+Kelsey Piper replied, "the people getting surgery to have bodies that do 'women' more the way they want are mostly cis women [...] I don't think 'people who'd get surgery to have the ideal female body' cuts anything at the joints."
 
 Another woman said, "'the original thing that already exists without having to try' sounds fake to me" (to the acclaim of four "+1" emoji reactions).
 
diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md
index d23b3ef..9dc0b18 100644
--- a/content/drafts/if-clarity-seems-like-death-to-them.md
+++ b/content/drafts/if-clarity-seems-like-death-to-them.md
@@ -63,7 +63,7 @@ I may have subconsciously pulled off an interesting political maneuver. In my fi
 
 > If we can't even get a public consensus from our _de facto_ leadership on something _so basic_ as "concepts need to carve reality at the joints in order to make probabilistic predictions about reality", then, in my view, there's _no point in pretending to have a rationalist community_, and I need to leave and go find something else to do (perhaps whatever Michael's newest scheme turns out to be). I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?[^my-price-for-joining]
 
-[^my-price-for-joining]: The Sequences post referenced here, ["Your Price for Joining"](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining), argues that the sort of people who become "rationalists" are too prone to "take their ball and go home" rather than tolerating imperfections in a collective endeavor. To combat this, Yudkowsky proposes a norm:
+[^my-price-for-joining]: The Sequences post referenced here, ["Your Price for Joining"](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining), argues that rationalists are too prone to "take their ball and go home" rather than tolerating imperfections in a collective endeavor. To combat this, Yudkowsky proposes a norm:
 
 > If the issue isn't worth your personally fixing by however much effort it takes, and it doesn't arise from outright bad faith, it's not worth refusing to contribute your efforts to a cause you deem worthwhile.
 
@@ -81,7 +81,7 @@ The two-year-old son of Mike and "Meredith" was reportedly saying the next day t
 
 And as it happened, on 7 May 2019, Kelsey wrote [a Facebook comment displaying evidence of understanding my thesis](/images/piper-spending_social_capital_on_talking_about_trans_issues.png).
 
-These two datapoints led me to a psychological hypothesis: when people see someone of some value wavering between their coalition and a rival coalition, they're motivated to offer a few concessions to keep the wavering person on their side. Kelsey could afford (_cf._ [Upton Sinclair](https://www.goodreads.com/quotes/21810-it-is-difficult-to-get-a-man-to-understand-something)) to not understand the thing about sex being a natural category when it was just me freaking out alone, but "got it" almost as soon as I could credibly threaten to _walk_ (defect to a coalition of people she dislikes). Maybe my "closing thoughts" email had a similar effect on Yudkowsky, assuming he otherwise wouldn't have spontaneously tweeted something about the univariate fallacy two weeks later? This probably wouldn't work if you repeated it, or tried to do it consciously?
+These two datapoints led me to a psychological hypothesis: when people see someone of some value wavering between their coalition and a rival coalition, they're intuitively motivated to offer a few concessions to keep the wavering person on their side. Kelsey could afford to [speak as if she didn't understand the thing about sex being a natural category](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#people-who-would-get-surgery-to-have-the-ideal-female-body) when it was just me freaking out alone, but visibly got it almost as soon as I could credibly threaten to _walk_ (defect to a coalition of people she dislikes). Maybe my "closing thoughts" email had a similar effect on Yudkowsky, assuming he otherwise wouldn't have spontaneously tweeted something about the univariate fallacy two weeks later? This probably wouldn't work if you repeated it, or tried to do it consciously?
 
 ----
 
@@ -97,7 +97,7 @@ I asked my boss to temporarily assign me some easier tasks that I could make ste
 
 My intent of a break from the religious war didn't take. I met with Anna on the UC Berkeley campus and read her excerpts from Ben's and Jessica's emails. (She had not provided a comment on "... Boundaries?" despite my requests, including in the form of two paper postcards that I stayed up until 2 _a.m._ on 14 April 2019 writing; spamming people with hysterical and somewhat demanding postcards felt more distinctive than my usual habit of spamming people with hysterical and somewhat demanding emails.) While we (my posse) were aghast at Yudkowsky's behavior, Anna was aghast at ours: reaching out to try to have a conversation with Yudkowsky, then concluding that he was a fraud because we weren't satisfied with the outcome was like hiding soldiers in an ambulance, introducing a threat into a context where he had a right to be safe.
 
-I complained that I had believed our own [marketing](https://www.lesswrong.com/posts/aFEsqd6ofwnkNqaXo/go-forth-and-create-the-art) [material](https://www.lesswrong.com/posts/jP583FwKepjiWbeoQ/epistle-to-the-new-york-less-wrongians) about the "rationalists" remaking the world by wielding a hidden Bayesian structure of Science and Reason that applies [outside the laboratory](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory). Was that all a lie? Were we not trying to do the thing anymore? Anna was dismissive: she thought that the idea I had gotten about "the thing" was never actually part of the original vision. She kept repeating that she had tried to warn me that public reason didn't work, and I didn't listen. (Back in the late 'aughts, she had often recommended Paul Graham's essay ["What You Can't Say"](http://paulgraham.com/say.html) to people, summarizing Graham's moral that you should figure out the things you can't say in your culture and then not say them, in order to avoid getting drawn into pointless conflicts.)
+I complained that I had believed our own [marketing](https://www.lesswrong.com/posts/aFEsqd6ofwnkNqaXo/go-forth-and-create-the-art) [material](https://www.lesswrong.com/posts/jP583FwKepjiWbeoQ/epistle-to-the-new-york-less-wrongians) about the "rationalists" remaking the world by wielding a hidden Bayesian structure of Science and Reason that applies [outside the laboratory](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory). Was that all a lie? Were we not trying to do the thing anymore? Anna was dismissive: she thought that the idea I had gotten about "the thing" was never actually part of the original vision. She kept repeating that she had tried to warn me, and I didn't listen. (Back in the late 'aughts, she had often recommended Paul Graham's essay ["What You Can't Say"](http://paulgraham.com/say.html) to people, summarizing Graham's moral that you should figure out the things you can't say in your culture and then not say them, in order to avoid getting drawn into pointless conflicts.)
 
 It was true that she had tried to warn me for years, and (not yet having gotten over [my teenage ideological fever dream](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism)), I hadn't known how to listen. But this seemed fundamentally unresponsive to how _I_ kept repeating that I only expected consensus on the basic philosophy-of-language stuff (not my object-level special interest). Why was it so unrealistic to imagine that the smart people could [enforce standards](https://srconstantin.github.io/2018/12/24/contrite-strategies.html) in our own tiny little bubble of the world?
 
@@ -111,7 +111,7 @@ I added:
 
 Anna said she didn't want to receive [cheerful price](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price) offers from me anymore; previously, she had regarded my occasional custom of recklessly throwing money at friends to get what I wanted as good-faith libertarianism between consenting adults, but now she was afraid that if she accepted, it would be portrayed in some future Ben Hoffman essay as an instance of her _using_ me. She agreed that someone could have gotten the ideals I had gotten out of those posts, but there was also evidence from that time pointing the other way (_e.g._, ["Politics Is the Mind-Killer"](https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer)) and it shouldn't be surprising if people steered clear of controversy.
 
-I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that whether I should cut my dick off would _become_ a political issue. That was new evidence about whether the original vision was wise! I wasn't particularly trying to do politics with my idiosyncratic special interest; I was trying to think seriously about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit a trivial epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, and they couldn't correct the mistake even after it was pointed out, then the "rationalists" were worse than useless to me. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I were part of [a reference class that included AI researchers](/2017/Jan/from-what-ive-tasted-of-desire/)).
+I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that whether I should cut my dick off would _become_ a political issue. That was new evidence about whether the original vision was wise! I wasn't particularly trying to do politics with my idiosyncratic special interest; I was trying to think seriously about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit an epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, and they couldn't correct the mistake even after it was pointed out, then the "rationalists" were worse than useless to me. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I were part of [a reference class that included AI researchers](/2017/Jan/from-what-ive-tasted-of-desire/)).
 
 Fundamentally, I was skeptical that you _could_ do consistently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky had described in ["Entangled Truths, Contagious Lies"](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) and ["Dark Side Epistemology"](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology): the need to lie about lying and cover up cover-ups propagates recursively. Anna was unusually skillful at thinking things without saying them; I thought people facing similar speech restrictions generally just get worse at thinking (plausibly[^plausibly] including Yudkowsky), and the problem gets worse as the group effort scales. (It's less risky to recommend ["What You Can't Say"](http://www.paulgraham.com/say.html) to your housemates than to put it on your 501(c)(3) organization's canonical reading list.) You can't optimize your group's culture for not-talking-about-atheism without also optimizing against understanding [Occam's razor](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor); you can't optimize for not questioning gender self-identity without also optimizing against understanding the [37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).
 
@@ -161,7 +161,7 @@ I got into a scuffle with Ruby Bloom on his post on ["Causal Reality _vs_. Socia
 
 [^collaborative-truth-seeking]: [No one ever seems to be able to explain to me what this phrase means.](https://www.lesswrong.com/posts/uvqd3YiBcrPxXzxQM/what-does-the-word-collaborative-mean-in-the-phrase)
 
-(My previous interaction with Ruby had been my challenge to "... Not Man for the Categories" appearing on the _Less Wrong_ FAQ. Maybe he couldn't let me "win" again so quickly?)
+(My previous interaction with Ruby had been my challenge to "... Not Man for the Categories" appearing on the _Less Wrong_ FAQ. Maybe he couldn't let me win again so quickly?)
 
 I emailed the coordination group about the thread, on the grounds that gauging the psychology of the mod team was relevant to upcoming [Voice _vs._ Exit](https://en.wikipedia.org/wiki/Exit,_Voice,_and_Loyalty) choices. Meanwhile on _Less Wrong_, Ruby kept doubling down:
 
@@ -171,7 +171,7 @@ I emailed the coordination group about the thread, on the grounds that gauging t
 >
 > Even if you wish to express that someone is wrong, I think this is done more effectively if one simultaneously continues to implicitly express "I think there is still some prior that you are correct and I curious to hear your thoughts", or failing that "You are very clearly wrong here yet I still respect you as a thinker who is worth my time to discourse with." [...] There's an icky thing here I feel like for there to be productive and healthy discussion you have to act as though at least one of the above statements is true, even if it isn't.
 
-"Wow, he's really overtly arguing that people should lie to him to protect his feelings," Ben commented via email. I would later complain to Anna (Subject: "uh, guys???", 20 July 2019) that Ruby's profile said he was one of two people to have volunteered for CfAR on three continents. If this was the level of performance we could expect from veteran CfAR participants, what was CfAR _for_?
+"Wow, he's really overtly arguing that people should lie to him to protect his feelings," Ben commented via email. I would later complain to Anna that Ruby's profile said he was one of two people to have volunteered for CfAR on three continents. If this was the level of performance we could expect from veteran CfAR participants, what was CfAR _for_?
 
 [I replied to Ruby that](https://www.lesswrong.com/posts/xqAnKW46FqzPLnGmH/causal-reality-vs-social-reality?commentId=v3zh3KhKNTdMXWkJH) you could just directly respond to your interlocutor's arguments. Whether you respect them as a thinker is off-topic. "You said X, but this is wrong because of Y" isn't a personal attack!
-- 
2.17.1