From: Zack M. Davis
Date: Tue, 24 Oct 2023 02:52:09 +0000 (-0700)
Subject: memoir: pt. 3 editing sweep ...
X-Git-Url: http://unremediatedgender.space/source?a=commitdiff_plain;h=921105a0e09bbb23d096c8a5bf0e6ffb5567ea98;p=Ultimately_Untrue_Thought.git

memoir: pt. 3 editing sweep ...
---

diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md
index 33f7020..8bbbd0c 100644
--- a/content/drafts/if-clarity-seems-like-death-to-them.md
+++ b/content/drafts/if-clarity-seems-like-death-to-them.md
@@ -17,7 +17,7 @@ Recapping our Whole Dumb Story so far: in a previous post, ["Sexual Dimorphism i

—none of which gooey private psychological minutiæ would be in the public interest to blog about _except that_, as I explained in a subsequent post, ["Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer"](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/), around 2016, everyone in the community that formed around the Sequences suddenly decided that guys like me might actually be women in some unspecified metaphysical sense, and the cognitive dissonance from having to rebut all this nonsense coming from everyone I used to trust drove me [temporarily](/2017/Mar/fresh-princess/) [insane](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/) from stress and sleep deprivation ...

-—which would have been the end of the story, _except that_, as I explained in a subsequent–subsequent post, ["A Hill of Validity in Defense of Meaning"](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/), in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in a way that looked optimized to suggest that people who disputed that men could be women in some unspecified metaphysical sense were philosophically confused, and my unsuccessful attempts to get him to clarify led me and my allies to conclude that Yudkowsky and his "rationalists" were corrupt.

+—which would have been the end of the story, _except that_, as I explained in a subsequent–subsequent post, ["A Hill of Validity in Defense of Meaning"](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/), in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in a way that looked optimized to suggest that people who disputed that men could be women in some unspecified metaphysical sense were philosophically confused. Anyone else being wrong on the internet like that wouldn't have seemed like a big deal, but Scott Alexander had written that [rationalism is the belief that Eliezer Yudkowsky is the rightful caliph](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/). After extensive attempts by me and my allies to get him to clarify amounted to nothing, we felt justified in concluding that Yudkowsky and his Caliphate of so-called "rationalists" were corrupt.

Anyway, given that the "rationalists" were fake and that we needed something better, there remained the question of what to do about that, and how to relate to the old thing.

@@ -25,7 +25,7 @@ I had been hyperfocused on prosecuting my Category War, but the reason Michael V

[^posse-boundary]: Sarah Constantin and "Riley" had also been involved in reaching out to Yudkowsky, and were included in many subsequent discussions, but seemed like more marginal members of the group that was forming.
-Ben had previously worked at GiveWell and had written a lot about problems with the effective altruism movement, in particular, EA-branded institutions making [incoherent](http://benjaminrosshoffman.com/givewell-and-partial-funding/) [decisions](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/) under the influence of incentives to [distort](http://benjaminrosshoffman.com/humility-argument-honesty/) [information](http://benjaminrosshoffman.com/honesty-and-perjury/) [in order to](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/) [seek](http://benjaminrosshoffman.com/against-neglectedness/) [control](http://benjaminrosshoffman.com/against-responsibility/). Jessica had previously worked at MIRI, where she was unnerved by under-evidenced paranoia about secrecy and [short AI timelines](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), and would later [write](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe) [about](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards) her experiences there. To what extent were my thing, and Ben's thing, and Jessica's thing, manifestations of "the same" underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit? +Ben had previously worked at GiveWell and had written a lot about problems with the effective altruism movement, in particular, EA-branded institutions making [incoherent](http://benjaminrosshoffman.com/givewell-and-partial-funding/) [decisions](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/) under the influence of incentives to [distort](http://benjaminrosshoffman.com/humility-argument-honesty/) [information](http://benjaminrosshoffman.com/honesty-and-perjury/) [in order to](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/) [seek](http://benjaminrosshoffman.com/against-neglectedness/) [control](http://benjaminrosshoffman.com/against-responsibility/). Jessica had previously worked at MIRI, where she was unnerved by under-evidenced paranoia about secrecy and [short AI timelines](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), and would later [write](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe) [about](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards) her experiences there. To what extent were my gender and categories thing, and Ben's EA thing, and Jessica's MIRI thing, manifestations of "the same" underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit? If there was a real problem, I didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But culture is the sum of lots and lots of little micro-actions by lots and lots of people. 
If your entire culture has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or that they don't think anything has changed, or that they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!" Any ideologue or religious person would agree with _that_. It's not feasible to litigate every petty epistemic crime in something someone said, and if you tried, someone who thought the culture was basically on track could accuse you of cherry-picking. If "culture" is a real thing at all—and it certainly seems to be—we are condemned to grasp it unclearly, relying on the brain's pattern-matching faculties to sum over thousands of little micro-actions as a [_gestalt_](https://en.wiktionary.org/wiki/gestalt), rather than having the kind of robust, precise representation a well-designed AI could compute plans with.

@@ -59,35 +59,35 @@ I may have subconsciously pulled off an interesting political maneuver. In my fi

And as it happened, on 4 May 2019, Yudkowsky [re-Tweeted Colin Wright on the "univariate fallacy"](https://twitter.com/ESYudkowsky/status/1124751630937681922)—the point that group differences aren't a matter of any single variable—which was _sort of_ like the clarification I had been asking for. (Empirically, it made me feel less aggrieved.) Was I wrong to interpret this as [another "concession" to me](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#proton-concession)? (Again, notwithstanding that the whole mindset of extracting "concessions" was corrupt and not what our posse was trying to do.)

-Separately, on 30 April 2019, I visited the house where "Meredith" and her husband Mike and Kelsey Piper and some other people lived, which I'll call "Arcadia".[^named-houses] I said, essentially, [Oh man oh jeez](https://www.youtube.com/watch?v=q_eMvgNrQQE), Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're _the only ones backing me up_ on this _incredibly basic philosophy thing_ and I don't feel like I have anywhere else to _go_. This culminated in a group conversation with the entire house, which I found disturbing insofar as everyone else seemed to agree on things that I thought were clearly contrary to the spirit of the Sequences.

+Separately, on 30 April 2019, I visited the house where "Meredith" and her husband Mike and Kelsey Piper and some other people lived, which I'll call "Arcadia".[^named-houses] I said, essentially, [Oh man oh jeez](https://www.youtube.com/watch?v=q_eMvgNrQQE), Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're _the only ones backing me up_ on this _incredibly basic philosophy thing_ and I don't feel like I have anywhere else to _go_. This culminated in a group conversation with the entire house, which I found disturbing; everyone else seemed to agree on things that I thought were clearly contrary to the spirit of the Sequences.

[^named-houses]: It was common practice in our subculture to name group houses.
My apartment was "We'll Name It Later." In an adorable twist, Mike and "Meredith"'s two-year-old son was reportedly saying the next day that Kelsey doesn't like his daddy, which was confusing until it was figured out he had heard Kelsey talking about why she doesn't like Michael _Vassar_.[^mike-pseudonym] -[^mike-pseudonym]: I'm not giving Mike a pseudonym because his name is needed for this adorable anecdote to make sense, and this Whole Dumb Story isn't otherwise saying sensitive things about him. +[^mike-pseudonym]: I'm not giving Mike a pseudonym because his name is needed for this adorable anecdote to make sense, and I'm not otherwise saying sensitive things about him. -And as it happened, on 7 May 2019, Kelsey wrote [a Facebook comment displaying evidence of understanding my point](/images/piper-spending_social_capital_on_talking_about_trans_issues.png). +And as it happened, on 7 May 2019, Kelsey wrote [a Facebook comment displaying evidence of understanding my thesis](/images/piper-spending_social_capital_on_talking_about_trans_issues.png). -These two datapoints led me to a psychological hypothesis (which was maybe "obvious", but I hadn't thought about it before): when people see someone wavering between their coalition and a rival coalition, they're motivated to offer a few concessions to keep the wavering person on their side. Kelsey could _afford_ (_pace_ [Upton Sinclair](https://www.goodreads.com/quotes/21810-it-is-difficult-to-get-a-man-to-understand-something)) to not understand the thing about sex being a natural category when it was just me freaking out alone, but "got it" almost as soon as I could credibly threaten to _walk_ (defect to a coalition of people she dislikes) ... and maybe my "closing thoughts" email had a similar effect on Yudkowsky (assuming he otherwise wouldn't have spontaneously tweeted something about the univariate fallacy two weeks later)?? This probably wouldn't work if you repeated it (or tried to do it consciously)? +These two datapoints led me to a psychological hypothesis: when people see someone of some value wavering between their coalition and a rival coalition, they're motivated to offer a few concessions to keep the wavering person on their side. Kelsey could afford (_cf._ [Upton Sinclair](https://www.goodreads.com/quotes/21810-it-is-difficult-to-get-a-man-to-understand-something)) to not understand the thing about sex being a natural category when it was just me freaking out alone, but "got it" almost as soon as I could credibly threaten to _walk_ (defect to a coalition of people she dislikes). Maybe my "closing thoughts" email had a similar effect on Yudkowsky, assuming he otherwise wouldn't have spontaneously tweeted something about the univariate fallacy two weeks later? This probably wouldn't work if you repeated it, or tried to do it consciously? ---- -I started drafting a "why I've been upset for five months and have lost faith in the so-called 'rationalist' community" memoir-post. Ben said that the target audience to aim for was sympathetic-but-naïve people like I was a few years ago, who hadn't yet had the experiences I had—so they wouldn't have to freak out to the point of being imprisoned and demand help from community leaders and not get it; they could just learn from me. +I started drafting a "why I've been upset for five months and have lost faith in the so-called 'rationalist' community" memoir-post. 
Ben said that the target audience to aim for was sympathetic-but-naïve people like I had been a few years ago, who hadn't yet had the experiences I had. This way, they wouldn't have to freak out to the point of [being imprisoned](/2017/Mar/fresh-princess/) and demand help from community leaders and not get it; they could just learn from me. I didn't know how to continue it. I was too psychologically constrained; I didn't know how to tell the Whole Dumb Story without (as I perceived it) escalating personal conflicts or leaking info from private conversations. I decided to take a break from the religious civil war [and from this blog](/2019/May/hiatus/), and [declared May 2019 as Math and Wellness Month](http://zackmdavis.net/blog/2019/05/may-is-math-and-wellness-month/). -My dayjob performance had been suffering terribly for months. The psychology of the workplace is ... subtle. There's a phenomenon where some people are _way_ more productive than others and everyone knows it, but no one is cruel enough [to make it _common_ knowledge](https://slatestarcodex.com/2015/10/15/it-was-you-who-made-my-blue-eyes-blue/), which is awkward for people who simultaneously benefit from the culture of common-knowledge-prevention allowing them to collect the status and money rents of being a $150K/yr software engineer without actually [performing at that level](http://zackmdavis.net/blog/2013/12/fortune/), while also having [read enough Ayn Rand as a teenager](/2017/Sep/neither-as-plea-nor-as-despair/) to be ideologically opposed to subsisting on unjustly-acquired rents rather than value creation. The "everyone knows I feel guilty about underperforming, so they don't punish me because I'm already doing enough internalized domination to punish myself" dynamic would be unsustainable if it were to evolve into a loop of "feeling gulit _in exchange for_ not doing work" rather than the intended "feeling guilt in order to successfully incentivize work". I didn't think they would actually fire me, but I was worried that they _should_. +My dayjob performance had been suffering terribly for months. The psychology of the workplace is ... subtle. There's a phenomenon where some people are vastly more productive than others and everyone knows it, but no one is cruel enough [to make it _common_ knowledge](https://slatestarcodex.com/2015/10/15/it-was-you-who-made-my-blue-eyes-blue/), which is awkward for people who simultaneously benefit from the culture of common-knowledge-prevention allowing them to collect the status and money rents of being a $150K/yr software engineer without actually [performing at that level](http://zackmdavis.net/blog/2013/12/fortune/), while also having [read enough Ayn Rand as a teenager](/2017/Sep/neither-as-plea-nor-as-despair/) to be ideologically opposed to subsisting on unjustly-acquired rents rather than value creation. The "everyone knows I feel guilty about underperforming, so they don't punish me because I'm already doing enough internalized domination to punish myself" dynamic would be unsustainable if it were to evolve into a loop of "feeling guilt in exchange for not doing work" rather than the intended "feeling guilt in order to successfully incentivize work". I didn't think the company would fire me, but I was worried that they _should_. -I asked my boss to temporarily take on some easier tasks that I could make steady progress on even while being psychologically impaired from a religious war. 
(We had a lot of LaTeX templating of insurance policy amendments that needed to get done.) If I was going to be psychologically impaired _anyway_, it was better to be upfront about how I could best serve the company given that impairment, rather than hoping that the boss wouldn't notice.

+I asked my boss to temporarily assign me some easier tasks that I could make steady progress on even while being psychologically impaired from a religious war. (We had a lot of LaTeX templating of insurance policy amendments that needed to get done.) If I was going to be psychologically impaired _anyway_, it was better to be upfront about how I could best serve the company given that impairment, rather than hoping that the boss wouldn't notice.

-My "intent" to take a break from the religious war didn't take. I met with Anna on the UC Berkeley campus, and read her excerpts from some of Ben's and Jessica's emails. (She had not acquiesced to my request for a comment on "... Boundaries?", including in the form of two paper postcards that I stayed up until 2 _a.m._ on 14 April 2019 writing; I had figured that spamming people with hysterical and somewhat demanding physical postcards was more polite (and funnier) than my usual habit of spamming people with hysterical and somewhat demanding emails.) While we (my posse) were aghast at Yudkowsky's behavior, she was aghast at ours: reaching out to try to have a conversation with Yudkowsky, and then concluding he was a fraud because we weren't satisfied with the outcome was like hiding soldiers in an ambulance, introducing a threat against Yudkowsky in context where he had a right to be safe.

+My "intent" to take a break from the religious war didn't take. I met with Anna on the UC Berkeley campus, and read her excerpts from some of Ben's and Jessica's emails. (She had not acquiesced to my request for a comment on "... Boundaries?", including in the form of two paper postcards that I stayed up until 2 _a.m._ on 14 April 2019 writing; I had figured that spamming people with hysterical and somewhat demanding physical postcards was more polite than my usual habit of spamming people with hysterical and somewhat demanding emails.) While we (my posse) were aghast at Yudkowsky's behavior, she was aghast at ours: reaching out to try to have a conversation with Yudkowsky, and then concluding that he was a fraud because we weren't satisfied with the outcome was like hiding soldiers in an ambulance, introducing a threat against him in a context where he had a right to be safe.

-I complained that I had _actually believed_ our own marketing material about the "rationalists" remaking the world by wielding a hidden Bayesian structure of Science and Reason that applies [outside the laboratory](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory). Was that all a lie? Were we not trying to do the thing anymore? Anna was dismissive: she thought that the idea I had gotten about what "the thing" was, was never actually part of the original vision. She kept repeating that she had _tried_ to warn me in previous years that public reason didn't work, and I didn't listen. (Back in the late 'aughts, she had often recommended Paul Graham's essay ["What You Can't Say"](http://paulgraham.com/say.html) to people, summarizing Graham's moral that you should figure out the things you can't say in your culture, and then don't say them.)
+I complained that I had _actually believed_ our own [marketing](https://www.lesswrong.com/posts/aFEsqd6ofwnkNqaXo/go-forth-and-create-the-art) [material](https://www.lesswrong.com/posts/jP583FwKepjiWbeoQ/epistle-to-the-new-york-less-wrongians) about the "rationalists" remaking the world by wielding a hidden Bayesian structure of Science and Reason that applies [outside the laboratory](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory). Was that all a lie? Were we not trying to do the thing anymore? Anna was dismissive: she thought that the idea I had gotten about what "the thing" was, was never actually part of the original vision. She kept repeating that she had tried to warn me that public reason didn't work, and I didn't listen. (Back in the late 'aughts, she had often recommended Paul Graham's essay ["What You Can't Say"](http://paulgraham.com/say.html) to people, summarizing Graham's moral that you should figure out the things you can't say in your culture, and then don't say them, in order to avoid getting drawn into pointless conflicts.) -It was true that she had tried to warn me for years, and (not yet having gotten over [my teenage ideological fever dream](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism)), I hadn't known how to listen. But this seemed really fundamentally unresponsive to how _I_ kept repeating that I only expected consensus on the basic philosophy-of-language stuff (not my object-level special interest). Why was it so unrealistic to imagine that the actually-smart people could [enforce standards](https://srconstantin.github.io/2018/12/24/contrite-strategies.html) in our own tiny little bubble of the world? +It was true that she had tried to warn me for years, and (not yet having gotten over [my teenage ideological fever dream](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism)), I hadn't known how to listen. But this seemed fundamentally unresponsive to how _I_ kept repeating that I only expected consensus on the basic philosophy-of-language stuff (not my object-level special interest). Why was it so unrealistic to imagine that the actually-smart people could [enforce standards](https://srconstantin.github.io/2018/12/24/contrite-strategies.html) in our own tiny little bubble of the world? My frustration bubbled out into follow-up emails: @@ -97,47 +97,35 @@ I added: > Can you please _acknowledge that I didn't just make this up?_ Happy to pay you $200 for a reply to this email within the next 72 hours -Anna said she didn't want to receive [cheerful price](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price) offers from me anymore; previously, she had regarded my custom of throwing money at people to get what I wanted as good-faith libertarianism between consenting adults, but now she was afraid that if she accepted, it would be portrayed in some future Ben Hoffman essay as an instance of her _using_ me. She agreed that someone could have gotten the ideals I had gotten out of "A Sense That More Is Possible", "Raising the Sanity Waterline", _&c._, but there was also evidence from that time pointing the other way (_e.g._, ["Politics Is the Mind-Killer"](https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer)), that it shouldn't be surprising if people steered clear of controversy. 
+Anna said she didn't want to receive [cheerful price](https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price) offers from me anymore; previously, she had regarded my custom of recklessly throwing money at people to get what I wanted as good-faith libertarianism between consenting adults, but now she was afraid that if she accepted, it would be portrayed in some future Ben Hoffman essay as an instance of her _using_ me. She agreed that someone could have gotten the ideals I had gotten out of those posts, but there was also evidence from that time pointing the other way (_e.g._, ["Politics Is the Mind-Killer"](https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer)), that it shouldn't be surprising if people steered clear of controversy. -I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that _whether or not I should cut my dick off_ would _become_ a political issue. That was _new evidence_ about whether the original vision was wise! I wasn't trying to do politics with my idiosyncratic special interest; I was trying to _think seriously_ about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit a trivial epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, but couldn't correct the mistake even after it was pointed out, then the 2019-era "rationalists" were _worse than useless_ to me personally. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I was part of a reference class that includes AI researchers). +I replied: but when forming the original let's-be-apolitical vision in 2008, we did not anticipate that whether or not I should cut my dick off would _become_ a political issue. That was new evidence about whether the original vision was wise! I wasn't particularly trying to do politics with my idiosyncratic special interest; I was trying to think seriously about the most important thing in my life and only do the minimum amount of politics necessary to protect my ability to think. If 2019-era "rationalists" were going to commit a trivial epistemology mistake that interfered with my ability to think seriously about the most important thing in my life, but couldn't correct the mistake even after it was pointed out, then the "rationalists" were _worse than useless_ to me. This probably didn't matter causally (I wasn't an AI researcher, therefore I didn't matter), but it might matter timelessly (if I was part of a reference class that included AI researchers). -Fundamentally, I was skeptical that you _could_ do consisently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky described in ["Entangled Truths, Contagious Lies"](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) and ["Dark Side Epistemology"](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology): the need to lie about lying and cover up cover-ups propagates recursively. Anna in particular was unusually skillful at thinking things without saying them; I thought most people facing similar speech restrictions just get worse at thinking (plausibly[^plausibly] including Yudkowsky), and the problem gets worse as the group effort scales. 
(It's easier to recommend ["What You Can't Say"](http://www.paulgraham.com/say.html) to your housemates than to put it on a canonical reading list, for obvious reasons.) You _can't_ optimize your group's culture for not-talking-about-atheism without also optimizing against understanding [Occam's razor](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor); you _can't_ optimize for not questioning gender self-identity without also optimizing against understanding the [37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).

+Fundamentally, I was skeptical that you _could_ do consistently high-grade reasoning as a group without committing heresy, because of the mechanism that Yudkowsky had described in ["Entangled Truths, Contagious Lies"](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies) and ["Dark Side Epistemology"](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology): the need to lie about lying and cover up cover-ups propagates recursively. Anna in particular was unusually skillful at thinking things without saying them; I thought most people facing similar speech restrictions just get worse at thinking (plausibly[^plausibly] including Yudkowsky), and the problem gets worse as the group effort scales. (It's easier to recommend ["What You Can't Say"](http://www.paulgraham.com/say.html) to your housemates than to put it on a canonical reading list, for obvious reasons.) You can't optimize your group's culture for not-talking-about-atheism without also optimizing against understanding [Occam's razor](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor); you can't optimize for not questioning gender self-identity without also optimizing against understanding the [37 ways that words can be wrong](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).

-[^plausibly]: Today I would say _obviously_, but at this point, I was still deep enough in my hero-worship that I wrote "plausibly".

+[^plausibly]: I was still deep enough in my hero-worship that I wrote "plausibly". Today, I would not consider the adverb necessary.

Despite Math and Wellness Month and my "intent" to take a break from the religious civil war, I kept reading _Less Wrong_ during May 2019, and ended up scoring a couple of victories in the civil war (at some cost to Wellness).

MIRI researcher Scott Garrabrant wrote a post about how ["Yes Requires the Possibility of No"](https://www.lesswrong.com/posts/G5TwJ9BGxcgh5DsmQ/yes-requires-the-possibility-of-no). Information-theoretically, a signal sent with probability one transmits no information: you can only learn something from hearing a "Yes" if you believed that the answer could have been "No". I saw an analogy to my philosophy-of-language thesis, and mentioned it in a comment: if you want to believe that _x_ belongs to category _C_, you might try redefining _C_ in order to make the question "Is _x_ a _C_?" come out "Yes", but you can only do so at the expense of making _C_ less useful. Meaningful category-membership (Yes) requires the possibility of non-membership (No).
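(To spell out the information-theoretic point: for a binary answer with probability _p_ of "Yes", the entropy −_p_ log₂ _p_ − (1 − _p_) log₂ (1 − _p_) is a full bit when _p_ = 0.5 and falls to zero as _p_ approaches 1; a "Yes" that was certain in advance tells you nothing you didn't already know.)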
-MIRI research associate Vanessa Kosoy [commented](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=FxSZwECjhgYE7p2du):

+MIRI research associate Vanessa Kosoy [objected that](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=FxSZwECjhgYE7p2du) she found it "unpleasant that [I] always bring [my] hobbyhorse in, but in an 'abstract' way that doesn't allow discussing the actual object level question"; it made her feel "attacked in a way that allow[ed] for no legal recourse to defend [herself]." I [replied](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=32GPaijsSwX2NSFJi) that that was understandable, but that I hoped it was also understandable that I found it unpleasant that our standard Bayesian philosophy of language somehow got politicized, such that my attempts to do correct epistemology were perceived as attacking people.

-> I find it unpleasant that you always bring your hobbyhorse in, but in an "abstract" way that doesn't allow discussing the actual object level question. It makes me feel attacked in a way that allows for no legal recourse to defend myself.

+The ensuing trainwreck got so bad that the mods manually [moved the comments to their own post](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019). Based on the karma scores and what was said,[^yes-requires-slapfight-highlights] I count it as a "victory" for me.

-I [replied](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=32GPaijsSwX2NSFJi) that that was understandable, but that I hoped it was also understandable that I found it unpleasant that our standard Bayesian philosophy of language somehow got politicized (!?), such that my attempts to do _correct epistemology_ were perceived as attacking people. Imagine living in a world where [posts about the minimum description length principle](https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length) were perceived as an attack on Christians—or if that analogy seemed loaded (because our subculture pattern matches atheism as "the good guys"), imagine some racist getting _really interested_ in the statistics of the normal distribution, and posting about the ratio of areas in the right tails of normal distributions with different means. I could see how that would be annoying—maybe even threatening—which would make it all the more satisfying if you could find a _mistake_ in the bastard's math. But if you _couldn't_ find a mistake—if, in fact, the post is on-topic for the forum and correct in the literal things that it literally says, then complaining about the author's motive for being interested in the normal distribution wouldn't seem like an obviously positive contribution to the discourse? I saw the problem, of course, and didn't mean to play dumb about it. But what, realistically, did Kosoy expect the atheist—or the racist, or me—to do?

+[^yes-requires-slapfight-highlights]: I particularly appreciated Said Achmiz's [defense of disregarding community members' feelings](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=EsSdLMrFcCpSvr3pG), and [Ben's commentary on speech acts that lower the message length of proposals to attack some group](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=TXbgr7goFtSAZEvZb).
-In a subthread in which I contested Kosoy's characterization of me as a "voice with an agenda which, if implemented, would put [her] in physical danger" ("I don't think of myself as having a lot of strong political beliefs," I said, "but I'm going to take a definite stand here: I am _against_ people being in physical danger"), Ben [pointed out that](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=TXbgr7goFtSAZEvZb) +On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to "... Not Man for the Categories" as one of Scott Alexander's best essays. I argued that it would be better to cite almost literally any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: either Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, or one had no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way. The moderator who wrote the draft [was persuaded](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for?commentId=oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory." -> Some speech acts lower the message length of proposals to attack some groups, or raise the message length of attempts to prevent such attacks. This is a kind of meta-attack or threat, like concentrating troops on a country's border. +But winning "victories" wasn't particularly comforting when I resented this becoming a political slapfight at all. I wrote to Anna and Steven Kaas (another old-timer who I was trying to "recruit" onto my side of the civil war). In ["What You Can't Say"](http://www.paulgraham.com/say.html), Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what one's real work was. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except for this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason, should probably be doing more fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improving our collective ability to reason" in a way that they don't hurt the mission of "make a lot of money writing software." -Norms discouraging "political" speech could aggravate the problem, if the response looked "political" but the original threat didn't. If Kosoy wanted to put in the work to explain why my philosophy of language blogging was causing problems for her, she would face legitimate doubt whether her defensive measures would be "admissible". +I said, I didn't know if either of them had caught the "Yes Requires the Possibility" trainwreck, but wasn't it terrifying that the person who objected to my innocuous philosophy comment was a goddamned _MIRI research associate_? 
Not to demonize Kosoy, because [I was just as bad (if not worse) in 2008](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#hair-trigger-antisexism). The difference was that in 2008, we had a culture that could beat it out of me.

-The trainwreck got so bad that the mods manually [moved the comments to their own post](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019). Based on the karma scores and what was said (Said Achmiz gave [a particularly helpful defense of disregarding community members' feelings](https://www.lesswrong.com/posts/WwTPSkNwC89g3Afnd/comment-section-from-05-19-2019?commentId=EsSdLMrFcCpSvr3pG)), I count it as a "victory" for me.

+Steven objected that tractability and side effects matter, not just the effect on the mission considered in isolation. For example, the Earth's gravitational field directly impedes NASA's mission, and doesn't hurt Paul Graham, but both NASA and Paul Graham should spend the same amount of effort trying to reduce the Earth's gravity (_viz._, zero).

-On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to "... Not Man for the Categories" as one of Scott Alexander's best essays. I argued that it would be better to cite _almost literally_ any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: _either_ Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, _or_ one had no grounds to criticize me for calling it a blatant lie, because there's no rule of rationality that says I shouldn't draw the category boundaries of "blatant lie" that way. The mod [was persuaded on reflection](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for?commentId=oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory."

+I agreed that tractability needs to be addressed, but the situation felt analogous to being in a coal mine in which my favorite one of our canaries had just died. Caliphate officials (Yudkowsky, Alexander, Anna) and loyalists (Steven) were patronizingly consoling me: sorry, I know you were really attached to that canary, but it's just a bird; it's not critical to the coal-mining mission. I agreed that I was unreasonably attached to that particular bird, which was the direct cause of why I-in-particular was freaking out, but that's not why I expected _them_ to care. The problem was what the dead canary was evidence of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claimed to have spotted their own dead canaries. I felt like the old-timer Rationality Elders should have been able to get on the same page about the canary-count issue?

-But winning "victories" wasn't particularly comforting when I resented this becoming a political slapfight at all. I thought a lot of the objections I (and my "allies") faced in the derailed "Possibility of No" thread were insane.
- -I wrote to Anna and Steven Kaas (who I was trying to "recruit" onto our side of the civil war). In ["What You Can't Say"](http://www.paulgraham.com/say.html), Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what _is_ one's real work. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except for this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason, should probably be doing _more_ fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improving our collective ability to reason", in a way that they don't hurt the mission of "make a lot of money writing software." - -I said, I didn't know if either of them had caught the recent trainwreck on _Less Wrong_, but wasn't it _terrifying_ that the person who objected was a goddamned _MIRI research associate_? Not to demonize Kosoy, because [I was just as bad (if not worse) in 2008](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#hair-trigger-antisexism). The difference was that in 2008, we had a culture that could _beat it out of me_. - -Steven objected that tractibility and side effects matter, not just effect on the mission considered in isolation. For example, the Earth's graviational field directly impedes NASA's mession, and doesn't hurt Paul Graham, but both NASA and Paul Graham should spend the same amount of effort (_viz._, zero) trying to reduce the Earth's gravity. - -I agreed that tractability needs to be addressed, but I felt like—we were in a coal mine, and my favorite one of our canaries just died, and I was freaking out about this, and represenatives of the Caliphate (Yudkowsky, Alexander, Anna, Steven) were like, Sorry, I know you were really attached to that canary, but it's just a bird; you'll get over it; it's not really that important to the coal-mining mission. - -And I was like, I agree that I was unreasonably emotionally attached to that particular bird, which was the direct cause of why I-in-particular was freaking out, but that's not why I expected _them_ to care. The problem was not the dead bird; the problem was what the bird was _evidence_ of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claimed to have spotted their own dead canaries. I felt like the old-timer Rationality Elders should have been able to get on the same page about the canary-count issue? - -Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/), and [some probability/information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [later turned out to super-relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break. 
+Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/), and [some probability/information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [later turned out to be deeply relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break.

In June 2019, I made [a linkpost on _Less Wrong_](https://www.lesswrong.com/posts/5nH5Qtax9ae8CQjZ9/tal-yarkoni-no-it-s-not-the-incentives-it-s-you) to Tal Yarkoni's ["No, It's Not The Incentives—It's you"](https://www.talyarkoni.org/blog/2018/10/02/no-its-not-the-incentives-its-you/), about how professional scientists should stop using career incentives as an excuse for doing poor science. It generated a lot of discussion.

@@ -209,7 +197,7 @@ On 4 July 2019, Scott Alexander published ["Some Clarifications on Rationalist B

Also in early July 2019, Jessica published ["The AI Timelines Scam"](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), arguing that the recent popularity of "short" (_e.g._, 2030) AI timelines was better explained by political factors, rather than any technical arguments: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own project.

-(Remember, this was 2019. After seeing what GPT-3/PaLM/DALL-E/_&c._ could do during the "long May 2020", it's now looking to me like the short-timelines people had better intuitions than Jessica gave them credit for.)

+(Remember, this was 2019. After seeing what GPT-3, [DALL-E](https://openai.com/research/dall-e), [PaLM](https://arxiv.org/abs/2204.02311), _&c._ could do during the "long May 2020", it's now looking to me like the short-timelines people had better intuitions than Jessica gave them credit for.)

I still sympathized with the "mainstream" pushback against using "scam"/"fraud"/"lie"/_&c._ language to include motivated [elephant-in-the-brain](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain)-like distortions. I conceded that this was a _boring_ semantic argument, but I feared that until we invented better linguistic technology, the _boring_ semantic argument was going to _continue_ sucking up discussion bandwidth with others when it didn't need to.

@@ -295,23 +283,23 @@ I had had [similar](/2022/Apr/student-dysphoria-and-a-previous-lifes-war/) [prob

Empirically, not right! But I had to resolve the contradiction between narrative and reality somehow, and if my choices were "People are [mistakenly](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) failing to live up to the narrative" and "[Everybody knows](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) the narrative is a lie; it would be crazy to expect people to live up to it", the former had been more appealing.

-It was the same thing here. Kelsey said that it was completely predictable that Yudkowsky wouldn't make a public statement, even one as uncontroversial as "category boundaries should be drawn for epistemic and not instrumental reasons", because his experience of public statements was that they'd be taken out of context and used against MIRI by the likes of /r/SneerClub. This wasn't an update at all. (Everyone at "Arcadia" had agreed, in the group discussion on 30 April.)
Vassar's insistence that Eliezer be expected to do something that he obviously was never going to do had caused me to be confused and surprised by reality. +It was the same thing here. Kelsey said that it was completely predictable that Yudkowsky wouldn't make a public statement, even one as uncontroversial as "category boundaries should be drawn for epistemic and not instrumental reasons", because his experience of public statements was that they'd be taken out of context and used against MIRI by the likes of /r/SneerClub. This wasn't an update at all. (Everyone at "Arcadia" had agreed, in the house discussion on 30 April.) Vassar's insistence that Eliezer be expected to do something that he obviously was never going to do had caused me to be confused and surprised by reality.[^statement] + +[^statement]: I thought it was odd that Kelsey seemed to think the issue was that me and my allies were pressuring Yudkowsky to make a public statement, which he never does. From our perspective, the issue was that he _had_ made a statement, and it was wrong. -Kelsey seemed to be taking it as obvious that Eliezer Yudkowsky's public behavior was optimized to respond to the possibility of political attacks by people who hate him anyway, and not optimized to respond to the actuality of thousands of words of careful arguments appealing to his own writings from ten years ago. Very well. Maybe it _was_ obvious. But that being the case, I had no reason to care what Eliezer Yudkowsky says, because not-provoking-SneerClub isn't truth tracking, and careful arguments are. This was a huge surprise _to me_, even if Kelsey knew better. +Kelsey seemed to be taking it as obvious that Eliezer Yudkowsky's public behavior was optimized to respond to the possibility of political attacks by people who hate him anyway, and not optimized to respond to the actuality of thousands of words of careful arguments appealing to his own writings from ten years ago. Very well. Maybe it _was_ obvious. But that being the case, I had no reason to care what Eliezer Yudkowsky says, because not-provoking-SneerClub isn't truth-tracking, and careful arguments are. This was a huge surprise _to me_, even if Kelsey knew better. What Kelsey saw as "Zack is losing his ability to model other people and I'm worried about him", I thought Ben and Jessica would see as "Zack is angry about living in [simulacrum level 3](http://benjaminrosshoffman.com/excerpts-from-a-larger-discussion-about-simulacra/) and we're worried about _everyone else_." I did think that Kelsey was mistaken about how much causality to attribute to Michael's influence, rather than me already being socially retarded. From my perspective, validation from Michael was merely the catalyst that excited me from confused-and-sad to confused-and-socially-aggressive-about-it. The social-aggression phase revealed a lot of information—not just to me. Now I was ready to be less confused—after I was done grieving. -------- - -Later, talking in person at "Arcadia", Kelsey told me that someone (whose identity she would not disclose) had threatened to sue over the report about Michael, so REACH was delaying its release for the one-year statute of limitations. As far as my interest in defending Michael went, I counted this as short-term good news (because the report wasn't being published) but longer-term bad news (because the report must be a hit piece if Michael's mysterious ally was trying to hush it). 
+Later, talking in person at "Arcadia", Kelsey told me that someone whose identity she would not disclose had threatened to sue over the report about Michael, so REACH was delaying its release for the one-year statute of limitations. As far as my interest in defending Michael went, I counted this as short-term good news (because the report wasn't being published) but longer-term bad news (because the report must be a hit piece if Michael's mysterious ally was trying to hush it).

When I mentioned this to Michael on Signal on 3 August 2019, he replied:

> The person is me, the whole process is a hit piece, literally, the investigation process and not the content. Happy to share the latter with you. You can talk with Ben about appropiate ethical standards.

-In retrospect, I feel dumb for not guessing that Michael's mysterious ally was Michael himself. I count this kind of situation as another reason to be [annoyed at how norms protecting confidentiality](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#privacy-constraints) distort information; Kelsey apparently felt obligated to obfuscate any names connected to potential litigation, which led me to the infer the existence of a nonexistent person (because I naïvely assumed that if Michael had threatened to sue, Kelsey would have said that). I can't say I never introduce this kind of disortion in my communications (for I, too, am bound by norms), but when I do, I feel dirty about it.

+In retrospect, I feel dumb for not guessing that Michael's mysterious ally was Michael himself. I count this kind of situation as another reason to be [annoyed at how norms protecting confidentiality](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#privacy-constraints) distort information; Kelsey apparently felt obligated to obfuscate any names connected to potential litigation, which led me to infer the existence of a nonexistent person (because I naïvely assumed that if Michael had been the person who threatened to sue, Kelsey would have said that). I can't say I never introduce this kind of distortion myself (for I, too, am bound by norms), but when I do, I feel dirty about it.

As far as appropriate ethical standards go, I didn't particularly approve of silencing critics with lawsuit threats, even while I agreed with Michael that "the process is the punishment." I imagine that if the REACH wanted to publish a report about me, I would expect to defend myself in public, having faith that Speech would carry the day against a corrupt community center—or for that matter, against /r/SneerClub.

diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md
index 22fda45..7ab4411 100644
--- a/notes/memoir-sections.md
+++ b/notes/memoir-sections.md
@@ -9,7 +9,7 @@ _ the hill he wants to die on (conclusion for "Zevi's Choice"??)
_ Tail vs. Bailey / Davis vs. Yudkowsky analogy (new block somewhere)
_ mention that "Not Man for the Categories" keeps getting cited

-first edit pass bookmark: "In an adorable twist"
+first edit pass bookmark: "In June 2019, I made"

pt. 3 edit tier—
✓ fullname Taylor and Hoffman at start of pt. 3
@@ -19,12 +19,9 @@ _ Ben on "locally coherent coordination": use direct quotes for Ben's language
_ ask Sarah about context for "EA Has a Lying Problem"?
✓ set context for Anna on first mention in the post
✓ more specific on "mostly pretty horrifying" and group conversation with the whole house
-_ paragraph to explain the cheerful price bit
-_ cut words from the "Yes Requires" slapfight?
-_ better introduction of Steven Kaas +✓ cut words from the "Yes Requires" slapfight? _ "Not the Incentives"—rewrite given that I'm not shielding Ray _ cut many words from "Social Reality" scuffle -_ is "long May 2020" link still good? _ better context on "scam" &c. earlier _ meeting with Ray _ Ben's "financial fraud don't inquire as to the conscious motives of the perp" claim may be false @@ -47,7 +44,7 @@ _ footnote explaining quibbles? (the first time I tried to write this, I hesitat _ "it was the same thing here"—most readers are not going to understand what I see as the obvious analogy _ first mention of Jack G. should introduce him properly _ link to protest flyer -_ weird that Kelsey thought the issue was that we were trying to get Yudkowsky to make a statement, which he never does; from our perspective, the issue was that he _had_ made a statement, and it was wrong +✓ weird that Kelsey thought the issue was that we were trying to get Yudkowsky to make a statement pt. 4 edit tier— _ mention Nick Bostrom email scandal (and his not appearing on the one-sentence CAIS statement)