From d302f3bcabdfa2b5e3eccc7d0b86ba464ed9b618 Mon Sep 17 00:00:00 2001 From: "M. Taylor Saotome-Westlake" Date: Sun, 23 Oct 2022 14:23:34 -0700 Subject: [PATCH] =?utf8?q?memoir:=20Christmas=20party=202019=20(to=20?= =?utf8?q?=C2=A7=20end)?= MIME-Version: 1.0 Content-Type: text/plain; charset=utf8 Content-Transfer-Encoding: 8bit I'm going to want to edit/rewrite part of this to better explain some of the Vassarite inside baseball and tie it off with a zinger, but I think this is good enough for a first pass? --- ...-hill-of-validity-in-defense-of-meaning.md | 46 +++++++++++++++---- notes/a-hill-of-validity-sections.md | 16 ++++++- 2 files changed, 51 insertions(+), 11 deletions(-) diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index 224eece..a400b5f 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -726,7 +726,7 @@ I felt like—we were in a coal-mine, and my favorite one of our canaries just d And I was like, I agree that I was unreasonably emotionally attached to that particular bird, which is the direct cause of why I-in-particular am freaking out, but that's not why I expect _you_ to care. The problem is not the dead bird; the problem is what the bird is _evidence_ of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claim to have spotted their own dead canaries. I feel like the old-timer Rationality Elders should be able to get on the same page about the canary-count issue? 
-Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/), and [some information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [actually turned out to super-relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break. +Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/), and [some probability/information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [actually turned out to be super-relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break. [TODO: * I had posted a linkpost to "No, it's not The Incentives—it's You", which generated a lot of discussion, and Jessica (17 June) identified Ray's comments as the last straw. @@ -926,22 +926,48 @@ Scott messaged back the next morning, Christmas Day. He explained that the thoug I explained that the reason I accused him of being motivatedly dumb was that I _knew_ he knew about strategic equivocation, because he taught everyone else about it (as in his famous posts about [the motte-and-bailey doctrine](https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/), or [the noncentral fallacy](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world)). And so when he acted like he didn't get it when I pointed out that this also applied to "trans women are women", that just seemed _implausible_. -He asked for a specific example. ("Trans women are women, therefore trans women have uteruses," being a bad example, because no one was claiming that.)
I quoted [an article from the nationally prominent progressive magazine _The Nation_](https://www.thenation.com/article/trans-runner-daily-caller-terry-miller-andraya-yearwood-martina-navratilova/): "There is another argument against allowing trans athletes to compete with cis-gender athletes that suggests that their presence hurts cis-women and cis-girls. But this line of thought doesn't acknowledge that trans women are in fact women." Scott agreed that +He asked for a specific example. ("Trans women are women, therefore trans women have uteruses," being a bad example, because no one was claiming that.) I quoted [an article from the nationally prominent progressive magazine _The Nation_](https://www.thenation.com/article/trans-runner-daily-caller-terry-miller-andraya-yearwood-martina-navratilova/): "There is another argument against allowing trans athletes to compete with cis-gender athletes that suggests that their presence hurts cis-women and cis-girls. But this line of thought doesn't acknowledge that trans women are in fact women." Scott agreed that this was stupid and wrong and a natural consequence of letting people use language the way he was suggesting (!). -[TODO: I got to the party and people were doing a read-aloud of the "Hero Licensing" dialogue _Inadequate Equilibria_ (with Yudkowsky himself playing the Mysterious Stranger)] +I didn't think it was fair to ordinary people to expect them to go as deep into the philosophy-of-language weeds as _I_ could before being allowed to object to these kinds of Shenanigans. I thought "pragmatic" reasons to not just use the natural clustering that you would get by impartially running the clustering algorithm on the subspace of configuration space relevant to your goals, basically amounted to "wireheading" (optimizing someone's map for looking good rather than reflecting the territory) and "war" (optimizing someone's map to not reflect the territory, in order to gain an advantage over them). 
If I were to transition today and didn't pass as well as Jessica, and everyone felt obligated to call me a woman, they would be wireheading me: making me think my transition was successful, even though it actually wasn't. That's ... not actually a nice thing to do to a rationalist. -[TODO: Scott and I retreated to the attic and continued our discussion] +Scott thought that trans people had some weird thing going on in their brain such that being referred to as their natal sex was intrinsically painful, like an electric shock. The thing wasn't an agent, so the [injunction to refuse to give in to extortion](/2018/Jan/dont-negotiate-with-terrorist-memeplexes/) didn't apply. Having to use a word other than the one you would normally use in order to not subject someone to painful electric shocks was worth it. -[TODO: people reading funny GPT-2 quotes, and I got to show off my knowledge of the infomation] +I claimed that I knew things about the etiology of transness such that I didn't think the electric shock was inevitable, but I didn't want the conversation to go there if it didn't have to, because I didn't have to ragequit the so-called "rationalist" community over a complicated empirical thing; I only had to ragequit over bad philosophy. -A MIRI researcher sympathetically told me that it would be sad if I had to leave the Bay Area, which I thought was nice. There was nothing about the immediate conversational context to suggest that I might have to leave the Bay, but I guess by this point, my existence had become a context. +Scott said he might agree with me if he thought the world-model-clarity _vs._ utilitarian benefit tradeoff was unfavorable—or if he thought it had the chance of snowballing like in his "Kolmogorov Complicity and the Parable of Lightning". -[TODO: playing on a different chessboard] +... I pointed out that what sex people are is more relevant to human social life than whether lightning comes before thunder.
He said that the problem in his parable was that people were being made ignorant of things, whereas in the transgender case, no one was being kept ignorant; their thoughts were just following a longer path. -[TODO: feeling much less ragequitty about the rationalists after the party -to ragequit, the elephant in my brain was able to extort more bandwidth out of Scott +I had technical reasons to be very skeptical of the claim that no one was "really" being kept ignorant. If you're sufficiently clever and careful and you remember how language worked when Airstrip One was still Britain, then you can still think, internally, and express yourself as best you can in Newspeak. But a culture in which Newspeak is mandatory, and all of Oceania's best philosophers have clever arguments for why Newspeak doesn't distort people's beliefs ... doesn't seem like a nice place to live, right? Doesn't seem like a culture that can solve AI alignment, right? -[TODO: Ben on me not being on the side of clarity] +I linked to Zvi Mowshowitz's post about how [the claim that "everybody knows" something](https://thezvi.wordpress.com/2019/07/02/everybody-knows/) gets used as an excuse to silence people trying to point out the thing (because they don't see people behaving as if it were common knowledge): "'Everybody knows' our kind of trans women are sampled from the male multivariate distribution rather than the female multivariate distribution, why are you being a jerk and pointing this out?" But I didn't think that everyone knew. I thought the people who sort-of knew were being intimidated into doublethinking around it. I thought this was bad for clarity. + +At this point, Scott mentioned that he wanted to go to the Event Horizon Christmas party, and asked if I wanted to come and continue the discussion there. I assented, and thanked him for his time; it would be really exciting if we could avoid a rationalist civil war.
(I thought my "you need accurate models before you can do utilitarianism" philosophy was also near the root of Ben's objections to the EA movement.) + +When I arrived at the party, people were doing a reading of the "Hero Licensing" dialogue from _Inadequate Equilibria_, with Yudkowsky himself playing the part of the Mysterious Stranger in the dialogue. At some point, Scott and I retreated upstairs to continue our discussion. By the end of it, I was at least feeling more assured of Scott's sincerity (rather than him being coerced into not saying anything incriminating over email). Scott said he would edit in a disclaimer note at the end of "... Not Man for the Categories". + +If I also got the chance to talk to Yudkowsky for a few minutes, I don't think I would be allowed to recount any details of that here due to the privacy rules I'm following in this document. + +The rest of the party was nice. People were reading funny GPT-2 quotes from their phones. At one point, conversation happened to zag in a way that let me show off the probability fact I had learned during Math and Wellness Month. A MIRI researcher sympathetically told me that it would be sad if I had to leave the Bay Area, which I thought was nice. There was nothing about the immediate conversational context to suggest that I might have to leave the Bay, but I guess by this point, my existence had become a context. + +All in all, I was feeling less ragequitty about the rationalists[^no-scare-quotes] after the party—as if by credibly _threatening_ to ragequit, the elephant in my brain had managed to extort more bandwidth from our leadership. The note Scott added to the end of "... Not Man for the Categories" still betrayed some philosophical confusion, but I now felt hopeful about addressing that in a future blog post explaining my thesis that unnatural category boundaries were for "wireheading" or "war", rather than assuming that anyone who didn't get the point from "... Boundaries?"
was lying or retarded. + +[^no-scare-quotes]: Enough to not even scare-quote the term here. + +It was around this time that someone told me that I wasn't adequately taking into account that Yudkowsky was "playing on a different chessboard" than me. (A public figure focused on reducing existential risk from artificial general intelligence is going to sense different trade-offs around Kolmogorov complicity strategies than an ordinary programmer or mere worm focused on _things that don't matter_.) No doubt. But at the same time, I thought Yudkowsky wasn't adequately taking into account the extent to which some of his longtime supporters (like Michael or Jessica) were, or had been, counting on him to uphold certain standards of discourse (rather than chess). + +Another effect of my feeling better after the party was that my motivation to keep working on my memoir of the Category War vanished—as if I was still putting weight on a [zero-sum frame](https://unstableontology.com/2019/09/10/truth-telling-is-aggression-in-zero-sum-frames/) in which the memoir was a nuke that I only wanted to use as an absolute last resort. + +Ben wrote: + +> It seems to me that according to Zack's own account, even writing the memoir _privately_ feels like an act of war that he'd rather avoid, not just using his own territory as he sees fit to create _internal_ clarity around a thing. +> +> I think this has to mean _either_ +> (a) that Zack isn't on the side of clarity except pragmatically where that helps him get his particular story around gender and rationalism validated +> _or_ +> (b) that Zack has ceded the territory of the interior of his own mind to the forces of anticlarity, not for reasons, but just because he's let the anticlaritarians dominate his frame.
+ +Or, I pointed out, (c) I had ceded the territory of the interior of my own mind _to Eliezer Yudkowsky in particular_, and while I had made a lot of progress unwinding this, I was still, still not done, and seeing him at the Newtonmas party set me back a bit. ------- diff --git a/notes/a-hill-of-validity-sections.md b/notes/a-hill-of-validity-sections.md index 967050b..f5452b3 100644 --- a/notes/a-hill-of-validity-sections.md +++ b/notes/a-hill-of-validity-sections.md @@ -1,5 +1,5 @@ blocky sections corresponding to an "event"— -_ Christmas party 2019 (do this today, 22 October?) +_ Christmas party 2019 _ wireheading his fiction subreddit _ Sasha disaster (dedicate a day?) @@ -7,6 +7,10 @@ _ the dolphin war (dedicate a day?) With internet available— +_ Newtonmas +_ link "Untitled" for Scott's anti-feminism +_ my mother on "dolphins are intelligent" +_ link to MIRIcult archive _ italics in _The Nation_ article quote? _ link "Hero Licensing" chapter of Inadequate Equilibria _ "not taking into account considerations" → rephrase to quote "God's dictionary" @@ -45,6 +49,9 @@ _ dath ilan conspiracy references far editing tier— +_ rewrite end of Christmas 2019 § with "optimistic about not needing to finish it", and then, "(It does not have a happy ending.)" +_ post-Christmas conversation should do a better job of capturing the war, that Jessica thinks Scott is Bad for being a psychiatrist +_ conversation with Scott should include the point where I'm trying to do AI theory _ Anna "everyone knows" we don't have free speech 2 Mar 2019, self-centeredness about which global goods matter _ footnote to explain why I always include the year with the month even though it could be inferred from context _ make sure to quote Yudkowsky's LW moderation policy before calling back to it @@ -1526,3 +1533,10 @@ what's really weird is having a delusion, knowing it's a delusion, and _everyone you can't imagine contemporary Yudkowsky adhering to Crocker's rules (http://sl4.org/crocker.html) 
(If you are silent about your pain, _they'll kill you and say you enjoyed it_.) + +4 levels of intellectual conversation https://rationalconspiracy.com/2017/01/03/four-layers-of-intellectual-conversation/ + +If we _actually had_ magical sex change technology of the kind described in ["Changing Emotions"](https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions), no one would even consider clever philosophy arguments about how to redefine words: people who wanted to change sex would just do it, and everyone else would use the corresponding language, not as a favor, but because it straightforwardly described reality. + + +Scott said it sounded like I wasn't a 100% category absolutist, and that I would be willing to let a few tiny things through, and that our real difference was that he thought this gender thing was tiny enough to ignore, and I didn't. I thought his self-report of "tiny enough to ignore" was blatantly false: I predicted that his brain notices when trans women don't pass, and that this affected his probabilistic anticipations about them, decisions towards them, _&c._, and that when he finds out that a passing trans woman is trans, that also affects his probabilistic anticipations, _&c._ This could be consistent with "tiny enough to ignore" if you draw the category boundaries of "tiny" and "ignore" the right way in order to force the sentence to come out "true" ... but you see the problem. If I took what Scott said in "... Not Man for the Categories" literally, I could make _any_ sentence true by driving a truck through the noncentral fallacy. -- 2.17.1