From 544ebe0fcfd95aa1b081cbb3fadfcc4908e7ad79 Mon Sep 17 00:00:00 2001
From: "Zack M. Davis"
Date: Fri, 14 Jul 2023 18:49:18 -0700
Subject: [PATCH] check in

---
 .../a-hill-of-validity-in-defense-of-meaning.md | 14 +++++++-------
 notes/memoir-sections.md                        | 12 ++++++------
 notes/notes.txt                                 |  3 +++
 3 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
index 617197a..c173c48 100644
--- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
+++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md
@@ -1,6 +1,6 @@
 Title: A Hill of Validity in Defense of Meaning
 Author: Zack M. Davis
-Date: 2023-07-01 11:00
+Date: 2023-07-15 11:00
 Category: commentary
 Tags: autogynephilia, bullet-biting, cathartic, categorization, Eliezer Yudkowsky, Scott Alexander, epistemic horror, my robot cult, personal, sex differences, two-type taxonomy, whale metaphors
 Status: draft
@@ -143,7 +143,7 @@ Like many gender-dysphoric males, I [cosplay](/2016/Dec/joined/) [female](/2017/
 By "traits" I mean not just sex chromosomes ([as Yudkowsky suggested on Twitter](https://twitter.com/ESYudkowsky/status/1067291243728650243)), but the conjunction of dozens or hundreds of measurements that are [causally downstream of sex chromosomes](/2021/Sep/link-blood-is-thicker-than-water/): reproductive organs and muscle mass (again, sex difference effect size of [Cohen's _d_](https://en.wikipedia.org/wiki/Effect_size#Cohen's_d) ≈ 2.6) and Big Five Agreeableness (_d_ ≈ 0.5) and Big Five Neuroticism (_d_ ≈ 0.4) and short-term memory (_d_ ≈ 0.2, favoring women) and white-gray-matter ratios in the brain and probable socialization history and [any number of other things](/papers/archer-the_reality_and_evolutionary_significance_of_human_psychological_sex_differences.pdf)—including differences we might not know about, but have prior reasons to suspect exist. No one _knew_ about sex chromosomes before 1905, but given the systematic differences between women and men, it would have been reasonable to suspect the existence of some sort of molecular mechanism of sex determination.

-Forcing a speaker to say "trans woman" instead of "man" in a sentence about my cosplay photos depending on my verbally self-reported self-identity may not be forcing them to _lie_, exactly. It's understood, "openly and explicitly and with public focus on the language and its meaning", what _trans women_ are; no one is making a false-to-fact claim about them having ovaries, for example. But it _is_ forcing the speaker to obfuscate the probabilistic inference they were trying to communicate with the original sentence (about modeling the person in the photograph as being sampled from the "man" [cluster in configuration space](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace)), and instead use language that suggests a different cluster-structure. ("Trans women", two words, are presumably a subcluster within the "women" cluster.) Crowing in the public square about how people who object to being forced to "lie" must be ontologically confused is ignoring the interesting part of the problem. Gender identity's [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) functions as a way to [avoid the belief's real weak points](https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points).
+Forcing a speaker to say "trans woman" instead of "man" in a sentence about my cosplay photos depending on my verbally self-reported self-identity may not be forcing them to _lie_, exactly. It's understood, "openly and explicitly and with public focus on the language and its meaning," what _trans women_ are; no one is making a false-to-fact claim about them having ovaries, for example. But it _is_ forcing the speaker to obfuscate the probabilistic inference they were trying to communicate with the original sentence (about modeling the person in the photograph as being sampled from the "man" [cluster in configuration space](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace)), and instead use language that suggests a different cluster-structure. ("Trans women", two words, are presumably a subcluster within the "women" cluster.) Crowing in the public square about how people who object to being forced to "lie" must be ontologically confused is ignoring the interesting part of the problem. Gender identity's [claim to be non-disprovable](https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable) functions as a way to [avoid the belief's real weak points](https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points).

 To this, one might reply that I'm giving too much credit to the "anti-trans" faction for how stupid they're not being: that _my_ careful dissection of the hidden probabilistic inferences implied by words [(including pronoun choices)](/2022/Mar/challenges-to-yudkowskys-pronoun-reform-proposal/) is all well and good, but calling pronouns "lies" is not something you do when you know how to use words.

@@ -321,7 +321,7 @@ Ben said he was more worried that saying politically loaded things in the wrong
 There's a view that assumes that as long as everyone is being cordial, our truthseeking public discussion must be basically on track; the discussion is only being warped by the fear of heresy if someone is overtly calling to burn the heretics.

-I do not hold this view. I think there's a _subtler_ failure mode where people know what the politically favored [bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line) is, and collude to ignore, nitpick, or just be _uninterested_ in any fact or line of argument that doesn't fit. I want to distinguish between direct ideological conformity enforcement attempts, and people not living up to their usual epistemic standards in response to ideological conformity enforcement.
+I do not hold this view. I think there's a subtler failure mode where people know what the politically favored [bottom line](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line) is, and collude to ignore, nitpick, or just be _uninterested_ in any fact or line of argument that doesn't fit. I want to distinguish between direct ideological conformity enforcement attempts, and people not living up to their usual epistemic standards in response to ideological conformity enforcement.

 Especially compared to normal Berkeley, I had to give the Berkeley "rationalists" credit for being very good at free speech norms.
 (I'm not sure I would be saying this in the possible world where Scott Alexander didn't have a [traumatizing experience with social justice in college](https://slatestarcodex.com/2014/01/12/a-response-to-apophemi-on-triggers/), causing him to dump a ton of [anti-social-justice](https://slatestarcodex.com/tag/things-i-will-regret-writing/), [pro-argumentative-charity](https://slatestarcodex.com/2013/02/12/youre-probably-wondering-why-ive-called-you-here-today/) antibodies into the "rationalist" water supply after he became our subculture's premier writer. But it was true in _our_ world.)

 I didn't want to fall into the [bravery-debate](http://slatestarcodex.com/2013/05/18/against-bravery-debates/) trap of, "Look at me, I'm so heroically persecuted, therefore I'm right (therefore you should have sex with me)". I wasn't angry at the "rationalists" for silencing me (which they didn't); I was angry at them for making bad arguments and systematically refusing to engage with the obvious counterarguments.

@@ -439,7 +439,7 @@ I did eventually get some dayjob work done that night, but I didn't finish the w
 I sent an email explaining this to Scott and my posse and two other friends (Subject: "predictably bad ideas").

-Lying down didn't work. So at 5:26 _a.m._, I sent an email to Scott cc'ing my posse plus Anna about why I was so mad (both senses). I had a better draft sitting on my desktop at home, but since I was here and couldn't sleep, I might as well type this version (Subject: "five impulsive points, hastily written because I just can't even (was: Re: predictably bad ideas)"). Scott had been continuing to insist it's okay to gerrymander category boundaries for trans people's mental health, but there were a few things I didn't understand. If creatively reinterpreting the meanings of words because the natural interpretation would make people sad is okay, why didn't that generalize to an argument in favor of _outright lying_ when the truth would make people sad? The mind games seemed crueler to me than a simple lie. Also, if "mental health benefits for trans people" matter so much, then why didn't _my_ mental health matter? Wasn't I trans, sort of? Getting shut down by appeal-to-utilitarianism (!?!?) when I was trying to use reason to make sense of the world was observably really bad for _my_ sanity!
+Lying down didn't work. So at 5:26 _a.m._, I sent an email to Scott cc'ing my posse plus Anna about why I was so mad (both senses). I had a better draft sitting on my desktop at home, but since I was here and couldn't sleep, I might as well type this version (Subject: "five impulsive points, hastily written because I just can't even (was: Re: predictably bad ideas)"). Scott had been continuing to insist it's okay to gerrymander category boundaries for trans people's mental health, but there were a few things I didn't understand. If creatively reinterpreting the meanings of words because the natural interpretation would make people sad is okay, why didn't that generalize to an argument in favor of _outright lying_ when the truth would make people sad? The mind games seemed crueler to me than a simple lie. Also, if "mental health benefits for trans people" matter so much, then why didn't _my_ mental health matter? Wasn't I trans, sort of? Getting shut down by appeal-to-utilitarianism when I was trying to use reason to make sense of the world was observably really bad for _my_ sanity!
 Also, Scott had asked me if it wouldn't be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and I had rejected it because of gender stuff. But the original reason it had ever seemed remotely plausible that we would create Utopia forever wasn't "because we're us, the world-saving good guys," but because we were going to perfect an art of _systematically correct reasoning_. If we weren't going to do systematically correct reasoning because that would make people sad, then that undermined the reason that it was plausible that we would create Utopia forever.

@@ -451,7 +451,7 @@ That seemed a little harsh on Scott to me. At 6:14 _a.m._ and 6:21 _a.m._, I wro
 Michael was _furious_ with me. ("What the FUCK Zack!?! Calling now," he emailed me at 6:18 _a.m._) I texted and talked with him on my train ride home. He seemed to have a theory that people who are behaving badly, as Scott was, will only change when they see a victim who is being harmed. Me escalating and then immediately deescalating just after Michael came to help was undermining the attempt to force an honest confrontation, such that we could _get_ to the point of having a Society with morality or punishment.

-Anyway, I did get to my apartment and sleep for a few hours. One of the other friends I had cc'd on some of the emails, whom I'll call "Susan", came to visit me later that morning with her 2½-year-old son—I mean, her son at the time.
+Anyway, I did get to my apartment and sleep for a few hours. One of the other friends I had cc'd on some of the emails, whom I'll call "Meredith", came to visit me later that morning with her 2½-year-old son—I mean, her son at the time.

 (Incidentally, the code that I had written intermittently between 11 _p.m._ and 4 _a.m._ was a horrible bug-prone mess, and the company has been paying for it ever since.)

@@ -507,7 +507,7 @@ I told Scott I would send him one more email with a piece of evidence about how
 Concerning what others were thinking: on Discord in January, Kelsey Piper had told me that everyone else experienced their disagreement with me as being about where the joints are and which joints are important, where usability for humans was a legitimate criterion of importance, and it was annoying that I thought they didn't believe in carving reality at the joints at all and that categories should be whatever makes people happy.

-I [didn't want to bring it up at the time because](https://twitter.com/zackmdavis/status/1088459797962215429) I was so overjoyed that the discussion was actually making progress on the core philosophy-of-language issue, but Scott _did_ seem to be pretty explicit that his position was about happiness rather than usability? If Kelsey _thought_ she agreed with Scott, but actually didn't, that sham consenus was a bad sign for our collective sanity, wasn't it?
+I [didn't want to bring it up at the time because](https://twitter.com/zackmdavis/status/1088459797962215429) I was so overjoyed that the discussion was actually making progress on the core philosophy-of-language issue, but Scott _did_ seem to be pretty explicit that his position was about happiness rather than usability? If Kelsey _thought_ she agreed with Scott, but actually didn't, that sham consensus was a bad sign for our collective sanity, wasn't it?
 As for the parable about orcs, I thought it was significant that Scott chose to tell the story from the standpoint of non-orcs deciding what [verbal behaviors](https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password) to perform while orcs are around, rather than the standpoint of the orcs themselves. For one thing, how do you _know_ that serving evil-Melkor is a life of constant torture? Is it at all possible, in the bowels of Christ, that someone has given you misleading information about that?

@@ -604,7 +604,7 @@ I guess in retrospect, the outcome does seem kind of obvious—that it should ha
 But it's only "obvious" if you take as a given that Yudkowsky is playing a savvy Kolmogorov complicity strategy like any other public intellectual in the current year.

-Maybe this seems banal if you haven't spent your entire adult life in his robot cult. From anyone else in the world, I wouldn't have had a problem with the "hill of validity in defense of meaning" thread—I would have respected it as a solidly above-average philosophy performance before [setting the bozo bit](https://en.wikipedia.org/wiki/Bozo_bit#Dismissing_a_person_as_not_worth_listening_to) on the author and getting on with my day. But since I _did_ spend my entire adult life in Yudkowsky's robot cult, trusting him the way a Catholic trusts the Pope, I _had_ to assume that it was an "honest mistake" in his rationality lessons, and that honest mistakes could be honestly corrected if someone put in the effort to explain the problem. The idea that Eliezer Yudkowsky was going to behave just as badly as any other public intellectual in the current year, was not really in my hypothesis space.
+Maybe this seems banal if you haven't spent your entire adult life in his robot cult. From anyone else in the world, I wouldn't have had a problem with the "hill of validity in defense of meaning" thread—I would have respected it as a solidly above-average philosophy performance before [setting the bozo bit](https://en.wikipedia.org/wiki/Bozo_bit#Dismissing_a_person_as_not_worth_listening_to) on the author and getting on with my day. But since I _did_ spend my entire adult life in Yudkowsky's robot cult, trusting him the way a Catholic trusts the Pope, I _had_ to assume that it was an "honest mistake" in his rationality lessons, and that honest mistakes could be honestly corrected if someone put in the effort to explain the problem. The idea that Eliezer Yudkowsky was going to behave just as badly as any other public intellectual in the current year was not really in my hypothesis space.

 Ben shared the account of our posse's email campaign with someone who commented that I had "sacrificed all hope of success in favor of maintaining his own sanity by CC'ing you guys." That is, if I had been brave enough to confront Yudkowsky by myself, maybe there was some hope of him seeing that the game he was playing was wrong. But because I was so cowardly as to need social proof (because I believed that an ordinary programmer such as me was as a mere worm in the presence of the great Eliezer Yudkowsky), it must have just looked to him like an illegible social plot originating from Michael.
diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md
index 159288f..1bdb8f8 100644
--- a/notes/memoir-sections.md
+++ b/notes/memoir-sections.md
@@ -1,14 +1,12 @@
-_ scan through pt. 1½ and extract highlights to include
-
 pt. 2 near editing tier—
+_ alphabet bump
+_ Ziz and Gwen archive
+_ Ziz chocolate anecdote
 _ explain Michael's gaslighting charge, using the "bowels of Christ" language
 _ the function of privacy norms is to protect you from people who want to selectively reveal information to hurt you, so it makes sense that I'm particularly careful about Yudkowsky's privacy and not Scott's, because I totally am trying to hurt Yudkowsky (this also protects me from the charge that by granting more privacy to Yudkowsky than Scott, I'm implying that Yudkowsky said something more incriminating; the difference in treatment is about _me_ and my expectations, rather than what they may or may not have said when I tried emailing them); I want it to be clear that I'm attacking him but not betraying him
 _ mention my "trembling hand" history with secrets, not just that I don't like it
 _ Eric Weinstein, who was not making this mistake
-_ "A Hill": I claim that I'm not doing much psychologizing because implausible to be simultaenously savvy enough to say this, and naive enough to not be doing so knowingly
-
-to send (Discord) only needed for pt. 2—
-_ Alicorn
+_ I claim that I'm not doing much psychologizing because implausible to be simultaneously savvy enough to say this, and naive enough to not be doing so knowingly

 ------

@@ -2670,3 +2668,5 @@ If someone ran over a pedestrian in their car, at the trial you would actually a
 When I mentioned re-reading Moldbug on "ignoble privilege", "Thomas" said that it was a reason not to feel the need to seek the approval of women, who had not been ennobled by living in an astroturfed world where the traditional (_i.e._, evolutionarily stable) strategies of relating had been relabeled as oppression. The chip-on-her-shoulder effect was amplified in androgynous women. (Unfortunately, the sort of women I particularly liked.) He advised me that if I did find an androgynous woman I was into, I shouldn't treat her as a moral authority. Doing what most sensitive men thought of as equality degenerated into female moral superiority, which wrecks the relationship in a feedback loop of testing and resentment. (Women want to win arguments in the moment, but don't actually want to lead the relationship.) Thus, a strange conclusion: to have an egalitarian heterosexual relationship, the man needs to lead the relationship _into_ equality; a dab of patriarchy works better than none.
+
+https://www.spectator.co.uk/article/should-we-fear-ai-james-w-phillips-and-eliezer-yudkowsky-in-conversation/
diff --git a/notes/notes.txt b/notes/notes.txt
index 81b6944..07bd15b 100644
--- a/notes/notes.txt
+++ b/notes/notes.txt
@@ -3305,3 +3305,6 @@ https://link.springer.com/article/10.1007/s10508-022-02482-6
 I agree that people who have prior reasons to be skeptical of survey evidence should be upfront about those reasons, rather than opportunistically objecting to particular questions when that's not their [true rejection](https://www.lesswrong.com/posts/TGux5Fhcd7GmTfNGC/is-that-your-true-rejection).

 https://unherd.com/thepost/calls-for-violence-in-the-trans-debate-only-come-from-one-side/
+
+Blanchard: "What I think is that people are born with predispositions or vulnerabilities to a kind of erotic miss-learning" quoted in https://since2010.substack.com/p/is-autogynephilia-innate
+
-- 
2.17.1