From 7a9fb12b5c095640b4992b7953e021e29286520a Mon Sep 17 00:00:00 2001 From: "Zack M. Davis" Date: Fri, 24 Nov 2023 20:08:04 -0800 Subject: [PATCH] =?utf8?q?memoir:=20pt.=203=E2=80=934=20edits?= MIME-Version: 1.0 Content-Type: text/plain; charset=utf8 Content-Transfer-Encoding: 8bit --- ...xhibit-generally-rationalist-principles.md | 50 ++++++++++------- .../if-clarity-seems-like-death-to-them.md | 14 ++--- notes/memoir-sections.md | 54 +++++++++++++------ notes/memoir_wordcounts.csv | 3 +- notes/tweet_pad.txt | 6 +-- 5 files changed, 80 insertions(+), 47 deletions(-) diff --git a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md index 06d626f..eb78760 100644 --- a/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md +++ b/content/drafts/agreeing-with-stalin-in-ways-that-exhibit-generally-rationalist-principles.md @@ -31,9 +31,9 @@ But the sense in which Alexander "aligned himself with Murray" in ["Three Great It _is_ a weirdly brazen invalid _inference_. But by calling it a "falsehood", Alexander heavily implies this means he disagrees with Murray's offensive views on race: in invalidating the _Times_'s charge of guilt-by-association with Murray, Alexander validates Murray's guilt. -But anyone who's read _and understood_ Alexander's work should be able to infer that Scott probably finds it plausible that there exist genetically-mediated ancestry-group differences in socially-relevant traits (as a value-free matter of empirical science with no particular normative implications): for example, his [review of Judith Rich Harris](https://archive.ph/Zy3EL) indicates that he accepts the evidence from [twin studies](/2020/Apr/book-review-human-diversity/#twin-studies) for individual behavioral differences having a large genetic component, and section III. 
of his ["The Atomic Bomb Considered As Hungarian High School Science Fair Project"](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/) indicates that he accepts genetics as an explantion for group differences in the particular case of cognitive ability in Ashkenazi Jews.[^murray-alignment] +But anyone who's read _and understood_ Alexander's work should be able to infer that Scott probably finds it plausible that there exist genetically-mediated ancestry-group differences in socially-relevant traits (as a value-free matter of empirical science with no particular normative implications): for example, his [review of Judith Rich Harris](https://archive.ph/Zy3EL) indicates that he accepts the evidence from [twin studies](/2020/Apr/book-review-human-diversity/#twin-studies) for individual behavioral differences having a large genetic component, and section III. of his ["The Atomic Bomb Considered As Hungarian High School Science Fair Project"](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/) indicates that he accepts genetics as an explanation for group differences in the particular case of Ashkenazi Jewish intelligence.[^murray-alignment] -[^murray-alignment]: And as far as aligning himself with Murray more generally, it's notable that Alexander had tapped Murray for Welfare Czar in [a hypothetical "If I were president" Tumblr post](https://archive.vn/xu7PX). +[^murray-alignment]: As far as aligning himself with Murray more generally, it's notable that Alexander had tapped Murray for Welfare Czar in [a hypothetical "If I were president" Tumblr post](https://archive.vn/xu7PX). 
There are a lot of standard caveats that go here which Scott would no doubt scrupulously address if he ever chose to tackle the subject of genetically-mediated group differences in general: [the mere existence of a group difference in a "heritable" trait doesn't itself imply a genetic cause of the group difference (because the groups' environments could also be different)](/2020/Apr/book-review-human-diversity/#heritability-caveats). It is without a doubt entirely conceivable that the Ashkenazi IQ advantage is real and genetic, but black–white IQ gap is fake and environmental.[^bet] Moreover, group averages are just that—averages. They don't imply anything about individuals and don't justify discrimination against individuals. @@ -49,7 +49,7 @@ It's because one of the things I noticed while trying to make sense of why my en Because of the particular historical moment in which we live, we end up facing pressure from progressives, because—whatever our object-level beliefs about (say) [sex, race, and class differences](/2020/Apr/book-review-human-diversity/)—and however much most of us would prefer not to talk about them—on the _meta_ level, our creed requires us to admit it's an empirical question, not a moral one—and that [empirical questions have no privileged reason to admit convenient answers](https://www.lesswrong.com/posts/sYgv4eYH82JEsTD34/beyond-the-reach-of-god). -I view this conflict as entirely incidental, something that [would happen in some form in any place and time](https://www.lesswrong.com/posts/cKrgy7hLdszkse2pq/archimedes-s-chronophone), rather than having to do with American politics or "the left" in particular. 
In a Christian theocracy, our analogues would get in trouble for beliefs about evolution; in the old Soviet Union, our analogues would get in trouble for [thinking about market economics](https://slatestarcodex.com/2014/09/24/book-review-red-plenty/) (as a [positive technical discipline](https://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics#Proof_of_the_first_fundamental_theorem) adjacent to game theory, not yoked to a particular normative agenda).[^logical-induction] +I view this conflict as entirely incidental, something that [would happen in some form in any place and time](https://www.lesswrong.com/posts/cKrgy7hLdszkse2pq/archimedes-s-chronophone), rather than having to do with American politics or "the left" in particular. In a Christian theocracy, our analogues would get in trouble for beliefs about evolution; in the old Soviet Union, our analogues would get in trouble for [thinking about market economics](https://slatestarcodex.com/2014/09/24/book-review-red-plenty/) (as a positive [technical](https://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics#Proof_of_the_first_fundamental_theorem) [discipline](https://www.lesswrong.com/posts/Gk8Dvynrr9FWBztD4/what-s-a-market) adjacent to game theory, not yoked to a particular normative agenda).[^logical-induction] [^logical-induction]: I wonder how hard it would have been to come up with MIRI's [logical induction result](https://arxiv.org/abs/1609.03543) (which describes an asymptotic algorithm for estimating the probabilities of mathematical truths in terms of a betting market composed of increasingly complex traders) in the Soviet Union. 
@@ -57,7 +57,7 @@ Incidental or not, the conflict is real, and everyone smart knows it—even if i So the _New York Times_ implicitly accuses us of being racists, like Charles Murray, and instead of pointing out that being a racist _like Charles Murray_ is the obviously correct position that sensible people will tend to reach in the course of being sensible, we disingenuously deny everything.[^deny-everything] -[^deny-everything]: In January 2023, when Nick Bostrom [preemptively apologized for a 26-year-old email to the Extropians mailing list](https://nickbostrom.com/oldemail.pdf) that referenced the IQ gap and mentioned a slur, he had [some](https://forum.effectivealtruism.org/posts/Riqg9zDhnsxnFrdXH/nick-bostrom-should-step-down-as-director-of-fhi) [detractors](https://forum.effectivealtruism.org/posts/8zLwD862MRGZTzs8k/a-personal-response-to-nick-bostrom-s-apology-for-an-old) and a [few](https://forum.effectivealtruism.org/posts/Riqg9zDhnsxnFrdXH/nick-bostrom-should-step-down-as-director-of-fhi?commentId=h9gdA4snagQf7bPDv) [defenders](https://forum.effectivealtruism.org/posts/NniTsDNQQo58hnxkr/my-thoughts-on-bostrom-s-apology-for-an-old-email), but I don't recall seeing anyone defending the 1997 email itself. 
+[^deny-everything]: In January 2023, when Nick Bostrom [preemptively apologized for a 26-year-old email to the Extropians mailing list](https://nickbostrom.com/oldemail.pdf) that referenced the IQ gap and mentioned a slur, he had [some](https://forum.effectivealtruism.org/posts/Riqg9zDhnsxnFrdXH/nick-bostrom-should-step-down-as-director-of-fhi) [detractors](https://forum.effectivealtruism.org/posts/8zLwD862MRGZTzs8k/a-personal-response-to-nick-bostrom-s-apology-for-an-old) and a [few](https://ea.greaterwrong.com/posts/Riqg9zDhnsxnFrdXH/nick-bostrom-should-step-down-as-director-of-fhi/comment/h9gdA4snagQf7bPDv) [defenders](https://forum.effectivealtruism.org/posts/NniTsDNQQo58hnxkr/my-thoughts-on-bostrom-s-apology-for-an-old-email), but I don't recall seeing anyone defending the 1996 email itself. But if you're [familiar with the literature](/2020/Apr/book-review-human-diversity/#the-reason-everyone-and-her-dog-is-still-mad) and understand the [use–mention distinction](https://en.wikipedia.org/wiki/Use%E2%80%93mention_distinction), the literal claims in [the original email](https://nickbostrom.com/oldemail.pdf) are entirely reasonable. (There are additional things one could say about [what pro-social functions are being served by](/2020/Apr/book-review-human-diversity/#schelling-point-for-preventing-group-conflicts) the taboos against what the younger Bostrom called "the provocativeness of unabashed objectivity", which would make for fine mailing-list replies, but the original email can't be abhorrent simply for failing to anticipate all possible counterarguments.) 
@@ -69,9 +69,9 @@ As it happens, in our world, the defensive coverup consists of _throwing me unde [^survey-data]: We go from 89.2% male in the [2011 _Less Wrong_ survey](https://www.lesswrong.com/posts/HAEPbGaMygJq8L59k/2011-survey-results) to a virtually unchanged 88.7% male on the [2020 _Slate Star Codex_ survey](https://slatestarcodex.com/2020/01/20/ssc-survey-results-2020/)—although the [2020 EA survey](https://forum.effectivealtruism.org/posts/ThdR8FzcfA8wckTJi/ea-survey-2020-demographics) says only 71% male, so it depends on how you draw the category boundaries of "we." -But _trans!_ We have plenty of those! In [the same blog post in which Scott Alexander characterized rationalism as the belief that Eliezer Yudkowsky is the rightful caliph](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/), he also named "don’t misgender trans people" as one of the group's distinguishing norms. Two years later, he joked that ["We are solving the gender ratio issue one transition at a time"](https://slatestarscratchpad.tumblr.com/post/142995164286/i-was-at-a-slate-star-codex-meetup). +But _trans!_ We have plenty of those! In [the same blog post in which Scott Alexander characterized rationalism as the belief that Eliezer Yudkowsky is the rightful caliph](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/), he also named "don't misgender trans people" as one of the group's distinguishing norms. Two years later, he joked that ["We are solving the gender ratio issue one transition at a time"](https://slatestarscratchpad.tumblr.com/post/142995164286/i-was-at-a-slate-star-codex-meetup). -Having plenty of trans people means having plenty of high-ranking [progressive stack](https://en.wikipedia.org/wiki/Progressive_stack) members to trot out as a shield to prove that we're not counter-revolutionary right-wing Bad Guys. 
Thus, [Jacob Falkovich noted](https://twitter.com/yashkaf/status/1275524303430262790) (on 23 June 2020, just after _Slate Star Codex_ went down), "The two demographics most over-represented in the SlateStarCodex readership according to the surveys are transgender people and Ph.D. holders", and Scott Aaronson [noted (in commentary on the February 2021 _Times_ article) that](https://www.scottaaronson.com/blog/?p=5310) "the rationalist community's legendary openness to alternative gender identities and sexualities" should have "complicated the picture" of our portrayal as anti-feminist. +The benefit of having plenty of trans people is that high-ranking members of the [progressive stack](https://en.wikipedia.org/wiki/Progressive_stack) can be trotted out as a shield to prove that we're not counter-revolutionary right-wing Bad Guys. Thus, [Jacob Falkovich noted](https://twitter.com/yashkaf/status/1275524303430262790) (on 23 June 2020, just after _Slate Star Codex_ went down), "The two demographics most over-represented in the SlateStarCodex readership according to the surveys are transgender people and Ph.D. holders", and Scott Aaronson [noted (in commentary on the February 2021 _Times_ article) that](https://www.scottaaronson.com/blog/?p=5310) "the rationalist community's legendary openness to alternative gender identities and sexualities" should have "complicated the picture" of our portrayal as anti-feminist. Even the haters grudgingly give Alexander credit for ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/): ["I strongly disagree that one good article about accepting transness means you get to walk away from writing that is somewhat white supremacist and quite fascist without at least awknowledging you were wrong"](https://archive.is/SlJo1), wrote one. But this being the case, _I have no reason to participate in the cover-up_. 
What On 17 February 2021, Topher Brennan [claimed that](https://web.archive.org/web/20210217195335/https://twitter.com/tophertbrennan/status/1362108632070905857) Scott Alexander "isn't being honest about his history with the far-right", and published [an email he had received from Scott in February 2014](https://emilkirkegaard.dk/en/2021/02/backstabber-brennan-knifes-scott-alexander-with-2014-email/), on what Scott thought some neoreactionaries were getting importantly right. -I think that to people who have read _and understood_ Alexander's work, there is nothing surprising or scandalous about the contents of the email. In the email, he said that biologically-mediated group differences are probably real and that neoreactionaries were the only people discussing the object-level hypotheses or the meta-level question of why our Society's collective epistemology is obfuscating the matter. He said that reactionaries as a whole generate a lot of garbage but that he trusted himself to sift through the noise and extract the novel insights. (In contrast, [RationalWiki](https://rationalwiki.org/wiki/Main_Page) didn't generate garbage, but by hewing so closely to the mainstream, it also didn't say much that Alexander didn't already know.) The email contains some details that Alexander hadn't already blogged about—most notably the section headed "My behavior is the most appropriate response to these facts", explaining his social strategizing _vis á vis_ the neoreactionaries and his own popularity—but again, none of it is really surprising if you know Scott from his writing. +I think that to people who have read _and understood_ Alexander's work, there is nothing surprising or scandalous about the contents of the email. In the email, he said that biologically-mediated group differences are probably real and that neoreactionaries were the only people discussing the object-level hypotheses or the meta-level question of why our Society's intelligentsia is obfuscating the matter. 
He said that reactionaries as a whole generate a lot of garbage but that he trusted himself to sift through the noise and extract the novel insights. (In contrast, [RationalWiki](https://rationalwiki.org/wiki/Main_Page) didn't generate garbage, but by hewing so closely to the mainstream, it also didn't say much that Alexander didn't already know.) The email contains some details that Alexander hadn't already blogged about—most notably the section headed "My behavior is the most appropriate response to these facts", explaining his social strategizing _vis-à-vis_ the neoreactionaries and his own popularity—but again, none of it is surprising if you know Scott from his writing. I think the main reason someone _would_ consider the email a scandalous revelation is if they hadn't read _Slate Star Codex_ that deeply—if their picture of Scott Alexander as a political writer was, "that guy who's so committed to charitable discourse that he [wrote up an explanation of what _reactionaries_ (of all people) believe](https://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/)—and then, of course, [turned around and wrote up the definitive explanation of why they're totally wrong and you shouldn't pay them any attention](https://slatestarcodex.com/2013/10/20/the-anti-reactionary-faq/)." As a first approximation, it's not a terrible picture. But what it misses—what _Scott_ knows—is that charity isn't about putting on a show of superficially respecting your ideological opponent, before concluding (of course) that they were wrong and you were right all along in every detail. Charity is about seeing what the other guy is getting _right_. 
@@ -137,11 +137,11 @@ Yudkowsky begins by setting the context of "[h]aving received a bit of private p But the reason he got a bit [("a bit")](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/) of private pushback was _because_ the original "hill of meaning" thread was so blatantly optimized to intimidate and delegitimize people who want to use language to reason about biological sex. The pushback wasn't about using trans people's preferred pronouns (I do that, too), or about not wanting pronouns to imply sex (sounds fine, if we were in the position of defining a conlang from scratch); the problem is using an argument that's ostensibly about pronouns to sneak in an implicature (["Who competes in sports segregated around an Aristotelian binary is a policy question [ ] that I personally find very humorous"](https://twitter.com/ESYudkowsky/status/1067490362225156096)) that it's dumb and wrong to want to talk about the sense in which trans women are male and trans men are female, as a fact about reality that continues to be true even if it hurts someone's feelings, and even if policy decisions made on the basis of that fact are not themselves facts (as if anyone had doubted this). -In that context, it's revealing that in this February 2021 post attempting to explain why the November 2018 thread seemed like a reasonable thing to say, Yudkowsky doubles down on going out of his way to avoid acknowledging the reality of biological of sex. He learned nothing! We're told that the default pronoun for those who haven't asked goes by "gamete size." +In that context, it's revealing that in this February 2021 post attempting to explain why the November 2018 thread seemed like a reasonable thing to say, Yudkowsky doubles down on going out of his way to avoid acknowledging the reality of biological sex. He learned nothing! 
We're told that the default pronoun for those who haven't asked goes by "gamete size", on the grounds that it is "logically rude to demand that other people use only your language system and interpretation convention in order to communicate, in advance of them having agreed with you about the clustering thing." -But I've never measured how big someone's gametes are, have you? We only infer whether strangers' bodies are configured to produce small or large gametes by observing [a variety of correlated characteristics](https://en.wikipedia.org/wiki/Secondary_sex_characteristic). Furthermore, for trans people who don't pass but are visibly trying to, one presumes that we're supposed to use the pronouns corresponding to their gender presentation, not their natal sex. +But I've never measured how big someone's gametes are, have you? We only infer whether strangers' bodies are configured to produce small or large gametes by observing [a variety of correlated characteristics](https://en.wikipedia.org/wiki/Secondary_sex_characteristic). Thus, the complaint that sex-based pronoun conventions rudely demand that people "agree[ ] [...] about the clustering thing" is hypocritical, because Yudkowsky's proposal also expects people to agree about the clustering thing. Furthermore, for trans people who don't pass but are visibly trying to (without having explicitly asked for pronouns), one presumes that we're supposed to use the pronouns corresponding to their gender presentation, not their natal sex? -Thus, Yudkowsky's "default for those-who-haven't-asked that goes by gamete size" clause can't be taken literally. The only way I can make sense of it is to interpret it as a flailing attempt to gesture at the prevailing reality that people are good at noticing what sex other people are, but that we want to be kind to people who are trying to appear to be the other sex, without having to admit that that's what's going on. 
+Thus, Yudkowsky's "default for those-who-haven't-asked that goes by gamete size" proposal can't be taken literally. The only way I can make sense of it is to interpret it as a flailing attempt to gesture at the prevailing reality that people are good at noticing what sex other people are, but that we want to be kind to people who are trying to appear to be the other sex, without having to admit that that's what's going on. One could argue that this is hostile nitpicking on my part: that the use of "gamete size" as a metonym for sex here is either an attempt to provide an unambiguous definition (because if you said _sex_, _female_, or _male_, someone could ask what you meant by that), or that it's at worst a clunky choice of words, not an intellectually substantive decision that can be usefully critiqued. @@ -207,7 +207,7 @@ This is a rationality skill. Alleva had a theory about herself, and then she rev This also isn't a particularly advanced rationality skill. This is very basic—something novices should grasp during their early steps along the Way. -Back in 2009, in the early days of _Less Wrong_, when I hadn't yet grown out of [my teenage ideological fever dream of psychological sex differences denialism](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism), there was a poignant exchange in the comment section between me and Yudkowsky. Yudkowsky had claimed that he had ["never known a man with a true female side, and [...] 
never known a woman with a true male side, either as authors or in real life."](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/K8YXbJEhyDwSusoY2) Offended at our leader's sexism, I [passive-aggressively asked him to elaborate](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way?commentId=AEZaakdcqySmKMJYj), and as part of [his response](https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/W4TAp4LuW3Ev6QWSF), he mentioned that he "sometimes wish[ed] that certain women would appreciate that being a man is at least as complicated and hard to grasp and a lifetime's work to integrate, as the corresponding fact of feminity [_sic_]." +Back in 2009, in the early days of _Less Wrong_, when I hadn't yet grown out of [my teenage ideological fever dream of psychological sex differences denialism](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#antisexism), there was a poignant exchange in the comment section between me and Yudkowsky. Yudkowsky had claimed that he had ["never known a man with a true female side, and [...] never known a woman with a true male side, either as authors or in real life."](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/K8YXbJEhyDwSusoY2) Offended at our leader's sexism, I [passive-aggressively asked him to elaborate](https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/AEZaakdcqySmKMJYj), and as part of [his response](https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/W4TAp4LuW3Ev6QWSF), he mentioned that he "sometimes wish[ed] that certain women would appreciate that being a man is at least as complicated and hard to grasp and a lifetime's work to integrate, as the corresponding fact of feminity [_sic_]." 
[I replied](https://www.lesswrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/7ZwECTPFTLBpytj7b) (bolding added): @@ -219,7 +219,7 @@ It would seem that in the current year, that culture is dead—or at least, if i At this point, some readers might protest that I'm being too uncharitable in harping on the "not liking to be tossed into a [...] Bucket" paragraph. The same post also explicitly says that "[i]t's not that no truth-bearing propositions about these issues can possibly exist." I agree that there are some interpretations of "not lik[ing] to be tossed into a Male Bucket or Female Bucket" that make sense, even though biological sex denialism does not make sense. Given that the author is Eliezer Yudkowsky, should I not give him the benefit of the doubt and assume that he "really meant" to communicate the reading that does make sense, rather than the reading that doesn't make sense? -I reply: _given that the author is Eliezer Yudkowsky_, no, obviously not. I have been ["trained in a theory of social deception that says that people can arrange reasons, excuses, for anything"](https://www.glowfic.com/replies/1820866#reply-1820866), such that it's informative ["to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited."](http://www.hpmor.com/chapter/47) Yudkowsky is just too talented of a writer for me to excuse his words as an artifact of accidentally unclear writing. Where the text is ambiguous about whether biological sex is a real thing that people should be able to talk about at the risk of offending someone's "not lik[ing] to be tossed into a Male Bucket or Female Bucket", I think it's _deliberately_ ambiguous. +I reply: _given that the author is Eliezer Yudkowsky_, no, obviously not. 
I have been ["trained in a theory of social deception that says that people can arrange reasons, excuses, for anything"](https://www.glowfic.com/replies/1820866#reply-1820866), such that it's informative ["to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited."](http://www.hpmor.com/chapter/47) Yudkowsky is just too talented of a writer for me to excuse his words as an artifact of accidentally unclear writing. Where the text is ambiguous about whether biological sex is a real thing that people should be able to talk about at the risk of offending someone's "not lik[ing] to be tossed into a Male Bucket or Female Bucket", I think it's deliberately ambiguous. When smart people act dumb, it's often wise to conjecture that their behavior represents [_optimized_ stupidity](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie)—apparent "stupidity" that achieves a goal through some channel other than their words straightforwardly reflecting reality. Someone who was actually stupid wouldn't be able to generate text so carefully fine-tuned to reach a gender-politically convenient conclusion without explicitly invoking any controversial gender-political reasoning. I think the point of the post is to pander to the biological sex denialists in his robot cult, without technically saying anything unambiguously false that someone could call out as a "lie." 
@@ -464,7 +464,7 @@ There are a number of things that could be said to this,[^number-of-things] but [^number-of-things]: Note the striking contrast between ["A Rational Argument"](https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument), in which the Yudkowsky of 2007 wrote that a campaign manager "crossed the line [between rationality and rationalization] at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it"; and these 2021 Tweets, in which Yudkowsky seems completely nonchalant about "not have been as willing to tweet a truth helping" one side of a cultural dispute, because "this battle just isn't that close to the top of [his] priority list". Well, sure! Any hired campaign manager could say the same: helping the electorate make an optimally informed decision just isn't that close to the top of their priority list, compared to getting paid. - Yudkowsky's claim to have been focused on nudging people's cognition towards sanity seems dubious: if you're focused on sanity, you should be spontaneously noticing sanity errors in both political camps. (Moreover, if you're living in what you yourself describe as a "half-Stalinist environment", you should expect your social environment to make proportionately more errors on the "pro-Stalin" side.) As for the rationale that "those people might matter to AGI someday", judging by local demographics, it seems much more likely to apply to trans women themselves, than their critics! + Yudkowsky's claim to have been focused on nudging people's cognition towards sanity seems dubious: if you're focused on sanity, you should be spontaneously noticing sanity errors in both political camps. (Moreover, if you're living in what you yourself describe as a "half-Stalinist environment", you should expect your social environment to make proportionately more errors on the "pro-Stalin" side.) 
As for the rationale that "those people might matter to AGI someday", [judging by local demographics](/2017/Jan/from-what-ive-tasted-of-desire/), it seems much more likely to apply to trans women themselves, than their critics! The battle that matters—and I've been very explicit about this, for years—is over this proposition eloquently [stated by Scott Alexander in November 2014](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (redacting the irrelevant object-level example): @@ -474,13 +474,15 @@ This is a battle between Feelings and Truth, between Politics and Truth. In order to take the side of Truth, you need to be able to [tell Joshua Norton that he's not actually Emperor of the United States (even if it hurts him)](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/#emperor-norton). -You need to be able to tell a prideful autodidact that the fact that he's failing quizzes in community college differential equations class, is evidence that his study methods aren't doing what he thought they were (even if it hurts him). +You need to be able to tell a prideful autodidact that the fact that he's failing quizzes in community college differential equations class is evidence that his study methods aren't doing what he thought they were (even if it hurts him). And you need to be able to say, in public, that trans women are male and trans men are female with respect to a concept of binary sex that encompasses the many traits that aren't affected by contemporary surgical and hormonal interventions (even if it hurts someone who does not like to be tossed into a Male Bucket or a Female Bucket as it would be assigned by their birth certificate, and—yes—even if it probabilistically contributes to that person's suicide). If you don't want to say those things because hurting people is wrong, then you have chosen Feelings. 
-Scott Alexander chose Feelings, but I can't really hold that against him, because Scott is [very explicit about only speaking in the capacity of some guy with a blog](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/). You can tell from his writings that he never wanted to be a religious leader; it just happened to him on accident because he writes faster than everyone else. I like Scott. Scott is alright. I feel sad that such a large fraction of my interactions with him over the years have taken such an adversarial tone. +Scott Alexander chose Feelings, but I can't really hold that against him, because Scott is [very explicit about only speaking in the capacity of some guy with a blog](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/).[^hexaco] You can tell from his writings that he never wanted to be a religious leader; it just happened to him by accident because he writes faster than everyone else. I like Scott. Scott is alright. I feel sad that such a large fraction of my interactions with him over the years have taken such an adversarial tone. + +[^hexaco]: The authors of the [HEXACO personality model](https://en.wikipedia.org/wiki/HEXACO_model_of_personality_structure) may have gotten something importantly right in [grouping "honesty" and "humility" as a single factor](https://en.wikipedia.org/wiki/Honesty-humility_factor_of_the_HEXACO_model_of_personality). Eliezer Yudkowsky did not _unambiguously_ choose Feelings. He's been very careful with his words to strategically mood-affiliate with the side of Feelings, without consciously saying anything that he consciously knows to be unambiguously false. And the reason I can hold it against _him_ is because Eliezer Yudkowsky does not identify as just some guy with a blog. Eliezer Yudkowsky is _absolutely_ trying to be a religious leader. 
He markets himself as a master of the hidden Bayesian structure of cognition, who ["aspires to make sure [his] departures from perfection aren't noticeable to others"](https://twitter.com/ESYudkowsky/status/1384671335146692608). @@ -488,12 +490,24 @@ In making such boasts, I think Yudkowsky is opting in to being held to higher st If Eliezer Yudkowsky gets something wrong when I was trusting him to be right, and refuses to acknowledge corrections (in the absence of an unsustainable 21-month nagging campaign) and keeps inventing new galaxy-brained ways to be wrong in the service of his political agenda of being seen to agree with Stalin without technically lying, then I think I _am_ the victim of false advertising. His marketing bluster was optimized to trick people like me into trusting him, even if my being dumb enough to believe him is on me. -Because, I did, actually, trust him. Back in 'aught-nine when _Less Wrong_ was new, we had a thread of hyperbolic ["Eliezer Yudkowsky Facts"](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) (in the style of [Chuck Norris facts](https://en.wikipedia.org/wiki/Chuck_Norris_facts)). And of course, it was a joke, but the hero-worship that make the joke funny was real. (You wouldn't make those jokes for your community college physics teacher, even if he was a good teacher.) +Because, I did, actually, trust him. Back in 2009 when _Less Wrong_ was new, we had a thread of hyperbolic ["Eliezer Yudkowsky Facts"](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) (in the style of [Chuck Norris facts](https://en.wikipedia.org/wiki/Chuck_Norris_facts)). And of course, it was a joke, but the hero-worship that made the joke funny was real. (You wouldn't make those jokes for your community college physics teacher, even if he was a good teacher.)
-["Never go in against Eliezer Yudkowsky when anything is on the line"](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts?commentId=Aq9eWJmK6Liivn8ND), said one of the facts—and back then, I didn't think I would _need_ to. +["Never go in against Eliezer Yudkowsky when anything is on the line"](https://www.greaterwrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts/comment/Aq9eWJmK6Liivn8ND), said one of the facts—and back then, I didn't think I would _need_ to. [Yudkowsky writes](https://twitter.com/ESYudkowsky/status/1096769579362115584): > When an epistemic hero seems to believe something crazy, you are often better off questioning "seems to believe" before questioning "crazy", and both should be questioned before shaking your head sadly about the mortal frailty of your heroes. -I notice that this advice leaves out a possibility: that the "seems to believe" is a deliberate show (judged to be personally prudent and not community-harmful), rather than a misperception on your part. I am left in a [weighted average of](https://www.lesswrong.com/posts/y4bkJTtG3s5d6v36k/stupidity-and-dishonesty-explain-each-other-away) shaking my head sadly about the mortal frailty of my former hero, and shaking my head in disgust at his craven duplicity. If Eliezer Yudkowsky can't _unambigously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_. +I notice that this advice leaves out a possibility: that the "seems to believe" is a deliberate show (judged to be personally prudent and not community-harmful), rather than a misperception on your part. I am left shaking my head in a [weighted average of](https://www.lesswrong.com/posts/y4bkJTtG3s5d6v36k/stupidity-and-dishonesty-explain-each-other-away) sadness about the mortal frailty of my former hero, and disgust at his craven duplicity. **If Eliezer Yudkowsky can't _unambigously_ choose Truth over Feelings, _then Eliezer Yudkowsky is a fraud_.** + +A few clarifications are in order here. 
First, as with "bad faith", this usage of "fraud" isn't a meaningless [boo light](https://www.lesswrong.com/posts/dLbkrPu5STNCBLRjr/applause-lights). I specifically and literally mean it in [_Merriam-Webster_'s sense 2.a., "a person who is not what he or she pretends to be"](https://www.merriam-webster.com/dictionary/fraud)—and I think I've made my case. Someone who disagrees with my assessment needs to argue that I've gotten some specific thing wrong, [rather than objecting on procedural grounds](https://www.lesswrong.com/posts/pkaagE6LAsGummWNv/contra-yudkowsky-on-epistemic-conduct-for-author-criticism). + +Second, it's a conditional: _if_ Yudkowsky can't unambiguously choose Truth over Feelings, _then_ he's a fraud. + +[TODO: explain how he could come clean without making it sound like a negotiation, and why it shouldn't be a negotiation—if he had the courage to be loud and clear that people who are unhappy about what color their hair is _as a fact_, even if it hurts their feelings] + +He probably won't. (We've already seen from his behavior that he doesn't give a shit about people like me respecting his intellectual integrity, and it's not clear why me telling this Whole Dumb Story would change his mind about that.) + +Third, given that "fraud" is a literal description and not just an evaluative boo light, [TODO: bridge] + +If being a fraud were instrumentally useful for saving the world, maybe being a fraud would be the right thing to do? More on this in the next post. (To be continued.) diff --git a/content/drafts/if-clarity-seems-like-death-to-them.md b/content/drafts/if-clarity-seems-like-death-to-them.md index 4e3ad74..b042649 100644 --- a/content/drafts/if-clarity-seems-like-death-to-them.md +++ b/content/drafts/if-clarity-seems-like-death-to-them.md @@ -2,7 +2,7 @@ Title: If Clarity Seems Like Death to Them Author: Zack M. 
Davis Date: 2023-07-01 11:00 Category: commentary -Tags: autogynephilia, bullet-biting, cathartic, Eliezer Yudkowsky, Scott Alexander, epistemic horror, my robot cult, personal, sex differences, two-type taxonomy, whale metaphors +Tags: bullet-biting, cathartic, categorization, Eliezer Yudkowsky, Scott Alexander, epistemic horror, my robot cult, personal, sex differences, two-type taxonomy, whale metaphors Status: draft > "—but if one hundred thousand [normies] can turn up, to show their support for the [rationalist] community, why can't you?" @@ -129,13 +129,13 @@ Such a trainwreck ensued that the mods manually [moved the comments to their own On 31 May 2019, a [draft of a new _Less Wrong_ FAQ](https://www.lesswrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for) included a link to ["The Categories Were Made for Man, Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) as one of Scott Alexander's best essays. I argued that it would be better to cite almost literally any other _Slate Star Codex_ post (most of which, I agreed, were exemplary). I claimed that the following disjunction was true: either Alexander's claim that "There's no rule of rationality saying that [one] shouldn't" "accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it'll save someone's life" was a blatant lie, or I could call it a blatant lie because no rule of rationality says I shouldn't draw the category boundaries of "blatant lie" that way. Ruby Bloom, the new moderator who wrote the draft, [was persuaded](https://www.greaterwrong.com/posts/MqrzczdGhQCRePgqN/feedback-requested-draft-of-a-new-about-welcome-page-for/comment/oBDjhXgY5XtugvtLT), and "... Not Man for the Categories" was not included in the final FAQ. Another "victory." 
-But "victories" weren't particularly comforting when I resented this becoming a political slapfight at all. I wrote to Anna and Steven Kaas (another old-timer who I was trying to "recruit" to my side of the civil war). In ["What You Can't Say"](http://www.paulgraham.com/say.html), Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what your real work s. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except in this one meta-level essay) was probably the right choice. But someone whose goal is to improve our collective ability to reason should probably be doing more fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improve our collective ability to reason" in a way that they don't hurt the mission of "make a lot of money writing software." +But "victories" weren't particularly comforting when I resented this becoming a political slapfight at all. I wrote to Anna and Steven Kaas (another old-timer who I was trying to "recruit" to my side of the civil war). In ["What You Can't Say"](http://www.paulgraham.com/say.html), Paul Graham had written, "The problem is, there are so many things you can't say. If you said them all you'd have no time left for your real work." But surely that depends on what your real work is. For someone like Paul Graham, whose goal was to make a lot of money writing software, "Don't say it" (except in this one meta-level essay) was probably the right choice. 
But someone whose goal is to improve our collective ability to reason should probably be doing more fighting than Paul Graham (although still preferably on the meta- rather than object-level), because political restrictions on speech and thought directly hurt the mission of "improve our collective ability to reason" in a way that they don't hurt the mission of "make a lot of money writing software." I said I didn't know if either of them had caught the "Yes Requires the Possibility" trainwreck, but wasn't it terrifying that the person who objected to my innocuous philosophy comment was a goddamned _MIRI research associate_? Not to demonize that commenter, because [I was just as bad (if not worse) in 2008](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#hair-trigger-antisexism). The difference was that in 2008, we had a culture that could beat it out of me. Steven objected that tractability and side effects matter, not just effect on the mission considered in isolation. For example, the Earth's gravitational field directly impedes NASA's mission, and doesn't hurt Paul Graham, but both NASA and Paul Graham should spend the same amount of effort trying to reduce the Earth's gravity (_viz._, zero). -I agreed that tractability needs to be addressed, but the situation felt analogous to being in [a coal mine in which my favorite of our canaries had just died](https://en.wikipedia.org/wiki/Sentinel_species). Caliphate officials (Yudkowsky, Alexander, Anna) and loyalists (Steven) were patronizingly consoling me: sorry, I know you were really attached to that canary, but it's just a bird. It's not critical to the coal-mining mission. I agreed that I was unreasonably attached to that particular bird, but that's not why I expected _them_ to care. The problem was what the dead canary was evidence of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. 
(The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claimed to have spotted their own dead canaries. I felt like the old-timer Rationality Elders should have been able to get on the same page about the canary-count issue? +I agreed that tractability needs to be addressed, but the situation felt analogous to being in [a coal mine in which my favorite of our canaries had just died](https://en.wikipedia.org/wiki/Sentinel_species). Caliphate officials (Yudkowsky, Alexander, Anna) and loyalists (Steven) were patronizingly consoling me: sorry, I know you were really attached to that canary, but it's just a bird; it's not critical to the coal-mining mission. I agreed that I was unreasonably attached to that particular bird, but that's not why I expected _them_ to care. The problem was what the dead canary was evidence of: if you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".) Ben and Michael and Jessica claimed to have spotted their own dead canaries. I felt like the old-timer Rationality Elders should have been able to get on the same page about the canary-count issue? Math and Wellness Month ended up being mostly a failure: the only math I ended up learning was [a fragment of group theory](http://zackmdavis.net/blog/2019/05/group-theory-for-wellness-i/) and [some probability/information theory](http://zackmdavis.net/blog/2019/05/the-typical-set/) that [later turned out to be deeply relevant to understanding sex differences](/2021/May/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems/#typical-point). So much for taking a break. 
@@ -341,9 +341,11 @@ Suppose there are five true heresies, but anyone who's on the record as believin [^implicit-understanding]: As I had [explained to him earlier](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#noncentral-fallacy), Alexander's famous [post on the noncentral fallacy](https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world) condemned the same shenanigans he praised in the context of gender identity: Alexander's examples of the noncentral fallacy had largely been arguable edge-cases of a negative-valence category being inappropriately framed as typical (abortion is murder, taxation is theft), but "trans women are women" was the same thing, but with a positive-valence category. - In ["Does the Glasgow Coma Scale exist? Do comas?"](https://slatestarcodex.com/2014/08/11/does-the-glasgow-coma-scale-exist-do-comas/) (published just three months before "... Not Man for the Categories"), Alexander defends the usefulness of "comas" and "intelligence" in terms of their predictive usefulness. (The post uses the terms "predict", "prediction", "predictive power", _&c._ 16 times.) He doesn't say that the Glasgow Coma Scale is justified because it makes people happy for comas to be defined that way, because that would be absurd. + In ["Does the Glasgow Coma Scale exist? Do Comas?"](https://slatestarcodex.com/2014/08/11/does-the-glasgow-coma-scale-exist-do-comas/) (published just three months before "... Not Man for the Categories"), Alexander defends the usefulness of "comas" and "intelligence" in terms of their predictive usefulness. (The post uses the terms "predict", "prediction", "predictive power", _&c._ 16 times.) He doesn't say that the Glasgow Coma Scale is justified because it makes people happy for comas to be defined that way, because that would be absurd. 
-Alexander (and Yudkowsky and Anna and the rest of the Caliphate) seemed to accept this as an inevitable background fact of existence, like the weather. But I saw a Schelling point off in the distance where us witches stick together for Free Speech, and it was tempting to try to jump there. (It would probably be better if there were a way to organize just the good witches, and exclude all the Actually Bad witches, but the [Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) on witch Badness made that hard to organize without falling back to the one-heresy-per-thinker equilibrium.) +Alexander (and Yudkowsky and Anna and the rest of the Caliphate) seemed to accept this as an inevitable background fact of existence, like the weather. But I saw a Schelling point off in the distance where us witches stick together for Free Speech,[^kolmogorov-common-interests-contrast] and it was tempting to try to jump there. (It would probably be better if there were a way to organize just the good witches, and exclude all the Actually Bad witches, but the [Sorites problem](https://plato.stanford.edu/entries/sorites-paradox/) on witch Badness made that hard to organize without falling back to the one-heresy-per-thinker equilibrium.) + +[^kolmogorov-common-interests-contrast]: The last of the original Sequences had included a post, ["Rationality: Common Interest of Many Causes"](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes), which argued that different projects should not regard themselves "as competing for a limited supply of rationalists with a limited capacity for support; but, rather, creating more rationalists and increasing their capacity for support." It was striking that the "Kolmogorov Option"-era Caliphate took the opposite policy: throwing politically unpopular projects (autogynephilia- or human-biodiversity-realism) under the bus to protect its own status.
Jessica thought my use of "heresy" was conflating factual beliefs with political movements. (There are no intrinsically "right wing" _facts_.) I agreed that conflating political positions with facts would be bad. I wasn't interested in defending the "alt-right" (whatever that means) broadly. But I had learned stuff from reading far-right authors [(most notably Mencius Moldbug)](/2023/Jul/blanchards-dangerous-idea-and-the-plight-of-the-lucid-crossdreamer/#unqualified-reservations) and from talking with "Thomas". I was starting to appreciate [what Michael had said about "Less precise is more violent" back in April](/2023/Jul/a-hill-of-validity-in-defense-of-meaning/#less-precise-is-more-violent) when I was talking about criticizing "rationalists". @@ -433,7 +435,7 @@ I didn't immediately have an answer for Abram, but I was grateful for the engage ------ -Also in November 2019, I wrote to Ben about how I was still stuck on writing the grief-memoir. My plan had been to tell the story of the Category War while Glomarizing about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly call them intellectually dishonest), then share it to _Less Wrong_ and the /r/TheMotte culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water). +Also in November 2019, I wrote to Ben about how I was still stuck on writing the grief-memoir. 
My plan had been to tell the story of the Category War while [Glomarizing](https://en.wikipedia.org/wiki/Glomar_response) about the content of private conversations, then offer Scott and Eliezer pre-publication right of reply (because it's only fair to give your former-hero-current-[frenemies](https://en.wikipedia.org/wiki/Frenemy) warning when you're about to publicly call them intellectually dishonest), then share it to _Less Wrong_ and the [/r/TheMotte](https://www.themotte.org/) culture war thread, and then I would have the emotional closure to move on with my life (learn math, go to gym, chop wood, carry water). The reason it _should_ have been safe to write was because it's good to explain things. It should be possible to say, "This is not a social attack; I'm not saying 'rationalists Bad, Yudkowsky Bad'; I'm just trying to tell the true story about why I've been upset this year, including addressing counterarguments for why some would argue that I shouldn't be upset, why other people could be said to be behaving 'reasonably' given their incentives, why I nevertheless wish they'd be braver and adhere to principle rather than 'reasonably' following incentives, _&c_." diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md index 89391c1..f5d80af 100644 --- a/notes/memoir-sections.md +++ b/notes/memoir-sections.md @@ -11,15 +11,25 @@ _ clear with Steven _ finish and ship "Reply to Scott on Autogenderphilia" _ finish and ship "Hrunkner Unnerby" _ clear with Michael/Ben/Jessica +_ consult Said? _ clear with Alicorn _ clear with Kelsey _ clear with Ray _ clear with Ruby _ SHIP PT. 3!! ------------ -_ address auto edit tier to pt. 4–5 -_ red team pt. 4–5 -_ pro edit pt. 4–5 +- address auto edit tier to pt. 4 +_ solicit red team pt. 4 +_ solicit pro edit pt. 4 +_ apply red team pt. 4 +_ apply pro edit pt. 4 + +_ address auto edit tier to pt. 5 +_ solicit red team pt. 5 +_ solicit pro edit pt. 5 +_ apply red team pt. 5 +_ apply pro edit pt. 
5 + _ consult lc _ psychiatric disaster private doc @@ -36,6 +46,8 @@ pt. 3 edit tier (auto edition)— ✓ do I have a better identifier than "Vassarite" ✓ log2 with an 'o' ✓ yank note about LW comment policy to top-level +✓ "Common Interest of Many Causes" and "Kolmogorov Complicity" offer directly contradictory strategies +_ re-check wording of trans-kids-on-the-margin section _ briefly speculate on causes of brain damage in footnote? ---- _ Ruby fight included ban threat, "forces of blandness want me gone ... stand my ground" remark @@ -60,19 +72,26 @@ pt. 4 edit tier— ✓ "Ideology Is Not the Movement" mentions not misgendering ✓ mention Nick Bostrom email scandal (and his not appearing on the one-sentence CAIS statement) ✓ explain why he could think of some relevant differences -_ body odors comment -_ emphasize that 2018 thread was policing TERF-like pronoun usage, not just disapproving of gender-based pronouns +✓ rephrase "gamete size" discussion to make it clearer that Yudkowsky's proposal also implicitly requires people to agree about the clustering thing +✓ honesty and humility, HEXACO +✓ GreaterWrong over Less Wrong for comment links + +- ending qualifications on "fraud" and whether it might be a good idea + _ if you only say good things about Republican candidates -_ to-be-continued ending about how being a fraud might be a good idea -_ selective argumentation that's clearly labeled as such would be fine -_ cite more sneers; use a footnote to pack in as many as possible + _ Litany Against Gurus, not sure humans can think and trust at the same time; High Status and Stupidity +_ body odors comment _ when EY put a checkmark on my Discord message characterizing his strategy as giving up on intellectual honesty -_ honesty and humility, HEXACO -_ rephrase "gamete size" discussion to make it clearer that Yudkowsky's proposal also implicitly requires people to be agree about the clustering thing -_ "Common Interest of Many Causes" and "Kolmogorov Complicity"
offer directly contradictory strategies + +_ selective argumentation that's clearly labeled as such would be fine +--- +_ emphasize that 2018 thread was policing TERF-like pronoun usage, not just disapproving of gender-based pronouns _ https://cognition.cafe/p/on-lies-and-liars -_ GreaterWrong over Less Wrong for comment links +_ cite more sneers; use a footnote to pack in as many as possible + + + pt. 5 edit tier— _ sucking Scott's dick is helpful because he's now the main gateway instead of HPMOR @@ -116,7 +135,6 @@ _ "Riley" pointing out that it worked better because it was Oli _ mention Michael's influence and South Park recs in late 2016? _ GreaterWrong over Less Wrong for comment links - things to discuss with Michael/Ben/Jessica— _ Anna on Paul Graham _ Yudkowsky thinking reasoning wasn't useful @@ -128,7 +146,7 @@ _ Michael's SLAPP against REACH (new) _ Michael on creepy and crazy men (new) _ elided Sasha disaster (new) _ what should I say to Iceman? -_ "yelling" + pt. 3–5 prereaders— @@ -638,7 +656,7 @@ You want concise? Meghan Murphy got it down to four words, which could have been Men aren't women! Men aren't women tho! -It's only _after_ people pretended to disagree with _that_, that I started recusively digging into the details—why adult human males on hormone replacement therapy still aren't adult human females, why 'adult human male' and 'adult human female' are natural categories that it makes sense for Meghan McCarthy to want short codewords to point to, how words can be used in different ways depending on context (such that we can understand Meghan McCarthy's claim as it was meant, even if the words _woman_ and _man_ can also be used in other senses), what it means for something to be a natural cateogry, what it means to say that _X_'s aren't _Y_'s ... 
+It's only _after_ people pretended to disagree with _that_, that I started recursively digging into the details—why adult human males on hormone replacement therapy still aren't adult human females, why 'adult human male' and 'adult human female' are natural categories that it makes sense for Meghan Murphy to want short codewords to point to, how words can be used in different ways depending on context (such that we can understand Meghan Murphy's claim as it was meant, even if the words _woman_ and _man_ can also be used in other senses), what it means for something to be a natural category, what it means to say that _X_'s aren't _Y_'s ... It is more than a little insulting to be told, after all this, that the problem is that _I_ don't know how to come to a point, rather than everyone in Berkeley not knowing how to accept a point that contradicts their religion. It's such a stupidly simple stonewalling strategy: when the critic makes their point simply (men aren't women! men aren't women tho!), sneer at them for being ontologically confused, and then when they spend years of their life writing up the exhaustively detailed rigorous version, sneer at them for not being able to come to a point. @@ -2132,7 +2150,7 @@ Bostrom's apology for an old email—who is this written for?? Why get ahead, wh
+If you think it "sometimes personally prudent and not community-harmful" to go out of your way to say positive things about Republican candidates and never, ever say positive things about Democratic candidates (because you "don't see what the alternative is besides getting shot"), you can see why people might regard you as a Republican shill—even if all the things you said were true, and even if you never told any specific individual, "You should vote Republican." https://www.facebook.com/yudkowsky/posts/10154110278349228 > Just checked my filtered messages on Facebook and saw, "Your post last night was kind of the final thing I needed to realize that I'm a girl." @@ -2813,7 +2831,7 @@ I could even forgive him for subsequently taking a shit on e4 of my chessboard ( But if he's _then_ going to take a shit on c3 of my chessboard (["important things [...] would be all the things I've read [...] from human beings who are people—describing reasons someone does not like to be tossed into a Male Bucket or Female Bucket, as it would be assigned by their birth certificate", "the simplest and best protocol is, '"He" refers to the set of people who have asked us to use "he"'"](https://www.facebook.com/yudkowsky/posts/10159421750419228)), the "playing on a different chessboard, no harm intended" excuse loses its credibility. The turd on c3 is a pretty big likelihood ratio! (That is, I'm more likely to observe a turd on c3 in worlds where Yudkowsky _is_ playing my chessboard and wants me to lose, than in world where he's playing on a different chessboard and just _happened_ to take a shit there, by coincidence.) -(The authors of the [HEXACO personality model](https://en.wikipedia.org/wiki/HEXACO_model_of_personality_structure) may have gotten something importantly right in [grouping "honesty" and "humility" as a single factor](https://en.wikipedia.org/wiki/Honesty-humility_factor_of_the_HEXACO_model_of_personality).) 
+ ------ @@ -2837,6 +2855,8 @@ From my perspective, such advice would be missing the point. [I'm not trying to I don't, actually, expect people to spontaneously blurt out everything they believe to be true, that Stalin would find offensive. "No comment" would be fine. Even selective argumentation that's clearly labeled as such would be fine. (There's no shame in being an honest specialist who says, "I've mostly thought about these issues though the lens of ideology _X_, and therefore can't claim to be comprehensive; if you want other perspectives, you'll have to read other authors and think it through for yourself.") +The problem is with selective argumentation that falsely claims to be complete. + ----- diff --git a/notes/memoir_wordcounts.csv b/notes/memoir_wordcounts.csv index b7cc2b8..770d0d4 100644 --- a/notes/memoir_wordcounts.csv +++ b/notes/memoir_wordcounts.csv @@ -584,4 +584,5 @@ 11/21/2023,117366,170 11/22/2023,117366,0 11/23/2023,117338,-28 -11/24/2023,, \ No newline at end of file +11/24/2023,117732,394 +11/25/2023,, \ No newline at end of file diff --git a/notes/tweet_pad.txt b/notes/tweet_pad.txt index 5a19887..1d51833 100644 --- a/notes/tweet_pad.txt +++ b/notes/tweet_pad.txt @@ -1,8 +1,4 @@ - - -Yudkowsky was not consistently candid in his communications with his followers, hindering their ability to exercise their responsibilities - - +If you only want to read one of the 20K-word posts in this sequence, I'd skip this one and return next week for the part where I explain how Eliezer Yudkowsky has not been consistently candid in his communications with his followers, hindering their ability to exercise their responsibilities. Post later (can't afford to spend more Twitter time now)— -- 2.17.1