From 86800a486ee8191d36a13003535ccd2943983d8f Mon Sep 17 00:00:00 2001
From: "M. Taylor Saotome-Westlake"
Date: Mon, 15 Feb 2021 01:10:43 -0800
Subject: [PATCH] caliph

---
 ...quences-in-relation-to-my-gender-problems.md | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md b/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md
index 61713e7..86197d9 100644
--- a/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md
+++ b/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md
@@ -622,7 +622,7 @@ If you want to teach people about the philosophy of language, you should want to
 _Was_ it a "political" act for me to write about the cognitive function of categorization on the robot-cult blog with non-gender examples, when gender was secretly ("secretly") my _motivating_ example? In some sense, I guess? But if so, the thing you have to realize is—
 
-_Everyone else shot first_. The timestamps back me up here: my ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (February 2018) was a _response to_ Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (November 2014). My philosophy-of-language work on the robot-cult blog (April 2019–January 2021) was (stealthily) _in response to_ Yudkowsky's November 2018 Twitter thread. When I started trying to talk about autogynephilia with all my robot cult friends in 2016, I _did not expect_ to get dragged into a multi-year philosophy-of-language crusade! That was just _one branch_ of the argument-tree that, once begun, I thought should be easy to _definitively settle in public_ (within our robot cult, whatever the _general_ public thinks).
+_Everyone else shot first_. That's not just my subjective perspective; the timestamps back me up here: my ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (February 2018) was a _response to_ Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (November 2014). My philosophy-of-language work on the robot-cult blog (April 2019–January 2021) was (stealthily) _in response to_ Yudkowsky's November 2018 Twitter thread. When I started trying to talk about autogynephilia with all my robot cult friends in 2016, I _did not expect_ to get dragged into a multi-year philosophy-of-language crusade! That was just _one branch_ of the argument-tree that, once begun, I thought should be easy to _definitively settle in public_ (within our robot cult, whatever the _general_ public thinks).
 
 I guess by now the branch is as close to settled as it's going to get? Alexander ended up [adding an edit note to the end of "... Not Man for the Categories" in December 2019](https://archive.is/1a4zV#selection-805.0-817.1), and Yudkowsky would [go on to clarify his position on the philosophy of language in September 2020](https://www.facebook.com/yudkowsky/posts/10158853851009228). So, that's nice. But I will confess to being quite disappointed that the public argument-tree evaluation didn't get much further, much faster?
 
 The thing you have to understand about this whole debate is—
@@ -630,7 +630,7 @@ _I need the correct answer in order to decide whether or not to cut my dick off
 In that November 2018 Twitter thread, [Yudkowsky wrote](https://archive.is/y5V9i):
 
-> _Even if_ somebody went around saying, "I demand you call me 'she' and furthermore I claim to have two X chromosomes!", which none of my trans colleagues have ever said to me by the way, it still isn't a question-of-empirical-fact whether she should be called "she". It's an act."
+> _Even if_ somebody went around saying, "I demand you call me 'she' and furthermore I claim to have two X chromosomes!", which none of my trans colleagues have ever said to me by the way, it still isn't a question-of-empirical-fact whether she should be called "she". It's an act.
 
 This seems to suggest that gender pronouns in the English language as currently spoken don't have effective truth conditions. I think this is false _as a matter of cognitive science_. If someone told you, "Hey, you should come meet my friend at the mall, she is really cool and I think you'll like her," and then the friend turned out to look like me (as I am now), _you would be surprised_. (Even if people in Berkeley would socially punish you for _admitting_ that you were surprised.) The "she ... her" pronouns would prompt your brain to _predict_ that the friend would appear to be female, and that prediction would be _falsified_ by someone who looked like me (as I am now).
 
 Pretending that the social-norms dispute is about chromosomes was a _bullshit_ [weakmanning](https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/) move on the part of Yudkowsky, [who had once written that](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence) "[t]o argue against an idea honestly, you should argue against the best arguments of the strongest advocates[;] [a]rguing against weaker advocates proves _nothing_, because even the strongest idea will attract weak advocates." Thanks to the skills I learned from Yudkowsky's _earlier_ writing, I wasn't dumb enough to fall for it, but we can imagine someone otherwise similar to me who was, who might have thereby been misled into making worse life decisions.
@@ -638,7 +638,7 @@ If this "rationality" stuff is useful for _anything at all_, you would _expect_
 In order to get the _right answer_ to that policy question (whatever the right answer turns out to be), you need to _at minimum_ be able to get the _right answer_ on related fact-questions like "Is late-onset gender dysphoria in males an intersex condition?" (answer: no) and related philosophy-questions like "Can we arbitrarily redefine words such as 'woman' without adverse effects on our cognition?" (answer: no).
 
-At the cost of _wasting three years of my life_, we _did_ manage to get the philosophy question right! Again, that's nice. But compared to the Sequences-era dreams of changing the world with a Second Scientific Revolution, it's too little, too slow, too late. If our public discourse is going to be this aggressively optimized for _tricking me into cutting my dick off_ (independently of the empirical cost–benefit trade-off determining whether or not I should cut my dick off), that kills the whole project for me. I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?
+At the cost of _wasting three years of my life_, we _did_ manage to get the philosophy question right! Again, that's nice. But compared to the [Sequences-era dreams of changing the world](https://www.lesswrong.com/posts/YdcF6WbBmJhaaDqoD/the-craft-and-the-community), it's too little, too slow, too late. If our public discourse is going to be this aggressively optimized for _tricking me into cutting my dick off_ (independently of the empirical cost–benefit trade-off determining whether or not I should cut my dick off), that kills the whole project for me. I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?
 
 Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?"
 
 If you're doing systematically correct reasoning, you should be able to get the
@@ -648,12 +648,21 @@
 If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where "the community" _qua_ community doesn't exist at all.
 
-In ["The Ideology Is Not the Movement"](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/) (April 2016), Alexander describes how subcultures typically diverge from the ideological "rallying flag" that they formed around. [Sunni and Shia Islam](https://en.wikipedia.org/wiki/Shia%E2%80%93Sunni_relations) originally, ostensibly diverged on the question of who should succeed Muhammad as caliph, but modern-day Sunni and Shia who hate each other's guts aren't actually re-litigating a succession dispute from the 7th century C.E.; rather, pre-existing divergent social-group tendencies crystalized into distinct tribes by latching on to the succession dispute as a [simple membership test](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests).
+In ["The Ideology Is Not the Movement"](https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/) (April 2016), Alexander describes how the content of subcultures typically departs from the ideological "rallying flag" that they formed around. [Sunni and Shia Islam](https://en.wikipedia.org/wiki/Shia%E2%80%93Sunni_relations) originally, ostensibly diverged on the question of who should rightfully succeed Muhammad as caliph, but modern-day Sunni and Shia who hate each other's guts aren't actually re-litigating a succession dispute from the 7th century C.E. Rather, pre-existing divergent social-group tendencies crystallized into distinct tribes by latching on to the succession dispute as a [simple membership test](https://www.lesswrong.com/posts/edEXi4SpkXfvaX42j/schelling-categories-and-simple-membership-tests).
+Alexander jokingly identifies the identifying feature of our robot cult as the belief that "Eliezer Yudkowsky is the rightful caliph": the Sequences were a rallying flag that brought together a lot of like-minded people to form a subculture with its own ethos and norms—among which Alexander includes "don't misgender trans people"—but the subculture emerged as its own entity that isn't necessarily _about_ anything outside itself.
+
+No one seemed to notice at the time, but this characterization of our movement is actually a _declaration of failure_.
+
+
+Hence, "robot cult." [TODO: risk factor of people getting drawn into a subculture that claims to be about reasoning, but is actually very heavily optimized for cutting boys' dicks off. "The Ideology Is Not the Movement" is very explicit about this!! People use trans as political cover; no one seemed to notice that "The Ideology Is Not the Movement" is a declaration of _failure_ http://benjaminrosshoffman.com/construction-beacons/
+https://srconstantin.github.io/2017/08/08/the-craft-is-not-the-community.html I'm worried about the failure mode where the awesomeness of the Sequences
+caliphate
+> Rather than good arguments propagating through the population of so-called "rationalists" no matter where they arise, what actually happens is that people like Eliezer and you rise to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else sort-of absorbs most of their worldview (plus noise and conformity with the local environment). So for people who didn't [win the talent lottery](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) but think they see a flaw in the Zeitgeist, the winning move is "persuade Scott Alexander". ]
-- 
2.17.1