From 661ab7c3a6944d37284999607ae6458bfdf94dba Mon Sep 17 00:00:00 2001 From: "M. Taylor Saotome-Westlake" Date: Sat, 13 Feb 2021 21:00:21 -0800 Subject: [PATCH] Saturday drafting "Sexual Dimorphism" (session 5) --- ...ences-in-relation-to-my-gender-problems.md | 26 +++++++++---------- notes/notes.txt | 2 ++ ...exual-dimorphism-in-the-sequences-notes.md | 2 -- notes/sexual-dimorphism-marketing.md | 2 +- 4 files changed, 16 insertions(+), 16 deletions(-) diff --git a/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md b/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md index 582fe31..f7d7e1d 100644 --- a/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md +++ b/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md @@ -618,37 +618,37 @@ In 2008, this very general philosophy of language lesson was _not politically co There is a _sense in which_ one might say that you "can" define a word any way you want. That is: words don't have intrinsic ontologically-basic meanings. We can imagine an alternative world where people spoke a language that was _like_ the English of our world, except that they used the word "tree" to refer to members of the empirical entity-cluster that we call "dogs" and _vice versa_, and it's hard to think of a meaningful sense in which one convention is "right" and the other is "wrong". -But there's also an important _sense in which_ we want to say that you "can't" define a word any way you want. That is: some ways of using words work better for transmitting information from one place to another. It would be harder to explain your observations from a trip to the local park in a language that used the word "tree" to refer to members of _either_ of the empirical entity-clusters that we call "dogs" and "trees", because grouping together things that aren't relevantly similar like that makes it harder to describe differences between the wagging-animal-trees and the leafy-plant-trees. - -If you want to teach people about the philosophy of language, you want to convey _both_ of these lessons, against naïve essentialism, _and_ against naïve anti-essentialism. If the people who are widely recognized and trusted as the leaders of the systematically-correct-reasoning community _selectively_ teach _only_ the words-don't-have-intrinsic-ontologically-basic-meanings part when the topic at hand happens to be trans issues (because talking about the carve-reality-at-the-joints part would be [politically suicidal](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting)), then people who trust the leaders are likely to get the wrong idea about how the philosophy of language works—even if [the selective argumentation isn't _conscious_](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie) and [even if every individual sentence they say is true](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). - +But there's also an important _sense in which_ we want to say that you "can't" define a word any way you want. That is: some ways of using words work better for transmitting information from one place to another. 
It would be harder to explain your observations from a trip to the local park in a language that used the word "tree" to refer to members of _either_ of the empirical entity-clusters that the English of our world calls "dogs" and "trees", because grouping together things that aren't relevantly similar like that makes it harder to describe differences between the wagging-animal-trees and the leafy-plant-trees. +If you want to teach people about the philosophy of language, you should want to convey _both_ of these lessons, against naïve essentialism, _and_ against naïve anti-essentialism. If the people who are widely respected and trusted [(almost worshipped)](https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts) as the leaders of the systematically-correct-reasoning community [_selectively_](https://www.lesswrong.com/posts/AdYdLP2sRqPMoe8fb/knowing-about-biases-can-hurt-people) teach _only_ the words-don't-have-intrinsic-ontologically-basic-meanings part when the topic at hand happens to be trans issues (because talking about the carve-reality-at-the-joints part would be [politically suicidal](https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting)), then people who trust the leaders are likely to get the wrong idea about how the philosophy of language works—even if [the selective argumentation isn't _conscious_ or deliberative](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie) and [even if every individual sentence they say permits a true interpretation](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). (As it is written of the fourth virtue of evenness, ["If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider."](https://www.yudkowsky.net/rational/virtues)) +[TODO: contrast "... Not Man for the Categories" to "Against Lie Inflation"; +Scott has written exhaustively about the dangers of strategic equivocation ("Worst Argument", "Brick in the Motte"); insofar as I can get a _coherent_ position out of the conjunction of "... for the Categories" and Scott's other work, it's that he must think strategic equivocation is OK if it's for being nice to people +https://slatestarcodex.com/2019/07/16/against-lie-inflation/ +] + _Was_ it a "political" act for me to write about the cognitive function of categorization on the robot-cult blog with non-gender examples, when gender was secretly ("secretly") my _motivating_ example? In some sense, I guess? But if so, the thing you have to realize is— -_Everyone else shot first_. The timestamps back me up here: my ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (February 2018) was a _response to_ Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (November 2014). My robot-cult philosophy of language blogging (April 2019–January 2021) was a (stealthy) _response to_ Yudkowsky's November 2018 Twitter thread. When I started trying to talk about autogynephilia with all my robot cult friends in 2016, I _did not expect_ to get dragged into a multi-year philosophy-of-language crusade! 
That was just _one branch_ of the argument-tree that, once begun, I thought should be easy to _definitively settle in public_ (within our robot cult, whatever the _general_ public thinks). +_Everyone else shot first_. The timestamps back me up here: my ["... To Make Predictions"](/2018/Feb/the-categories-were-made-for-man-to-make-predictions/) (February 2018) was a _response to_ Alexander's ["... Not Man for the Categories"](https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/) (November 2014). My philosophy-of-language work on the robot-cult blog (April 2019–January 2021) was (stealthily) _in response to_ Yudkowsky's November 2018 Twitter thread. When I started trying to talk about autogynephilia with all my robot cult friends in 2016, I _did not expect_ to get dragged into a multi-year philosophy-of-language crusade! That was just _one branch_ of the argument-tree that, once begun, I thought should be easy to _definitively settle in public_ (within our robot cult, whatever the _general_ public thinks). +I guess by now the branch is as close to settled as it's going to get? Alexander ended up [adding an edit note to the end of "... Not Man for the Categories" in December 2019](TODO when I have internet privs again: archive.is direct-paragraph linky), and Yudkowsky would [clarify his position on the philosophy of language in September 2020](https://www.facebook.com/yudkowsky/posts/10158853851009228). So, that's nice, I guess. I will confess to being rather disappointed that the public argument-tree evaluation didn't get much further, much faster? The thing you have to understand about this whole debate is— +_I need the correct answer in order to decide whether or not to cut my dick off_. As I've said, I _currently_ believe that cutting my dick off would be a _bad_ idea. But that's a cost–benefit judgement call based on many _contingent, empirical_ beliefs about the world. I'm obviously in the general _reference class_ of males who are getting their dicks cut off these days, and a lot of them seem to be pretty happy about it! I would be much more likely to go through with transitioning if I believed different things about the world—if I thought my beautiful pure sacred self-identity thing were an intersex condition, if I still believed in my teenage psychological sex differences denialism (such that there would be _axiomatically_ no worries about fitting with "other" women after transitioning), if I were more optimistic about the degree to which HRT and surgeries approximate an actual sex change, _&c._ -[TODO: -I got a little bit of pushback due to the perception + +[TODO: _I need the right answer in order to decide whether or not to cut my dick off_—if I were dumb enough to believe Yudkowsky's insinuation that pronouns don't have truth conditions, I might have made a worse decision If rationality is useful for anything, it should be useful for practical life decisions like this -the hypocrisy of "Against Lie Inflation" - -(Note that Yudkowsky [would later clarify his position in September 2020](https://www.facebook.com/yudkowsky/posts/10158853851009228).) - ] - Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?" 
But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_. @@ -663,7 +663,7 @@ Someone asked me: "If we randomized half the people at [OpenAI](https://openai.c But the thing I'm objecting to is a lot more fundamental than the specific choice of pronoun convention, which obviously isn't going to be uniquely determined. Turkish doesn't have gender pronouns, and that's fine. Naval ships traditionally take feminine pronouns in English, and it doesn't confuse anyone into thinking boats have a womb. [Many other languages are much more gendered than English](https://en.wikipedia.org/wiki/Grammatical_gender#Distribution_of_gender_in_the_world's_languages) (where pretty much only third-person singular pronouns are at issue). The conventions used in one's native language probably _do_ [color one's thinking to some extent](/2020/Dec/crossing-the-line/)—but when it comes to that, I have no reason to expect the overall design of English grammar and vocabulary "got it right" where Spanish or Russian "got it wrong." -What matters isn't the specific object-level choice of pronoun or bathroom conventions; what matters is having a culture where people _viscerally care_ about minimizing the expected squared error of our probabilistic predictions, even if it hurts someone's feelings. +What matters isn't the specific object-level choice of pronoun or bathroom conventions; what matters is having a culture where people _viscerally care_ about minimizing the expected squared error of our probabilistic predictions, even at the expense of people's feelings—_especially_ at the expense of people's feelings. I think looking at [our standard punching bag of theism](https://www.lesswrong.com/posts/dLL6yzZ3WKn8KaSC3/the-uniquely-awful-example-of-theism) is a very fair comparison. Religious people aren't _stupid_. You can prove theorems about the properties of [Q-learning](https://en.wikipedia.org/wiki/Q-learning) or [Kalman filters](https://en.wikipedia.org/wiki/Kalman_filter) at a world-class level without encountering anything that forces you to question whether Jesus Christ died for our sins. But [beyond technical mastery of one's narrow specialty](https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory), there's going to be some competence threshold in ["seeing the correspondence of mathematical structures to What Happens in the Real World"](https://www.lesswrong.com/posts/sizjfDgCgAsuLJQmm/reply-to-holden-on-tool-ai) that _forces_ correct conclusions. I actually _don't_ think you can be a believing Christian and invent [the concern about consequentialists embedded in the Solomonoff prior](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/). 
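(An illustrative aside, not part of the patch itself: the "expected squared error of our probabilistic predictions" standard above is just the Brier score, and a toy calculation shows how it connects back to the dogs-and-trees example—a word that lumps together dissimilar entity-clusters supports strictly worse probabilistic predictions than words that carve at the joints. This is a minimal sketch with made-up numbers and idealized, perfectly-separated clusters; nothing below comes from the draft.)

```python
# Toy sketch: categories that lump dissimilar clusters together raise the
# expected squared error (Brier score) of the best available prediction.
# All data here are invented for illustration.

def brier(p, outcomes):
    """Mean squared error of predicting probability p for binary outcomes."""
    return sum((p - o) ** 2 for o in outcomes) / len(outcomes)

# Observations of the feature "wags its tail?" for things seen at the park:
dogs = [1, 1, 1, 1]   # dogs wag
trees = [0, 0, 0, 0]  # trees don't

# Language A (separate words): the best prediction given a label is that
# category's own frequency, so the error is zero for these ideal clusters.
err_separate = (brier(1.0, dogs) + brier(0.0, trees)) / 2

# Language B (one word "tree" for both clusters): the best single
# prediction given the label is the pooled frequency, 0.5.
pooled = dogs + trees
p_pooled = sum(pooled) / len(pooled)
err_lumped = brier(p_pooled, pooled)

print(err_separate)  # 0.0
print(err_lumped)    # 0.25
```

The speaker of the lumped language can't do better than the pooled base rate of 0.5, which carries no information about tail-wagging; separate words let the listener's prediction error drop to zero. That's the sense in which some category boundaries "work better for transmitting information" than others.
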
diff --git a/notes/notes.txt b/notes/notes.txt index 40c26ca..45f54a4 100644 --- a/notes/notes.txt +++ b/notes/notes.txt @@ -2396,3 +2396,5 @@ https://archive.li/pmuhQ https://gcacademianetwork.weebly.com/ https://patch.com/illinois/evanston/northwestern-neuroscientist-found-dead-after-racism-allegations + +https://www.scientificamerican.com/article/its-time-to-take-the-penis-off-its-pedestal/ diff --git a/notes/sexual-dimorphism-in-the-sequences-notes.md b/notes/sexual-dimorphism-in-the-sequences-notes.md index 87441ca..a4c8008 100644 --- a/notes/sexual-dimorphism-in-the-sequences-notes.md +++ b/notes/sexual-dimorphism-in-the-sequences-notes.md @@ -67,8 +67,6 @@ NYT hit piece https://archive.is/0Ghdl https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie -https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts - ------ diff --git a/notes/sexual-dimorphism-marketing.md b/notes/sexual-dimorphism-marketing.md index a9b79f6..8893fd0 100644 --- a/notes/sexual-dimorphism-marketing.md +++ b/notes/sexual-dimorphism-marketing.md @@ -21,7 +21,7 @@ https://www.facebook.com/zmdavis/posts/10154812970895199?comment_id=101548138260 "I'm choosing to have a public Facebook meltdown now, and in two or three years I'll have the full version on my blog" -It actually took four years. Sorry. +But it actually took four years. Sorry. ----- -- 2.17.1