diff --git a/notes/sexual-dimorphism-in-the-sequences-notes.md b/notes/sexual-dimorphism-in-the-sequences-notes.md
index e54227d..bb773fe 100644
--- a/notes/sexual-dimorphism-in-the-sequences-notes.md
+++ b/notes/sexual-dimorphism-in-the-sequences-notes.md
@@ -128,7 +128,13 @@ inference by analogy—even if not all trans women are exactly like me, at least
 
 https://www.lesswrong.com/posts/XYCEB9roxEBfgjfxs/the-scales-of-justice-the-notebook-of-rationality writes down all the facts that aren't on anyone's side.
 
-"gay and trans"
+In the political world, "gay and trans"—the identity-modifiers "stack".
+
+Etiologically, people who are "gay and trans" are ... straight.
+
+
+not needing permission from another person
+
 
 ------
 
@@ -142,7 +148,7 @@ writes down all the facts that aren't on anyone's side.
 
 https://archive.is/7Wolo
 
-> the massive correlation between exposure to Yudkowsky’s writings and being a trans woman (can’t bother to do the calculations but the connection is absurdly strong)
+> the massive correlation between exposure to Yudkowsky's writings and being a trans woman (can't bother to do the calculations but the connection is absurdly strong)
 
 Namespace's point about the two EYs link back to Murray review: can't oppress people on the basis of sex if sex _doesn't exist_
 
@@ -153,7 +159,7 @@ https://www.lesswrong.com/posts/vjmw8tW6wZAtNJMKo/which-parts-are-me
 
 If we _actually had_ the magical perfect sex change technology described in "Changing Emotions"—if it cost $200,000, I would take out a bank loan and _do it_, and live happily ever after.
 
-(Though I'd call myself a transwoman—one word, for the same reason the _verthandi_ in "Failed Utopia #4-2" got their own word. I currently write "trans woman", two words, as a strategic concession to the shibboleth-detectors of my target audience:[^two-words] I don't want to _prematurely_ scare off progressive-socialized readers on account of mere orthography, when what I actually have to say is already disturbing enough.)
+
 
 people like me being incentivized to identify as part of a political pressure group that attempts to leverage claims of victimhood into claims on power
 
@@ -285,15 +291,9 @@ If we _actually had_ the magical perfect sex change technology described in "Cha
 
 I definitely don't want to call (say) my friend "Irene" a man. That would be crazy! Because **her transition _actually worked_.** Because it actually worked _on the merits_. _Not_ because I'm _redefining concepts in order to be nice to her_. When I look at her, whatever algorithm my brain _ordinarily_ uses to sort people into "woman"/"man"/"not sure" buckets, returns "woman."
 
-Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?"
-
-But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_.
-**If you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_.** Obviously, the safety of the world does not _directly_ depend on being able to think clearly about trans issues. In the same way, the safety of a coal mine for humans does not _directly_ depend on [whether it's safe for canaries](https://en.wiktionary.org/wiki/canary_in_a_coal_mine): the dead canaries are just _evidence about_ properties of the mine relevant to human health.
 
-The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2019 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialism that counts unpopularity as a cost), also can't get heliocentrism right in 1632 _for the same reason_—and I really doubt it can get AI alignment theory right in 2039.
 
-If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where "the community" _qua_ community doesn't exist at all.
 
 Perhaps so. But back in 2009, **we did not anticipate that _whether or not I should cut my dick off_ would _become_ a politicized issue.**
 
@@ -304,3 +304,33 @@ I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts
 
 [cruelty to ordinary people, optimized to confuse and intimidate people trying to use language to reason about the concept of biological sex]
 
 https://medium.com/@barrakerr/pronouns-are-rohypnol-dbcd1cb9c2d9
+
+
+
+We want words that map onto the structure of things in the world: if everyone were a hermaphrodite
+
+
+
+
+Someone asked me: "Wouldn't it be embarrassing if the community solved Friendly AI and went down in history as the people who created Utopia forever, and you had rejected it because of gender stuff?"
+
+But the _reason_ it seemed _at all_ remotely plausible that our little robot cult could be pivotal in creating Utopia forever was _not_ "[Because we're us](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/), the world-saving good guys", but rather _because_ we were going to discover and refine the methods of _systematically correct reasoning_.
+
+If you're doing systematically correct reasoning, you should be able to get the right answer even when the question _doesn't matter_. Obviously, the safety of the world does not _directly_ depend on being able to think clearly about trans issues. Similarly, the safety of a coal mine for humans does not _directly_ depend on [whether it's safe for canaries](https://en.wiktionary.org/wiki/canary_in_a_coal_mine): the dead canaries are just _evidence about_ properties of the mine relevant to human health. (The causal graph is the fork "canary-death ← mine-gas → human-danger" rather than the direct link "canary-death → human-danger".)
+
+The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2020 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialism that counts unpopularity as a cost), also can't get heliocentrism right in 1632 _for the same reason_—and I really doubt it can get AI alignment theory right in 2039.
+
+If the people _marketing themselves_ as the good guys who are going to save the world using systematically correct reasoning are _not actually interested in doing systematically correct reasoning_ (because systematically correct reasoning leads to two or three conclusions that are politically "impossible" to state clearly in public, and no one has the guts to [_not_ shut up and thereby do the politically impossible](https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible)), that's arguably _worse_ than the situation where "the community" _qua_ community doesn't exist at all.
+
+Someone asked me: "If we randomized half the people at [OpenAI](https://openai.com/) to use trans pronouns one way, and the other half to use them the other way, do you think they would end up with significantly different productivity?"
+
+[it's not about pronouns; it's about a culture where people have property rights over other people's models of them]
+
+
+
+I think the comparison to [our standard punching bag of theism](https://www.lesswrong.com/posts/dLL6yzZ3WKn8KaSC3/the-uniquely-awful-example-of-theism) is fair. Religious people aren't _stupid_.
+
+
+
+
+I don't _want_ people to have to doublethink around their perceptions of me