From 999a53e5494a23a3ad349c8b611ec0b5df226d58 Mon Sep 17 00:00:00 2001 From: "M. Taylor Saotome-Westlake" Date: Sun, 28 Mar 2021 09:34:07 -0700 Subject: [PATCH] "Sexual Dimorphism" drafting and shoveling MIME-Version: 1.0 Content-Type: text/plain; charset=utf8 Content-Transfer-Encoding: 8bit Momentum is the key! After showering, I made sure to do a block of Mnemosyne reviews (I'm so far behind!) and then two writing blocks—and even if it wasn't very much net text, I think it's set the psychological context for the day, such that I'm allowed to reward myself with Starbucks now (rather than foolishly putting that first). --- ...-hill-of-validity-in-defense-of-meaning.md | 9 ++++- content/drafts/a-river-in-egypt.md | 3 -- ...ences-in-relation-to-my-gender-problems.md | 6 ++-- ...exual-dimorphism-in-the-sequences-notes.md | 34 ++++++++++++------- notes/sexual-dimorphism-marketing.md | 4 +-- 5 files changed, 35 insertions(+), 21 deletions(-) diff --git a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md index 43a854f..f9dc42f 100644 --- a/content/drafts/a-hill-of-validity-in-defense-of-meaning.md +++ b/content/drafts/a-hill-of-validity-in-defense-of-meaning.md @@ -183,11 +183,13 @@ It worked once, right? (Picture me playing Hermione Granger in a post-Singularity [holonovel](https://memory-alpha.fandom.com/wiki/Holo-novel_program) adaptation of _Harry Potter and the Methods of Rationality_ (Emma Watson having charged me [the standard licensing fee](/2019/Dec/comp/) to use a copy of her body for the occasion): "[We can do anything if we](https://www.hpmor.com/chapter/30) exert arbitrarily large amounts of [interpretive labor](https://acesounderglass.com/2015/06/09/interpretive-labor/)!") - > An extreme case in point of "handwringing about the Overton Window in fact constituted the Overton Window's implementation" OK, now apply that to your Kolmogorov cowardice https://twitter.com/ESYudkowsky/status/1373004525481598978 +The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2021 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialism that counts unpopularity or hurt feelings as costs), also [can't get heliocentrism right in 1633](https://en.wikipedia.org/wiki/Galileo_affair) [_for the same reason_](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to)—and I really doubt it can get AI alignment theory right in 2041. + +Or at least—even if there are things we can't talk about in public for consequentialist reasons and there's nothing to be done about it, you would hope that the censorship wouldn't distort our beliefs about the things we _can_ talk about—like, say, the role of Bayesian reasoning in the philosophy of language. Yudkowsky had written about the [dark side epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) of [contagious lies](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies): trying to protect a false belief doesn't just mean being wrong about that one thing, it also gives you, on the object level, an incentive to be wrong about anything that would _imply_ the falsity of the protected belief—and, on the meta level, an incentive to be wrong _about epistemology itself_, about how "implying" and "falsity" work.
https://www.lesswrong.com/posts/ASpGaS3HGEQCbJbjS/eliezer-s-sequences-and-mainstream-academia?commentId=6GD86zE5ucqigErXX @@ -197,3 +199,8 @@ https://www.lesswrong.com/posts/ASpGaS3HGEQCbJbjS/eliezer-s-sequences-and-mainst http://www.hpmor.com/chapter/47 https://www.hpmor.com/chapter/97 > one technique was to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited. + + +> At least, I have a MASSIVE home territory advantage because I can appeal to Eliezer's writings from 10 years ago, and ppl can't say "Eliezer who? He's probably a bad man" + +> Makes sense... just don't be shocked if the next frontier is grudging concessions that get compartmentalized diff --git a/content/drafts/a-river-in-egypt.md b/content/drafts/a-river-in-egypt.md index 32dadfb..f7f94b5 100644 --- a/content/drafts/a-river-in-egypt.md +++ b/content/drafts/a-river-in-egypt.md @@ -150,9 +150,6 @@ Of course, this is kind of a niche topic—if you're not a male with this psycho Men who fantasize about being women do not particularly resemble actual women! We just—don't? This seems kind of obvious, really? _Telling the difference between fantasy and reality_ is kind of an important life skill?! Notwithstanding that some males might want to make use of medical interventions like surgery and hormone replacement therapy to become facsimiles of women as far as our existing technology can manage, and that a free and enlightened transhumanist Society should support that as an option—and notwithstanding that _she_ is obviously the correct pronoun for people who _look_ like women—it's probably going to be harder for people to figure out what the optimal decisions are if no one is allowed to use language like "actual women" that clearly distinguishes the original thing from imperfect facsimiles?! -The "discourse algorithm" (the collective generalization of "cognitive algorithm") that can't just _get this shit right_ in 2021 (because being out of step with the reigning Bay Area ideological fashion is deemed too expensive by a consequentialism that counts unpopularity or hurt feelings as costs), also [can't get heliocentrism right in 1633](https://en.wikipedia.org/wiki/Galileo_affair) [_for the same reason_](https://www.lesswrong.com/posts/yaCwW8nPQeJknbCgf/free-speech-and-triskaidekaphobic-calculators-a-reply-to)—and I really doubt it can get AI alignment theory right in 2041. - -Or at least—even if there are things we can't talk about in public for consequentialist reasons and there's nothing to be done about it, you would hope that the censorship wouldn't distort our beliefs about the things we _can_ talk about—like, say, the role of Bayesian reasoning in the philosophy of language. Yudkowsky had written about the [dark side epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) of [contagious lies](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies): trying to protect a false belief doesn't just mean being wrong about that one thing, it also gives you, on the object level, an incentive to be wrong about anything that would _imply_ the falsity of the protected belief—and, on the meta level, an incentive to be wrong _about epistemology itself_, about how "implying" and "falsity" work. [...] 
diff --git a/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md b/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md index afc5440..5832bd7 100644 --- a/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md +++ b/content/drafts/sexual-dimorphism-in-the-sequences-in-relation-to-my-gender-problems.md @@ -128,7 +128,7 @@ If you're willing to admit to the possibility of psychological sex differences _ I guess if you _didn't_ grow up with a quasi-religious fervor for psychological sex differences denialism, this whole theoretical line of argument about evolutionary psychology doesn't seem world-shatteringly impactful?—maybe it just looks like supplementary Science Details brushed over some basic facts of human existence that everyone knows. But if you _have_ built your identity around [quasi-religious _denial_](/2020/Apr/peering-through-reverent-fingers/) of certain basic facts of human existence that everyone knows (if not everyone [knows that they know](https://www.lesswrong.com/posts/CqyJzDZWvGhhFJ7dY/belief-in-belief)), getting forced out of it by sufficient weight of Science Details [can be a pretty rough experience](https://www.greaterwrong.com/posts/XM9SwdBGn8ATf8kq3/c/comment/Zv5mrMThBkkjDAqv9). -My hair-trigger antisexism was sort of lurking in the background of some of my comments while the Sequences were being published (though, again, it wasn't relevant to _most_ posts, which were just about cool math and science stuff that had no avenue whatsoever for being corrupted by gender politics). The term "social justice warrior" wasn't yet popular, but I definitely had a SJW-alike mindset (nurtured from my time lurking the feminist blogosphere) of being preoccupied with the badness and wrongness of people who are wrong and bad (_i.e._, sexist), rather than trying to [minimize the expected squared error of my probabilistic predictions](https://www.lesswrong.com/posts/onwgTH6n8wxRSo2BJ/unnatural-categories-are-optimized-for-deception). +My hair-trigger antisexism was sort of lurking in the background of some of my comments while the Sequences were being published (though, again, it wasn't relevant to _most_ posts, which were just about cool math and science stuff that had no avenue whatsoever for being corrupted by gender politics). The term "social justice warrior" wasn't yet popular, but I definitely had an SJW-alike mindset (nurtured from my time lurking the feminist blogosphere) of being preoccupied with the badness and wrongness of people who are wrong and bad (_i.e._, sexist), rather than trying to maximize the accuracy of my probabilistic predictions. Another one of the little song-fragments I wrote in my head a few years earlier (which I mention for its being representative of my attitude at the time, rather than it being notable in itself), concerned an advice columnist, [Amy Alkon](http://www.advicegoddess.com/), syndicated in the _Contra Costa Times_ of my youth, who would sometimes give dating advice based on a pop-evopsych account of psychological sex differences—the usual fare about women seeking commitment and men seeking youth and beauty. My song went— @@ -140,7 +140,7 @@ Another one of the little song-fragments I wrote in my head a few years earlier > Because the world's not girls and guys > Cave men and women fucking 'round the fire in the night_ -Looking back with the outlook later acquired from my robot cult, this is abhorrent.
You don't _casually wish death_ on someone just because you disagree with their views on psychology! Even if it wasn't in a spirit of personal malice (this was a song I sung to myself, not an actual threat directed to Amy Alkon's inbox), the sentiment just _isn't done_. But at the time, I _didn't notice there was anything wrong with my song_. I hadn't yet been socialized into the refined ethos of "False ideas should be argued with, but heed that we too may have ideas that are false". +Looking back with the outlook later acquired from my robot cult, this is _abhorrent_. You don't _casually wish death_ on someone just because you disagree with their views on psychology! (Also, casually wishing death on a woman does not seem particularly pro-feminist?!) Even if it wasn't in a spirit of personal malice (this was a song I sang to myself, not an actual threat directed to Amy Alkon's inbox), the sentiment just _isn't done_. But at the time, I _didn't notice there was anything wrong with my song_. I hadn't yet been socialized into the refined ethos of "False ideas should be argued with, but heed that we too may have ideas that are false". [TODO: Me pretending to be dumb about someone not pretending to be dumb about my initials https://www.overcomingbias.com/2008/04/inhuman-rationa.html ; contrast that incident (it's not an accident that he guessed right) to Yudkowsky: "I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion where they are wrong." (https://www.lesswrong.com/posts/MwQRucYo6BZZwjKE7/einstein-s-arrogance)] @@ -334,7 +334,7 @@ If you don't have the conceptual vocabulary to say, "I have a lot of these beaut (As Yudkowsky [occasionally](https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences) [remarks](https://www.lesswrong.com/posts/f4RJtHBPvDRJcCTva/when-anthropomorphism-became-stupid), our _beliefs about_ how our minds work have very little impact on how they actually work. Aristotle thought the brain was an organ for cooling the blood, but he was just wrong; the theory did not _become true of him_ because he believed it.) -What theory I end up believing about myself _matters_, because different theories that purport to explain the same facts can make very different predictions about facts not yet observed, or about the effects of interventions. +What theory I end up believing about myself _matters_, because [different theories that purport to explain the same facts](/2021/Feb/you-are-right-and-i-was-wrong-reply-to-tailcalled-on-causality/) can make very different predictions about facts not yet observed, or about the effects of interventions. If I have some objective inner female gender as the result of a brain-intersex condition, then getting on, and _staying_ on, feminizing hormone replacement therapy (HRT) would presumably be a good idea specifically because my brain is designed to "run on" estrogen. But if my beautiful pure sacred self-identity feelings are fundamentally a misinterpretation of misdirected _male_ sexuality, then it's not clear that I _want_ the psychological effects of HRT: if there were some unnatural way to give me a female body (or just more female-_like_) _without_ messing with my internal neurochemistry, that would actually be _desirable_.
diff --git a/notes/sexual-dimorphism-in-the-sequences-notes.md b/notes/sexual-dimorphism-in-the-sequences-notes.md index 60987c7..b2dd90a 100644 --- a/notes/sexual-dimorphism-in-the-sequences-notes.md +++ b/notes/sexual-dimorphism-in-the-sequences-notes.md @@ -2,29 +2,40 @@ Resolved: publish "Sexual Dimorphism" soon as just the first part, the political TODO for "Sexual Dimorphism"— -And because the brain and body are an integrated system, people's intuitive sense of [which parts are "me"](https://www.lesswrong.com/posts/vjmw8tW6wZAtNJMKo/which-parts-are-me) and which parts are "just" "my body" (which can be swapped out without changing who "I" am), may be _much less in touch with reality_ than they'd like to think. -A common theme in female transformation erotica is that sexuality "goes with the body": in these stories, men who have been magically transformed into women, often express positive or negative emotions (depending on the story and the author) about discovering that they're attracted to guys now. +https://www.lesswrong.com/posts/vjmw8tW6wZAtNJMKo/which-parts-are-me + + +A common theme in female transformation erotica (search for _tg caption blog_ if you want examples) is that sexuality "goes with the body": in these stories, men who have magically swapped bodies with women often express excitement or horror (depending on the story and the author) about the discovery that they're attracted to guys now. But how would that work? That experience would be something you'd predict if sexuality was implemented in a separate brain module that could stay with the rest of the body even while the "soul" (the implementation of someone's personality, memory, _&c._) gets swapped out. + +But if the brain isn't actually modularized in that particular way, the magical transformation would have to do a lot more engineering work. _ transformation details matter: accents and homonyms, sexual orientation changing emotions/accent fantasies: https://www.greaterwrong.com/posts/wAW4ENCSEHwYbrwtn/other-people-s-procedural-knowledge-gaps/comment/pheakgvLbFndXccXC +-------- + _ morality and culturally-defined values +-------- + _ Vassar clapback anecdote +-------- + _ playing dumb initials anecdote ------ +-------- -_ EY was right about "men need to think about themselves _as men_" -_ AGPs dating each other is the analogue of "Failed Utopia 4-2" (but phrased in a way that's agnostic about -_ more empathic inference: https://www.lesswrong.com/posts/qCsxiojX7BSLuuBgQ/the-super-happy-people-3-8 -_ If I want to stay aligned with women, then figuring out how to do that depends on the facts about actual sex differences; if I want to do the value-exchange suggested in +"The Opposite Sex" (https://web.archive.org/web/20130216025508/http://lesswrong.com/lw/rp/the_opposite_sex/), Yud on "men should think of themselves as men" / "I often wish some men/women would appreciate"] +------------ +_ AGPs dating each other is the analogue of "Failed Utopia 4-2" (but phrased in a way that's agnostic about +_ more empathic inference: https://www.lesswrong.com/posts/qCsxiojX7BSLuuBgQ/the-super-happy-people-3-8 +_ If I want to stay aligned with women, then figuring out how to do that depends on the facts about actual sex differences; if I want to do the value-exchange suggested in * Moral Error and Moral Disagreement @@ -68,7 +79,7 @@ Terminology/vocab to explain before use— * [the autogynephilic analogue of romantic love](/papers/lawrence-becoming_what_we_love.pdf) -[TODO: this denial was in the background in "The Opposite
Sex" (https://web.archive.org/web/20130216025508/http://lesswrong.com/lw/rp/the_opposite_sex/), Yud on "men should think of themselves as men" / "I often wish some men/women would appreciate"] + https://www.greaterwrong.com/posts/FBgozHEv7J72NCEPB/my-way/comment/AEZaakdcqySmKMJYj @@ -520,9 +531,8 @@ You might think, This comment in particular is really something— https://www.reddit.com/r/DrWillPowers/comments/jxh3mz/in_this_thread_help_me_and_this_community_come_up/gd0mytr/ +https://www.lesswrong.com/posts/vjmw8tW6wZAtNJMKo/which-parts-are-me -> At least, I have a MASSIVE home territory advantage because I can appeal to Eliezer's writings from 10 years ago, and ppl can't say "Eliezer who? He's probably a bad man" - -> Makes sense... just don't be shocked if the next frontier is grudging concessions that get compartmentalized +https://www.overcomingbias.com/2021/03/our-default-info-system-status-and-gossip.html -https://www.overcomingbias.com/2021/03/our-default-info-system-status-and-gossip.html \ No newline at end of file +And because the brain and body are an integrated system, people's intuitive sense of [which parts are "me"]() and which parts are "just" "my body" (which can be swapped out without changing who "I" am), may be much less straightforwardly connected with reality than they'd like to think. diff --git a/notes/sexual-dimorphism-marketing.md b/notes/sexual-dimorphism-marketing.md index cf25762..7488343 100644 --- a/notes/sexual-dimorphism-marketing.md +++ b/notes/sexual-dimorphism-marketing.md @@ -25,9 +25,9 @@ But, it's actually looking like it's going to take closer to five years? I've do Here's Part 1 ([TODO] words), about my experiences (as seen through the interpretive lens of Yudkowskian philosophy): [TODO linky] -Then Part 2 can cover the evidence and theory for why I think I'm justified in believing that a substantial majority of trans women are relevantly "like me", even though I don't have introspective access to other people's experiences. +Then Part 2 can cover the evidence and theory for why I think I'm justified in believing that a substantial majority of trans women in Western countries are relevantly "like me", even though I don't have introspective access to other people's experiences. (Including going into the hidden Bayesan structure of what "relevantly 'like me'" is even supposed to mean—the sense in which a psychological theory can be "true".) -Then Part 3 can cover my theory and retrospective on the political incentives driving the ASTONISHING cowardice, mendacity, and doublethink of everyone of significance in the so-called "rationalist" community on this issue—not just the empirical psychology question (which is perhaps forgiveable—psychology is hard!), but also the drop-dead basics of our OWN philosophy of language, which dispute was SO absurd and took SO much time and effort to correct even within the "community" that it would be COMICAL if it weren't for the whole "the entire future history of the universe depends on these motherfuckers thinking clearly" thing. 
+Then Part 3 can cover my theory and retrospective on the political incentives driving the ASTONISHING cowardice, mendacity, and doublethink of everyone of significance in the so-called "rationalist" "community" on this issue—not just the empirical psychology question (which would be forgivable—psychology is hard!), but also the drop-dead basics of their OWN philosophy of language, which dispute was SO absurd and took SO much time and effort to correct even within the "community" that it would be COMICAL if it weren't for the whole "the entire future history of the universe depends on these motherfuckers thinking clearly" thing. And then I'll be done and can move on with my life!! -- 2.17.1