X-Git-Url: http://unremediatedgender.space/source?p=Ultimately_Untrue_Thought.git;a=blobdiff_plain;f=content%2Fdrafts%2Fi-tell-myself-to-let-the-story-end.md;h=2bb2744c3b1e85ebeca5e5bf988e5745e1e5c555;hp=2f16580b1bb8795e87cd89387313b9f5d43e42b8;hb=2fc071e3d84c1ead3fcc932cba95bae1b28a18a7;hpb=98b0918d154f0a97edbaf040a1aadf216d165124

diff --git a/content/drafts/i-tell-myself-to-let-the-story-end.md b/content/drafts/i-tell-myself-to-let-the-story-end.md
index 2f16580..2bb2744 100644
--- a/content/drafts/i-tell-myself-to-let-the-story-end.md
+++ b/content/drafts/i-tell-myself-to-let-the-story-end.md
@@ -1,7 +1,7 @@
Title: "I Tell Myself to Let the Story End"; Or, A Hill of Validity in Defense of Meaning
Date: 2020-01-01
-Category: commentary
-Tags: personal
+Category: other
+Tags: personal, my robot cult
Status: draft

> _And I tell myself to let the story end
@@ -14,107 +14,27 @@ Status: draft
>
> —Sara Bareilles, ["Gonna Get Over You"](https://genius.com/Sara-bareilles-gonna-get-over-you-lyrics)

-I mostly haven't been doing so well for the past eight months or so. I've been reluctant to write about it in too much detail for poorly-understood psychological reasons. Maybe it feels too much like attacking my friends? Maybe I'm not sure how much I can say without leaking too much information from private conversations? But I need to write _something_—not to attack anyone or spill anyone's secrets, but just to _tell the truth_ about why I've been wasting stretches of days in _constant emotional pain_ all year. For my own healing, for my own sanity.
+I mostly haven't been doing so well for the past nine months or so. I mean, I've always been a high-neuroticism person, but this has been a below-average year even by my standards. I've been reluctant to write about it in too much detail for poorly-understood psychological reasons. Maybe it would feel too much like attacking my friends?
-So, I've spent basically my entire adult life in this insular little intellectual subculture that was founded in the late 'aughts on an ideal of _absolute truthseeking_. Sure, anyone will _say_ that their beliefs are true, but you can tell most people aren't being very serious about it. _We_ were going to be serious: starting with the shared canon of knowledge of cognitive biases, reflectivity, and Bayesian probability theory bequeathed to us by our founder, _we_ were going to make serious [collective](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) [intellectual progress](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible) in a way that had [never been done before](https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/), to forge and refine a new mental martial art of _systematically correct reasoning_ that we were going to use to optimize ourselves and the world.
+But this blog is not about _not_ attacking my friends. This blog is about the truth. For my own sanity, for my own emotional closure, I need to tell the story as best I can. If it's an incredibly boring and petty story about people getting _unreasonably angry_ about philosophy-of-language minutiæ, well, you've been warned. If the story makes me look bad in the reader's eyes (because you think I'm crazy for getting so unreasonably angry about philosophy-of-language minutiæ), then I shall be happy to look bad for an _accurate_ account of _what I actually am_—I should expect nothing less.

-(Oh, and there was also this part about how the uniquely best thing for non-math-geniuses to do with their lives was to earn lots of money and donate it to our founder's nonprofit dedicated to building a recursively self-improving artificial superintelligence to take over the world in order to save our entire future light cone from the coming robot apocalypse. That part's complicated.)
-I guess I feel pretty naïve now, but—I _actually believed our own propaganda_. I _actually thought_ we were doing something new and special of historical and possibly even _cosmological_ significance.

-And so when I moved to "Portland" (which is actually Berkeley) in 2016, met a lot of trans women in real life for the first time, and did some more reading that convinced me of the at-least-approximate-correctness of the homosexual/autogynephilic two-types theory of MtF transgenderism that I had previously assumed was false (while being privately grateful that [there was a _word_ for my thing](/2017/Feb/a-beacon-through-the-darkness-or-getting-it-right-the-first-time/)) because everyone _said_ it was false

-We're all about, like, science and rationality and stuff, right? And so if there's a theory that's been sitting in the psychology literature for twenty years, that looks _correct_ (or at least, ah, [less wrong](https://tvtropes.org/pmwiki/pmwiki.php/Main/TitleDrop) than the mainstream view), that's _directly_ relevant to a _lot_ of people around here, that seems like the sort of thing

-https://www.lesswrong.com/posts/9KvefburLia7ptEE3/the-correct-contrarian-cluster

-I confess that I _may have [overreacted](/2017/Mar/fresh-princess/) [somewhat](/2017/Jun/memoirs-of-my-recent-madness-part-i-the-unanswerable-words/)_ when people weren't converging (or [even engaging](/2017/Jan/im-sick-of-being-lied-to/)) with me on the two-types/autogynephilia thing. Psychology is a genuinely difficult empirical science

+This is _basic shit_. As we say locally, this is _basic Sequences shit_.

-I would _never_ write someone off for disagreeing with me about a complicated empirical thing, because complicated empirical things are complicated enough that I have to [take the Outside View seriously](https://www.overcomingbias.com/2007/07/beware-the-insi.html): no matter how "obvious" I think my view is, I might still be wrong for real in real life.
So, while I was pretty upset for my own idiosyncratic personal reasons, it wasn't cause to _give up entirely on the dream of a rationality community_.

-A.T. and R.B.'s Facebook comments

-emphasize that the categories-are-relative thing is an important grain of truth, but that I expect _us_ to see deeper into the Bayes-structure

+Now, it's not obvious that I _shouldn't_ cut my dick off! A lot of people seem to be doing it nowadays, and a lot of them seem to be pretty happy with their decision! But in order to _decide_ whether it's a good idea, I need _accurate information_

-this is _really basic shit_

+, so that I can cut my dick off in the possible worlds where that's a good idea, and not cut my dick off in the possible worlds where that's not a good idea.

-The way this is supposed to work is that you just make your arguments and trust that good arguments will outcompete bad ones; emailing people begging for a clarification is kind of rude, and I want to acknowledge the frame in which I'm the bad guy (or pitiably mentally ill)—but I was taught that arguing with people when they're doing something wrong is actually doing them a _favor_—I was taught that it's virtuous to make an extraordinary effort

+actively manufacture _fake rationality lessons_ that have been optimized to _confuse me into cutting my dick off_ independently of whether or not we live in a world

-bad-faith nitpicker—I would be annoyed if someone repeatedly begged me to correct a mistake I made in a blog post from five years ago or a Tweet from November

-Losing faith in guided-by-the-beauty-of-our-weapons

-

-https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science
-http://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/

-"I ought to accept ... within the conceptual boundaries" is a betrayal of everything we stand for

(I don't consider "if it'll save someone's life" to be a compelling consideration here, for the same reason that "What if Omega punishes all agents who don't choose the alphabetically first option?" doesn't seem like a compelling argument against timeless decision theory. Specifying punishments for agents that don't follow a particular ritual of cognition doesn't help us understand the laws that make intelligence work.)

when _yet another_ person (higher-profile, but more careful, this time only committing the error by Gricean implicature rather than overtly—if you're being careful to disclaim most obvious misinterpretations) committed the fallacy, I _flipped out_

The sheer number of hours we invested in this operation is _nuts_: desperate all-out effort, arguing over email with two people who were higher-status than me and fighting an entire Discord server three times, $1000, three postcards

what about _my_ mental health?

Men who wish they were women do not particularly resemble actual women? We just—don't? This seems kind of obvious, really?

my friend thinks I'm naive to have expected such a community—she was recommending "What You Can't Say" in 2009—but in 2009, we did not expect that _whether or not I should cut my dick off_ would _become_ a politicized issue, which is new evidence about the wisdom of the original vision

but if my expectations (about the community) were wrong, that's a problem with my model; reality doesn't have to care

it's naive to think you can win against an egregore 1000 times bigger than you

MASSIVE cognitive dissonance, "What? What???"

the Church

won't you be embarrassed to leave if we create utopia

invent a fake epistemology lesson

we live in a world where reason doesn't work

_not_ gaslight me about the most important thing in my life?

I don't think I'm setting my price for joining particularly high here?

if you're doing systematically correct reasoning, you should be able to get the right answer even on things that don't matter

There could be similarly egregious errors that I'm not as sensitive to

I don't think you can build an aligned superintelligence from a culture this crazy

it's not fair to expect ordinary people to understand the univariate fallacy before they're allowed to say "men aren't women"

maybe S. was trying to talk about "legal fiction" categories, but I'm trying to talk about epistemology, and that's a probable reading when you say "categories"

Hansonian mental illness that people should be less accommodating of

there has to be a way to apply a finite amount of effort to _correct_ errors, and possibly even profit from correcting errors

(I'm not making this up! I _couldn't_ make this up!)

the frame in which I'm

outside view hobby-horse

standards

cognitive dissonance

smart people clearly know

free-speech norms

I'll be alright. Just not tonight.

maybe 50,000 years from now we'll all be galactic superminds and laugh about all this

(probably tell the story with no external links, only this-blog links)

I'll be alright. Just not tonight. But someday.