But this blog is not about _not_ attacking my friends. This blog is about the truth. For my own sanity, for my own emotional closure, I need to tell the story as best I can. If it's an _incredibly boring and petty_ story about me getting _unreasonably angry_ about philosophy-of-language minutiæ, well, you've been warned. If the story makes me look bad in the reader's eyes (because you think I'm crazy for getting so unreasonably angry about philosophy-of-language minutiæ), then I shall be happy to look bad for _what I actually am_. (If _telling the truth_ about what I've been obsessively preoccupied with all year makes you dislike me, then you probably _should_ dislike me. If you were to approve of me on the basis of _factually inaccurate beliefs_, then the thing of which you approve, wouldn't be _me_.)
So, I've spent basically my entire adult life in this insular little intellectual subculture that was founded in the late 'aughts on an ideal of _systematically correct reasoning_. Starting with the shared canon of knowledge of [cognitive biases](https://www.lesswrong.com/posts/jnZbHi873v9vcpGpZ/what-s-a-bias-again), [reflectivity](https://www.lesswrong.com/posts/TynBiYt6zg42StRbb/my-kind-of-reflection), and [Bayesian probability theory](http://yudkowsky.net/rational/technical/) bequeathed to us by our founder, _we_ were going to make serious [collective](https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline) [intellectual progress](https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible) in a way that had [never been done before](https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/)—and [not just out of a duty towards some philosophical ideal of Truth](https://www.lesswrong.com/posts/XqvnWFtRD2keJdwjX/the-useful-idea-of-truth), but as a result of _understanding how intelligence works_—[the reduction of "thought"](https://www.lesswrong.com/posts/p7ftQ6acRkgo6hqHb/dreams-of-ai-design) to [_cognitive algorithms_](https://www.lesswrong.com/posts/HcCpvYLoSFP4iAqSz/rationality-appreciating-cognitive-algorithms). Intelligent systems that construct predictive models of the world around them—that have "true" "beliefs"—can _use_ those models to compute which actions will best achieve their goals.
Oh, and there was also [this part about](https://intelligence.org/files/AIPosNegFactor.pdf) how [the entire future of humanity and the universe depended on](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) our figuring out how to reflect human values in a recursively self-improving artificial superintelligence. That part's complicated.
What are you looking at me like that for? [It's not a cult!](https://www.lesswrong.com/posts/gBma88LH3CLQsqyfS/cultish-countercultishness)

At least, it [_wasn't_](https://www.lesswrong.com/posts/yEjaj7PWacno5EvWa/every-cause-wants-to-be-a-cult) a cult. I guess I feel pretty naïve now, but—I _actually believed our own propaganda_. I _actually thought_ we were doing something new and special of historical and possibly even _cosmological_ significance.

This does not seem remotely credible to me any more. I should explain. _Not_ because I expect anyone to actually read this melodramatic might-as-well-be-a-Diary-entry, much less change their mind about anything because of it. I should explain for my own mental health. For closure. The sooner I manage to get the Whole Dumb Story _written down_, the sooner I can stop grieving and _move on with my life_. (However many decades that turns out to be. The part about superintelligence eventually destroying the world still seems right; it's just the part about there existing a systematically-correct-reasoning community poised to help save it that seems fake now.)

(A _secondary_ reason for explaining is that it could _possibly_ function as a useful warning to the next guy to end up in a similar situation of trusting the branded systematically-correct-reasoning community to actually be interested in doing systematically correct reasoning, and incurring a lot of wasted effort and pain [making an extraordinary effort](https://www.lesswrong.com/posts/GuEsfTpSDSbXFiseH/make-an-extraordinary-effort) to [try to](https://www.lesswrong.com/posts/XqvnWFtRD2keJdwjX/the-useful-idea-of-truth) correct the situation. But I don't know how common that is.)

I fear the explanation requires some personal backstory about me. I ... almost don't want to tell the backstory, because the thing I've been upset about all year is that I thought a systematically-correct-reasoning community worthy of the brand name should be able to correct a _trivial_ philosophy-of-language error which has nothing to do with me, and it was pretty frustrating when some people seemed to ignore the literal content of my careful, very narrowly scoped, knockdown philosophy-of-language argument, and dismiss me with, "Oh, you're just upset about your personal thing (which doesn't matter)." So part of me is afraid that such a person reading the parts of this post that are about the ways in which I _am_, in fact, _really upset_ about my personal thing (which I _don't_ expect anyone else to care about), might take it as vindication that they were correct to be dismissive of my explicit philosophical arguments (which I _did_ expect others to take seriously).

But I shouldn't let that worry control what I write in _this_ post, because _this_ post isn't about making arguments that might convince anyone of anything: I _already_ made my arguments, and it _didn't work_. _This_ post is about telling the story about that, so that I can finish grieving for the systematically-correct-reasoning community that I _thought_ I had, and make peace with the world I _actually_ live in.

So, some backstory about me. Ever since I was thirteen years old—
(and I _really_ didn't expect to be blogging about this eighteen years later)
(I _still_ don't want to be blogging about this, but it actually turns out to be relevant to the story about trying to correct a philosophy-of-language mistake)
—my _favorite_—and basically only—masturbation fantasy has always been some variation on me getting magically transformed into a woman. I ... need to write more about the phenomenology of this sometime. I don't think the details are that important here? Maybe read the ["Man, I Feel Like a Woman" TV Tropes page](https://tvtropes.org/pmwiki/pmwiki.php/Main/ManIFeelLikeAWoman) and consider that the page wouldn't have so many entries if some male writers didn't have a reason to be _extremely interested_ in _that particular fantasy scenario_.

So, there was that erotic thing, which I was pretty ashamed of at the time, and _of course_ knew that I must never tell a single soul about. (It would have been about three years after the fantasy started that I even worked up the bravery to tell my Diary about it, in the addendum to entry number 53 on 8 March 2005.)

But within a couple years, I also developed this beautiful pure sacred self-identity thing, where I was also having a lot of non-sexual thoughts about being a girl. Just—little day-to-day thoughts. Like when I would write in my pocket notebook as my female analogue. Or when I would practice swirling the descenders on all the lowercase letters that had descenders [(_g_, _j_, _p_, _y_, _z_)](/images/handwritten_phrase_jazzy_puppy.jpg) because I thought it made my handwriting look more feminine. [TODO: another anecdote]
Now, of course I had _heard of_ there being such a thing as transsexualism.
[...]
(I'm avoiding naming anyone in this post even when linking to their public writings, in order to try to keep the _rhetorical emphasis_ on "true tale of personal heartbreak, coupled with sober analysis of the sociopolitical factors leading thereto" even while I'm expressing disappointment with people's performance. This isn't supposed to be a character/reputation attack on my friends and intellectual heroes—I just _need to tell the story_ about why I've been crazy all year so that I can stop grieving and _move on_.)
[...]
[This is something where I _actually need the right answer_]
Ultimately, I think this is a pedagogy decision that Eliezer got right. If you write your summary slogan in relativist language, people predictably take that as license to believe whatever they want without having to defend it. Whereas if you write your summary slogan in objectivist language—so that people know they don't have social permission to say that "it's subjective so I can't be wrong"—then you have some hope of sparking a useful discussion about the exact, precise ways that specific, definite things are, in fact, relative to other specific, definite things.
Great at free speech norms, but there's a level above free speech, where you _converge on the right answer_.

(I cried my tears for three good years; you can't be mad at me.)

a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation

A technical mistake, but a _politically load-bearing_ philosophy mistake.

https://economicsofgender.tumblr.com/post/188438604772/i-vaguely-remember-learning-trans-women-are : "for a while nobody argued about the truth or implications of 'trans women are women.' It would be like arguing over whether, in fact, the birthday boy really gets the first piece of cake."
So, while I have been seeking out a coalition/bandwagon/flag-rally for the past few weeks, I've tried to be pretty explicit about only expecting buy-in for a minimal flag that says, "'I Can Define a Word Any Way I Want' can't be the end of the debate, because choosing to call things different names doesn't change the empirical cluster-structure of bodies and minds in the world; while the same word might legitimately be used with different definitions/extensions in different contexts, the different definitions imply different probabilistic inferences, so banning one definition as hurtful is an epistemic issue that rationalists should notice because it makes it artificially more expensive to express probabilistic inferences that can be expressed concisely with that definition."
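
To make "different definitions imply different probabilistic inferences" concrete, here's a minimal sketch in Python with entirely made-up numbers (a toy one-dimensional trait space, not anyone's real data): a word that picks out one tight cluster licenses sharper predictions than a word whose extension lumps two clusters together, and the cost of banning the narrower word shows up as extra surprisal on every observation.

```python
# Toy model (made-up numbers): the "empirical cluster-structure" point in
# probabilistic terms. A listener who hears a cluster-specific word predicts
# with that cluster's distribution; a listener who only gets the merged word
# has to predict with the mixture of both clusters.

import math

# Two hypothetical clusters along a single trait dimension.
CLUSTER_A = {"mean": 0.0, "stdev": 1.0, "weight": 0.5}
CLUSTER_B = {"mean": 5.0, "stdev": 1.0, "weight": 0.5}

def normal_pdf(x, mean, stdev):
    """Probability density of a normal distribution at x."""
    return math.exp(-((x - mean) ** 2) / (2 * stdev ** 2)) / (stdev * math.sqrt(2 * math.pi))

def surprisal_narrow_word(samples, cluster):
    """Average surprisal (in nats) when the word narrows you to one cluster."""
    return -sum(math.log(normal_pdf(x, cluster["mean"], cluster["stdev"]))
                for x in samples) / len(samples)

def surprisal_merged_word(samples):
    """Average surprisal (in nats) when the word covers both clusters."""
    def mixture(x):
        return sum(c["weight"] * normal_pdf(x, c["mean"], c["stdev"])
                   for c in (CLUSTER_A, CLUSTER_B))
    return -sum(math.log(mixture(x)) for x in samples) / len(samples)

# Observations that actually come from cluster A.
samples = [CLUSTER_A["mean"] + z for z in (-1.5, -0.5, 0.0, 0.5, 1.5)]

print("narrow word:", round(surprisal_narrow_word(samples, CLUSTER_A), 3))
print("merged word:", round(surprisal_merged_word(samples), 3))
# The merged word costs about log(2) ≈ 0.69 extra nats per observation here,
# because the listener can't rule out the faraway cluster.
```

Multiply that extra ~0.69 nats by every message that has to route around the banned word, and "it's just a definition" stops looking free.
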
I do usually mention the two-types model at the same time, because that's where I think the truth is, and it's hard to see the Bayes-structure-of-language problem without concrete examples. (Why is it that only ~3% of women-who-happen-to-be-cis identify as lesbians, but 60% of women-who-happen-to-be-trans do? If you're careful, you can probably find a way to encode the true explanation in a way that doesn't offend anyone. But if you want to be able to point to the truth concisely—in a way that fits in a Tweet, or to an audience that doesn't know probabilistic graphical models—then "Because trans women are men" needs to be sayable. You don't need to say it when it's not relevant or if a non-rationalist who might be hurt by it is in the room, but it can't be unsayable.)
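
As a back-of-the-envelope check on what those numbers buy you inferentially (a sketch: the ~3% and ~60% figures are from the paragraph above, but the base rate is a number I'm assuming purely for illustration):

```python
# Bayes's theorem on the statistics quoted above. The two conditional
# probabilities come from the text; the population base rate is an assumed
# placeholder for illustration, not a measured figure.

P_LESBIAN_GIVEN_CIS = 0.03    # ~3%, from the text
P_LESBIAN_GIVEN_TRANS = 0.60  # ~60%, from the text
P_TRANS = 0.005               # assumed base rate among women (illustrative only)

# Law of total probability, then Bayes's theorem.
p_lesbian = (P_LESBIAN_GIVEN_TRANS * P_TRANS
             + P_LESBIAN_GIVEN_CIS * (1 - P_TRANS))
p_trans_given_lesbian = P_LESBIAN_GIVEN_TRANS * P_TRANS / p_lesbian

print(f"P(lesbian) = {p_lesbian:.4f}")                      # ≈ 0.0329
print(f"P(trans | lesbian) = {p_trans_given_lesbian:.3f}")  # ≈ 0.091
# Under these assumptions, learning "lesbian" moves the posterior probability
# of trans status from 0.5% to about 9%, an eighteen-fold update: exactly the
# kind of concise probabilistic inference that a category ban makes expensive.
```
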
Do I need to be much louder about the "This philosophy-of-language point can be accepted independently of any empirical claims" disclaimer and much quieter about the empirical claims, because literally no one understands disclaimers!?

(I don't think I'd be saying this in the nearby possible world where Scott Siskind didn't have a traumatizing social-justice-shaming experience in college, but it's true here.)

I don't want to fall into the bravery-debate trap of, "Look at me, I'm so heroically persecuted, therefore I'm right (therefore you should have sex with me)."

Strongly agree with this. I have some misgivings about the redpilly coalition-seeking I've been doing recently. My hope has been that it's possible to apply just enough "What the fuck kind of rationalist are you?!" social pressure to cancel out the "You don't want to be a Bad (Red) person, do you??" social pressure and thereby let people look at the arguments. I don't know if that actually works.

"Moshe": "People rightly distrust disclaimers and nearly no one except me & Michael can say so instead of acting like it's common knowledge with people who don't fully know this."

Standards! https://srconstantin.wordpress.com/2018/12/24/contrite-strategies-and-the-need-for-standards/