... and, really, that _should_ have been the end of the story. Not much of a story at all. If I hadn't been further provoked, I would have still kept up this blog, and I still would have ended up arguing about gender with people occasionally, but this personal obsession of mine wouldn't have been the occasion of a full-on robot-cult religious civil war involving other people who had much more important things to do with their time.
The _casus belli_ for the religious civil war happened on 28 November 2018. I was at my new dayjob's company offsite event in Austin. Coincidentally, I had already spent much of the afternoon arguing trans issues with other "rationalists" on Discord.

[TODO: review Discord logs; email to Dad suggests that offsite began on the 26th, contrasted to first shots on the 28th]
Just that month, I had started a Twitter account in my own name, inspired in an odd way by the suffocating [wokeness of the open-source software scene](/2018/Oct/sticker-prices/) where I [occasionally contributed diagnostics patches to the compiler](https://github.com/rust-lang/rust/commits?author=zackmdavis). My secret plan/fantasy was to get more famous and established in that world (compiler team membership, or a conference talk accepted, preferably both), get some corresponding Twitter followers, and _then_ bust out the [@BlanchardPhD](https://twitter.com/BlanchardPhD) retweets and links to this blog. In the median case, absolutely nothing would happen (probably because I failed at being famous), but I saw an interesting tail of scenarios in which I'd get to be a test case in [the Code of Conduct wars](https://techcrunch.com/2016/03/05/how-we-may-mesh/).
If our vaunted rationality techniques resulted in me having to spend dozens of hours patiently explaining why I didn't think that I was a woman and that [the person in this photograph](https://daniellemuscato.startlogic.com/uploads/3/4/9/3/34938114/2249042_orig.jpg) wasn't a woman, either (where "isn't a woman" is a _convenient rhetorical shorthand_ for a much longer statement about [naïve Bayes models](https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes) and [high-dimensional configuration spaces](https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace) and [defensible Schelling points for social norms](https://www.lesswrong.com/posts/Kbm6QnJv9dgWsPHQP/schelling-fences-on-slippery-slopes)), then our techniques were _worse than useless_.
[If Galileo ever muttered "And yet it moves"](https://en.wikipedia.org/wiki/And_yet_it_moves), there's a long and nuanced conversation you could have about the consequences of using the word "moves" in Galileo's preferred sense, or some other sense that happens to result in the theory needing more epicycles. It may not have been obvious in November 2014, but in retrospect, _maybe_ it was a _bad_ idea to build a [memetic superweapon](https://archive.is/VEeqX) that says that the number of epicycles _doesn't matter_.
And the reason to write this as a desperate email plea to Scott Alexander when I could be working on my own blog, was that I was afraid that marketing is a more powerful force than argument. Rather than good arguments propagating through the population of so-called "rationalists" no matter where they arise, what actually happens is that people like Alexander and Yudkowsky rise to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else sort-of absorbs some of their worldview (plus noise and [conformity with the local environment](https://thezvi.wordpress.com/2017/08/12/what-is-rationalist-berkleys-community-culture/)). So for people who didn't [win the talent lottery](http://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) but think they see a flaw in the _Zeitgeist_, the winning move is "persuade Scott Alexander."
Michael said that it seemed important that, if we thought Yudkowsky wasn't interested, we should have common knowledge among ourselves that we consider him to be choosing to be a cult leader.
I settled on Sara Bareilles's ["Gonna Get Over You"](https://www.youtube.com/watch?v=OUe3oVlxLSA) as my breakup song with Yudkowsky and the rationalists, often listening to [a cover of it](https://www.youtube.com/watch?v=emdVSVoCLmg) on loop to numb the pain. ("And I tell myself to let the story end / And my heart will rest in someone else's hand"—Michael Vassar's.)
Meanwhile, my email thread with Scott got started back up again, although I wasn't expecting anything public to come out of it. I expressed some regret that all the times I had emailed him over the past couple years had been when I was upset about something (like psych hospitals, or—something else) and wanted something from him, which was bad, because it was treating him as a means rather than an end—and then, despite that regret, continued prosecuting the argument.
_I_ had been hyperfocused on prosecuting my Category War, but the reason Michael and Ben and Jessica were willing to help me out on that, was not because they particularly cared about the gender and categories example, but because it seemed like a manifestation of a _more general_ problem of epistemic rot in "the community".
Ben had [previously](http://benjaminrosshoffman.com/givewell-and-partial-funding/) [written](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/) a lot [about](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/) [problems](http://benjaminrosshoffman.com/against-responsibility/) [with](http://benjaminrosshoffman.com/against-neglectedness/) Effective Altruism. Jessica had had a bad time at MIRI, as she had told me back in March, and would [later](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam) [write](https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe) [about](https://www.lesswrong.com/posts/pQGFeKvjydztpgnsY/occupational-infohazards). To what extent were my thing, and Ben's thing, and Jessica's thing, manifestations of "the same" underlying problem? Or had we all become disaffected with the mainstream "rationalists" for our own idiosyncratic reasons, and merely randomly fallen into each other's, and Michael's, orbit?
I believed that there _was_ a real problem, but didn't feel like I had a good grasp on what it was specifically. Cultural critique is a fraught endeavor: if someone tells an outright lie, you can, maybe, with a lot of effort, prove that to other people, and get a correction on that specific point. (Actually, as we had just discovered, even that might be too much to hope for.) But _culture_ is the sum of lots and lots of little micro-actions by lots and lots of people. If your _entire culture_ has visibly departed from the Way that was taught to you in the late 'aughts, how do you demonstrate that to people who, to all appearances, are acting like they don't remember the old Way, or like they don't think anything has changed, or like they notice some changes but think the new way is better? It's not as simple as shouting, "Hey guys, Truth matters!"—any ideologue or religious person would agree with _that_.
Ben called it the Blight, after the rogue superintelligence in _A Fire Upon the Deep_: the problem wasn't that people were getting dumber; it's that there was locally coherent coordination away from clarity and truth and towards coalition-building, which was validated by the official narrative in ways that gave it a huge tactical advantage; people were increasingly making decisions that were better explained by their political incentives rather than acting on coherent beliefs about the world.
When I asked him for specific examples of MIRI or CfAR leaders behaving badly, he gave the example of [MIRI executive director Nate Soares posting that he was "excited to see OpenAI joining the space"](https://intelligence.org/2015/12/11/openai-and-other-news/), despite the fact that [_no one_ who had been following the AI risk discourse](https://slatestarcodex.com/2015/12/17/should-ai-be-open/) [thought that OpenAI as originally announced was a good idea](http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/). Nate had privately clarified to Ben that the word "excited" wasn't necessarily meant positively, and in this case meant something more like "terrified."
This seemed to me like the sort of thing where a particularly principled (naive?) person might say, "That's _lying for political reasons!_ That's _contrary to the moral law!_" and most ordinary grown-ups would say, "Why are you so upset about this? That sort of strategic phrasing in press releases is just how the world works, and things could not possibly be otherwise."
I thought explaining the Blight to an ordinary grown-up was going to need _either_ lots of specific examples that were way more egregious than this (and more egregious than the examples in ["EA Has a Lying Problem"](https://srconstantin.github.io/2017/01/17/ea-has-a-lying-problem.html) or ["Effective Altruism Is Self-Recommending"](http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/)), or somehow convincing the ordinary grown-up why "just how the world works" isn't good enough, and why we needed one goddamned place in the entire goddamned world (perhaps a private place) with _unusually high standards_.
The schism introduced new pressures on my social life. On 20 April, I told Michael that I still wanted to be friends with people on both sides of the factional schism (in the frame where recent events were construed as a factional schism), even though I was on this side. Michael said that we should unambiguously regard Anna and Eliezer as criminals or enemy combatants (!!), who could claim no rights with regard to me or him.
> If we can't even get a public consensus from our _de facto_ leadership on something _so basic_ as "concepts need to carve reality at the joints in order to make probabilistic predictions about reality", then, in my view, there's _no point in pretending to have a rationalist community_, and I need to leave and go find something else to do (perhaps whatever Michael's newest scheme turns out to be). I don't think I'm setting [my price for joining](https://www.lesswrong.com/posts/Q8evewZW5SeidLdbA/your-price-for-joining) particularly high here?
And as it happened, on 4 May, Yudkowsky [reTweeted Colin Wright on the "univariate fallacy"](https://twitter.com/ESYudkowsky/status/1124751630937681922)—the point that group differences aren't a matter of any single variable—which was _sort of_ like the clarification I had been asking for. (Empirically, it made me feel a lot less personally aggrieved.) Was I wrong to interpret this as another "concession" to me? (Again, notwithstanding that the whole mindset of extracting "concessions" was corrupt and not what our posse was trying to do.)
Separately, I visited some friends' house on 30 April saying, essentially (and sincerely), "[Oh man oh jeez](https://www.youtube.com/watch?v=NivwAQ8sUYQ), Ben and Michael want me to join in a rationalist civil war against the corrupt mainstream-rationality establishment, and I'd really rather not, and I don't like how they keep using scary hyperbolic words like 'cult' and 'war' and 'criminal', but on the other hand, they're _the only ones backing me up_ on this _incredibly basic philosophy thing_ and I don't feel like I have anywhere else to _go_." The ensuing group conversation made some progress, but was mostly pretty horrifying.
In an adorable twist, my friends' two-year-old son was reportedly saying the next day that Kelsey doesn't like his daddy, which was confusing until it was figured out he had heard Kelsey talking about why she doesn't like Michael _Vassar_.
And as it happened, on 7 May, Kelsey wrote [a Facebook comment displaying evidence of understanding my point](https://www.facebook.com/julia.galef/posts/pfbid0QjdD8kWAZJMiczeLdMioqmPkRhewcmGtQpXRBu2ruXq8SkKvw5yvvSH2cWVDghWRl?comment_id=10104430041947222&reply_comment_id=10104430059182682).
These two datapoints led me to a psychological hypothesis (which was maybe "obvious", but I hadn't thought about it before): when people see someone wavering between their coalition and a rival coalition, they're motivated to offer a few concessions to keep the wavering person on their side. Kelsey could _afford_ (_pace_ Upton Sinclair) to not understand the thing about sex being a natural category ("I don't think 'people who'd get surgery to have the ideal female body' cuts anything at the joints"!!) when it was just me freaking out alone, but "got it" almost as soon as I could credibly threaten to _walk_ (defect to a coalition of people she dislikes) ... and maybe my "closing thoughts" email had a similar effect on Yudkowsky (assuming he otherwise wouldn't have spontaneously tweeted something about the univariate fallacy two weeks later)?? This probably wouldn't work if you repeated it (or tried to do it consciously)?
[TODO: plan to reach out to Rick]
[TODO:
Scott asked me who to ask about survey design on 20 December, I recommended Tailcalled
https://slatestarcodex.com/2019/12/30/please-take-the-2020-ssc-survey/
https://slatestarcodex.com/2020/01/20/ssc-survey-results-2020/

Scott replies on 21 December https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist?commentId=LJp2PYh3XvmoCgS6E
on Christmas Eve, I gave in to the urge to blow up at him

https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist?commentId=xEan6oCQFDzWKApt7
https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist?commentId=wFRtLj2e7epEjhWDH
https://www.lesswrong.com/posts/bSmgPNS6MTJsunTzS/maybe-lying-doesn-t-exist?commentId=8DKi7eAuMt7PBYcwF

> Like, sibling comments are _very_ not-nice, but I argue that they meet the Slate Star commenting policy guidelines on account of being both true and necessary.
December tussle with Scott, and, a Christmas party—
Christmas party
playing on a different chessboard
Having considered all this, here's what I think I can say: I spent many hours in the first half of 2020 working on a private Document about a disturbing hypothesis that had occurred to me.
Previously, I had _already_ thought it was nuts that trans ideology was exerting influence on the rearing of gender-non-conforming children, that is, children who are far outside the typical norm of _behavior_ (_e.g._, social play styles) for their sex: very tomboyish girls and very feminine boys. Under recent historical conditions in the West, these kids were mostly "pre-gay" rather than trans. (The stereotype about lesbians being masculine and gay men being feminine is, like most stereotypes, basically true: sex-atypical childhood behavior between gay and straight adults [has been meta-analyzed at _d_ ≈ 1.31 for men and _d_ ≈ 0.96 for women](/papers/bailey-zucker-childhood_sex-typed_behavior_and_sexual_orientation.pdf).) A solid supermajority of children diagnosed with gender dysphoria [ended up growing out of it by puberty](/papers/steensma_et_al-factors_associated_with_desistence_and_persistence.pdf). In the culture of the current year, it seemed likely that a lot of those kids would get affirmed into a cross-sex identity (and into being a lifelong medical patient) much earlier, even though most of them would have otherwise (under [a "watchful waiting" protocol](/papers/de_vries-cohen-kettenis-clinical_management_of_gender_dysphoria_in_children.pdf)) grown up to be ordinary gay men and lesbians.
What made this crazy, in my view, was not just that child transition is a dubious treatment decision, but that it's a dubious treatment decision made on the basis of the obvious falsehood that "trans" was one thing: the cultural phenomenon of "trans kids" was being used to legitimize trans _adults_, even though the vast supermajority of trans adults were in the AGP taxon and therefore _had never resembled_ these HSTS-taxon kids. That is: pre-gay kids are being sterilized in order to affirm the narcissistic delusions of _guys like me_.
At this point, some people would argue that I'm being too uncharitable in harping on the "not liking to be tossed into a [...] Bucket" paragraph. The same post does _also_ explicitly say that "[i]t's not that no truth-bearing propositions about these issues can possibly exist." I _agree_ that there are some interpretations of "not lik[ing] to be tossed into a Male Bucket or Female Bucket" that make sense, even though biological sex denialism does not make sense. Given that the author is Eliezer Yudkowsky, should I not give him the benefit of the doubt and assume that he "really meant" to communicate the reading that does make sense, rather than the one that doesn't make sense?
I reply: _given that the author is Eliezer Yudkowsky_, no, obviously not. I have been ["trained in a theory of social deception that says that people can arrange reasons, excuses, for anything"](https://www.glowfic.com/replies/1820866#reply-1820866), such that it's informative ["to look at what _ended up_ happening, assume it was the _intended_ result, and ask who benefited."](http://www.hpmor.com/chapter/47) Yudkowsky is just _too talented of a writer_ for me to excuse his words as an accidental artifact of unclear writing. Where the text is ambiguous about whether biological sex is a real thing that people should be able to talk about, I think it's _deliberately_ ambiguous. When smart people act dumb, it's often wise to conjecture that their behavior represents [_optimized_ stupidity](https://www.lesswrong.com/posts/sXHQ9R5tahiaXEZhR/algorithmic-intent-a-hansonian-generalized-anti-zombie)—apparent "stupidity" that achieves a goal through some other channel than their words straightforwardly reflecting the truth. Someone who was _actually_ stupid wouldn't be able to generate text with a specific balance of insight and selective stupidity fine-tuned to reach a gender-politically convenient conclusion without explicitly invoking any controversial gender-political reasoning. I think the point of the post is to pander to the biological sex denialists in his robot cult, without technically saying anything unambiguously false that someone could point out as a "lie."
Consider the implications of Yudkowsky giving a clue as to the political forces at play in the form of [a disclaimer comment](https://www.facebook.com/yudkowsky/posts/10159421750419228?comment_id=10159421833274228):
]
[TODO: sneering at post-rats; David Xu interprets criticism of Eliezer as me going "full post-rat"?! 6 September 2021
> Also: speaking as someone who's read and enjoyed your LW content, I do hope this isn't a sign that you're going full post-rat. It was bad enough when QC did it (though to his credit QC still has pretty decent Twitter takes, unlike most post-rats).
You've got to be strong to survive in the [O-ring sector](https://en.wikipedia.org/wiki/O-ring_theory_of_economic_development)
(I can actually see the multiplicative "tasks are intertwined and have to all happen at high reliability in order to create value" thing playing out in the form of "if I had fixed this bug earlier, then I would have less manual cleanup work", in contrast to the "making a bad latte with not enough foam, that doesn't ruin any of the other customers' lattes" from my Safeway-Starbucks-kiosk days)

------

Discord messaging with Scott in October 2021—

However subjectively sincere you are, I kind of have to read your comments as hostile action in the rationalist civil war. (Your claim that "it's fair for the community to try to defend itself" seems to suggest you agree that this is a somewhat adversarial conversation, even if you think Jessica shot first.) A defense lawyer has an easier job than a rationalist—if the prosecution makes a terrible case, you can just destroy it, without it being your job to worry about whether your client is separately guilty of vaguely similar crimes (that the incompetent prosecution can't prove).

[context for "it's fair for the community to defend itself"—
https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=qsEMmdo6DKscvBvDr

> I more or less Outside View agree with you on this, which is why I don't go around making call-out threads or demanding people ban Michael from the community or anything like that (I'm only talking about it now because I feel like it's fair for the community to try to defend itself after Jessica attributed all of this to the wider community instead of Vassar specifically) "This guy makes people psychotic by talking to them" is a silly accusation to go around making, and I hate that I have to do it!
]

[note: the comment this is quoting is at 92 karma in 40 votes]
> hey just responded with their "it's correct to be freaking about learning your entire society is corrupt and gaslighting" shtick.

I will absolutely bite this bullet. You once wrote a parable (which I keep citing) about [a Society in which it becomes politically fashionable to claim that thunder comes before lightning](https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-and-the-parable-of-lightning/). Imagine living in that Society in the brief period where the taboo is being established. Imagine taking epistemic rationality seriously, thinking that your friends and community leaders take it seriously, and trying to make sense of all of them, seemingly in lockstep, suddenly asserting that thunder comes before lightning, and acting like this is perfectly normal. When you point out that no one believed this ten years ago and ask what new evidence came in, they act like they don't remember. When you pull out videos and textbooks to argue that actually, lightning comes before thunder, they dodge with, "Well, it depends on what you mean by the word 'before.'" (Technically, true!)

Eventually, you would get used to it, but at first, I think this would be legitimately pretty upsetting! If you were already an emotionally fragile person, it might even escalate to a psychiatric emergency through the specific mechanism "everyone I trust is inexplicably lying about lightning → stress → sleep deprivation → temporary psychosis". (That is, it's not that Society being corrupt directly causes mental illness—that would be silly—but confronting a corrupt Society is very stressful, and that can [snowball into](https://lorienpsych.com/2020/11/11/ontology-of-psychiatric-conditions-dynamic-systems/) things like lost sleep, and sleep is [actually really](https://www.jneurosci.org/content/34/27/9134.short) [biologically important](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6048360/).)

This is a pretty bad situation to be in—to be faced with the question, "Am I crazy, or is everyone else crazy?" But one thing that would make it slightly less bad is if you had a few allies, or even just one ally—someone to confirm that the obvious answer, "It's not you," is, in fact, obvious. But in a world where everyone who's anyone agrees that thunder comes before lightning—including all the savvy consequentialists who realize that being someone who's anyone is an instrumentally convergent strategy for acquiring influence—anyone who would be so imprudent as to take your everyone-is-lying-about-lightning concerns seriously would have to be someone with ... a nonstandard relationship to social reality. Someone meta-savvy to the process of people wanting to be someone who's anyone. Someone who, honestly, is probably some kind of major asshole. Someone like—Michael Vassar!

From the perspective of an outside observer playing a Kolmogorov-complicity strategy, your plight might look like "innocent person suffering from mental illness in need of treatment/management", and your ally as "bad influence who is egging the innocent person on for their own unknown but probably nefarious reasons". If that outside observer chooses to draw the category boundaries of "mental illness" appropriately, that story might even be true. So why not quit making such a fuss, and accept treatment? Why fight, if fighting comes at a personal cost? Why not submit?

I have my answer. But I'm not sure you'd understand.

+> I disagree with your assessment that your joining the Vassarites wasn't harmful to you
+
+As compared to what? In the counterfactual where Michael vanished from the world in 2016, I think I would have been just as upset about the same things for the same reasons, but with fewer allies and fewer ideas to make sense of what was going on in my social environment.
+
+I've got to say, it's really obnoxious when people have tried to use my association with Michael to try to discredit the content of what I was saying—interpreting me as Michael's pawn.
+Gwen, one of the "Zizians", in a blog post about her grievances against CfAR, has [a section on "Attempting to erase the agency of everyone who agrees with our position"](https://everythingtosaveit.how/case-study-cfar/#attempting-to-erase-the-agency-of-everyone-who-agrees-with-our-position), complaining about how people try to cast her and Somni and Emma as Ziz's minions, rather than acknowledging that they're separate people with their own ideas who had good reasons to work together. I empathized a lot with this. My thing, and separately Ben Hoffman's [thing about Effective Altruism](http://benjaminrosshoffman.com/drowning-children-rare/), and separately Jessica's thing in the OP, don't really have a whole lot to do with each other, except as symptoms of "the so-called 'rationalist' community is not doing what it says on the tin" (which itself isn't a very specific diagnosis). But insofar as our separate problems did have a hypothesized common root cause, it made sense for us to talk to each other and to Michael about them.
+Was Michael using me, at various times? I mean, probably. But just as much, _I was using him_: from my perspective, I was leaning on Michael for social proof for high-variance actions that I wouldn't have been socially brave enough to take unilaterally—but because Michael had my back, and because my brain still had him tagged as a "rationalist" Authority figure from back in the Overcoming Bias days, that made it okay. Particularly with the November 2018–April 2019 thing (where I, with help from Michael and Ben and Sarah and later Jessica, kept repeatedly pestering you and Eliezer to clarify that categories aren't arbitrary), which started because I flipped the fuck out when I saw [Eliezer's "hill of meaning in defense of validity" Twitter performance](https://twitter.com/ESYudkowsky/status/1067185907843756032). That was the "Vassarites" doing an _enormous_ favor for _me_ and _my_ agenda. (If Michael and crew hadn't been there for me, I wouldn't have been anti-social enough to keep escalating.) And you're really going to try to get away with claiming that they were making my situation worse? That's absurd. Have you no shame?
+
+
+
+I just wish you'd notice that it is a choice, and that there is a conflict: the lightning-first dissidents in your parable absolutely should not be applying [mistake theory](https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) or the principle of charity to smart thunder-first interlocutors like Kolmogorov, because you've specified in the story that Kolmogorov isn't being intellectually honest! The theory that Kolmogorov is making mere "honest mistakes" makes bad predictions, because actually-honest cognitive mistakes would be random, rather than systematically favoring the orthodoxy.
+
+Eliezer was offering a vision of a new mental martial art of systematically correct reasoning: "rationalist" in the sense used in the Sequences meant someone who studies rationality (like [Robyn Dawes](https://www.amazon.com/gp/product/0155752154/) or E.T. Jaynes), the way "physicist" means someone who studies physics.
+
+Contrast that to how, in the Alicorner conversation the other day, Kelsey expresses anger at me over ["the thought that there are people out there who wanted to be rationalists and then their experience of the rationalist community was relentlessly being told that trans women are actually men [...] and are traumatized by the entire experience of interacting with us"](https://discord.com/channels/401181628015050773/458419017602826260/899705780402552852). I think this sense of "wanting to be a rationalist" (wanting to belong to the social cluster) and making sure that "we" don't alienate potential members should be seen as obviously non-normative. I don't care about the movement except insofar as it succeeds in advancing the ideology.
+
+See Eli (with many emoji-agreements) in the Alicorner conversation arguing that I shouldn't have taken "DMs open for questions" literally on the presumption that once someone has declared that they "are trans", theory is unwelcome. I appreciate that this crowd is much more intellectually tolerant than people who are actually woke. But insane religious fanatics who "merely" want heretics to know their place (as opposed to wanting to hurt or exile them) are still insane religious fanatics. How do you live like this?!
+
+But those are cases of different conceptual boundaries (which represent probabilistic inferences about the real world) being conventionally attached to the same word/symbol, such that you should disambiguate which concept is meant when it's not clear from context. If you do the disambiguation, you're not accepting an unexpected X deep inside the conceptual boundaries of what would normally be considered Y; you just have two different conceptual boundaries that both use the symbol "Y".
+
+And the persistent lack of clarity on this point is very obviously motivated. When people like Quillette editor Colin Wright say, "Look, I'm not questioning your 'gender', whatever that is; I'm just saying that this female {locker room/prison/sports league/&c.} is segregated on the basis of sex, not gender", then the people who praise you for writing "... Not Man for the Categories" are not satisfied.
+
+The reason the bitterness is there is that our marketplace of ideas is rigged, the marketplace owners know it's rigged, and will admit to it in private, and even in public (as in "Kolmogorov Complicity") if you ask the right way—but the "rationalist" and "EA" brand names and marketing machines are still out there sucking up attention and resources based on claims to excellence that everyone in the know actually _agrees_ are false! (And people are still citing "... Not Man for the Categories" without noticing that the edit-note's admission completely undermines sections IV.–VI.) I spent four years of my life arguing that you were wrong about something in increasing technical detail, and Yudkowsky still says you're ["always right"](https://twitter.com/ESYudkowsky/status/1434906470248636419)! Obviously, I get that it's not your fault that you write faster than everyone else and Yudkowsky is an arrogant prick. But it's still not fair, and it still stings, and I don't think it's Michael's fault that I feel this way, even if I used specific framings (marketing machines, &c.) that came from conversations involving him (would it be better if I felt the same way, but were less articulate about why?), and he was my political ally for a couple years. (I haven't even talked to the guy in ten months!! My last experience with him was actually very bad!)
+
+And when Jessica writes a post to talk about her experiences, it seems messed up that you barge in to make it all about Michael, get the top-voted comment (341 points!), and Yudkowsky boosts you as a reliable narrator ("leaving out the information from Scott's comment about a MIRI-opposed person who is [...] causing psychotic breaks in people"), and the voting on the post turns around (it was [at 143 karma in 97 votes](https://archive.md/MJHom), and now it's down to 50 karma in 162 votes). This looks like raw factional conflict to me: Jessica had some negative-valence things to say about mainstream "rationalist" institutions, so the mainstream rationalist leaders move in to discredit her by association.
+
+You know, I also had a sleep-deprivation-induced-psychotic-break-with-hospitalization in February 2013, and shortly thereafter, I remember Anna remarking that I was sounding a lot like Michael. But I hadn't been talking to Michael at all beforehand! (My previous email conversation with him had been in 2010.) So what could Anna's brain have been picking up on, when she said that? My guess: there's some underlying dimension of psychological variation (psychoticism? bipolar?—you tell me; this is supposed to be your professional specialty) where Michael and I were already weird/crazy in similar ways, and sufficiently bad stressors can push me further along that dimension (enough for Anna to notice). Are you also going to blame Yudkowsky for making people autistic?
+
+"A lesson is learned but the damage is irreversible." https://www.alessonislearned.com/
+
+Take all what, specifically?! I briefly told you about a specific incident when Michael and Jessica and Jack were super-mean to me, in response to my perceived negligence during an unusual crisis situation. I do think this should be a negative update about Michael's and Jessica's and Jack's character and judgement. (Even if being super-mean to me would have been justified if it had helped Sasha, I don't think it actually helped or could have been expected to help, and the timescale-of-a-month effects of being super-mean to me turned out to be pretty bad for me because of my preexisting psych vulnerabilities.) That doesn't mean you can assume Sarah is being harmed just by having any social connection to them! (Ask Sarah, if you must!)
+
+I messaged Sarah the next day to talk about what happened. When I mentioned that I was rethinking my earlier vague intention to move to New York to be closer to the Vassarites and to her, "because they (everyone, but Ben being relatively more merciful) are super-quick to escalate the moment their ideology says that you defected first", she replied, "I like em but I could see how being their roommate would be a pain."
+
+[...]
+
+But you know, I also think there are reasons that getting too close to the Valinorians could be bad for someone (in subtler, lower-variance ways). Michael will yell at you and give you terrible drug advice, but at least I can Actually Talk to him; Kelsey will slowly destroy your sanity with superficial "niceness" carefully engineered to block all substantive communication. (Perhaps the best illustration of what I mean is that after typing the previous sentence, I felt an impulse to add "Don't tell Kelsey I said that," whereas Michael you can tell anything. For that, I can forgive a lot. I don't necessarily expect you to understand, but do try to have some empathy that being super-mean over the phone during a crisis is a lesser crime on this side of the trenches than on yours.) In both cases, I want to remain on good terms, without confusing the relationship I actually have with the one I wished I had.
+
+
+Insofar as I'm skeptical of Jessica's narrative, I suspect she's not putting nearly enough emphasis on the acid + weed (!) as a direct causal factor. But I don't think that invalidates everything she has to say about the culture of you guys who aren't us (anymore) and who we don't like (as much anymore). "MIRI's paranoid internal culture did a number on Jessica psychologically, to which there are structural parallels to Zoe's report about Leverage" and "subsequently, trying acid made things much worse" and "subsequently, Michael's recommending weed made things even worse" could all be true at the same time.
+
+I agree that this is a real concern. (I was so enamored with Yudkowsky's philosophy-of-science writing that there was no chance of me bouncing on account of the sexism that I perceived, but I wasn't the marginal case.) There are definitely good reasons to tread carefully when trying to add sensitive-in-our-culture content to the shared map. But I don't think treading carefully should take precedence over getting the goddamned right answer.
+
+For an example of what I think treading-carefully-but-getting-the-goddamned-right-answer looks like, I'm really proud of my review of Charles Murray's 2020 _Human Diversity_: http://unremediatedgender.space/2020/Apr/book-review-human-diversity/
+I'm definitely not saying, Emil Kirkegaard-style, "the black/white IQ gap is genetic, anyone who denies this is a mind-killed idiot, deal with it." Rather, first I review the Science in the book, and then I talk about the politics surrounding Murray's reputation and the technical reasons for believing that the gap is real and partly genetic, and then I go meta on the problem and explain why it makes sense that political forces make this hard to talk about. I think this is how you go about mapping the territory without being a moral monster with respect to your pre-Dark Enlightenment morality. (And Emil was satisfied, too: https://twitter.com/KirkegaardEmil/status/1425334398484983813)
+
+I think this should just be normal, and I'm willing to fight for a world in which this is normal. When something is hard to talk about, you carefully flag the reasons that it's hard to talk about, and then you say the thing. If you don't get to the part where you say the thing, then your shared map doesn't reflect the territory!!
+
+It's ahistorical to talk about "the rationalist community" in 2007. There was a blog where Eliezer Yudkowsky and Robin Hanson wrote about rationality as a subject matter. No one thought the subject matter of reasoning itself was the exclusive property of any particular social group that could have a "caliph." That would be crazy!
+
+I confess, in my frustration at this, sometimes I indulge in the vice of flashy rhetoric that sounds like "Everyone is lying!" (which is surely false), rather than reaching for something wordier like, "This is an astoundingly hostile information environment!" (which is surely true). The reason it's tempting is because I think the outcome of a sufficiently hostile information environment looks a lot like "Everyone is lying", even if the psychological mechanisms are different.
+
+-----
+
+Discord with Scott December 2019
+
+> Don't answer if it would be too unpleasant, but - I'm interested in asking some questions to test the autogynephilia hypothesis on the next SSC survey. Do you have any suggestions beyond the obvious? Also, do you know any intelligent and friendly opponent of the hypothesis who I could ask the same question to, to make sure I'm getting advice from both sides?
+
+20 December
+Hi, Scott! You shouldn't worry about answering being unpleasant for me—speech is thought, and thinking is good! I am actively at war with the socio-psychological forces that make people erroneously think that talking is painful!
+
+-----
+
+zackmdavis — 12/24/2019 21:50
+okay, maybe speech is sometimes painful
+the _Less Wrong_ comment I just left you is really mean
+and you know it's not because I don't like you
+you know it's because I'm genuinely at my wit's end
+after I posted it, I was like, "Wait, if I'm going to be this mean to Scott, maybe Christmas Eve isn't the best time?"
+it's like the elephant in my brain is gambling that by being socially aggressive, it can force you to actually process information about philosophy which you otherwise would not have an incentive to
+I hope you have a merry Christmas
+zackmdavis — 12/24/2019
+oh, I guess we're Jewish
+that attenuates the "is a hugely inappropriately socially-aggressive blog comment going to ruin someone's Christmas" fear somewhat
+
+---------
+
+ScottAlexander — 12/25/2019
+Eh, I know we've been over this a thousand times before, but I still feel like we disagree and can't pinpoint the exact place where we diverge and so I'm reduced to vomiting every step of my reasoning process out in the hopes that you say "That step! That's the step I don't like!". Most of the reasoning process will be stuff you know and agree with and I'm sorry.
+
+My confusion is something like - I think of "not trying to replicate God's dictionary" as equivalent to "attempts to design language are a pragmatic process rather than a 100% truth-seeking process". But if you agree that designing language is a partly pragmatic process, then you can't object to appeals to consequence in it, since appealing to consequence is what you're supposed to do in a pragmatic process. I'm sorry if I made that point in a way that insulted your intelligence, but I don't understand how, even though we agree on all the premises, we keep getting opposite conclusions, and I don't have a great solution other than vomiting more words.
+
+Probably you are overestimating my intelligence and ability to understand things, I'm sorry. I continue to regret that something like this has come between us, and I really like you, and if there were something I could do that would prevent us from constantly being at loggerheads and prevent me from constantly offending you and making you feel like I am insulting and misunderstanding you I would, but I just really think I'm right about this, and so far haven't been able to figure out why I might not be, and don't like pretending not to believe something when I do believe it.
+ScottAlexander — 12/25/2019
+And I feel like thanks to you getting involved in the Group It Would Be Paranoid Of Me To Blame All Bad Things In The Rationalist Community On, whenever I try to sort this out with you you accuse me of playing dumb and mentally destabilizing you and ruining your day and whatever, and so I don't do it. I don't know how to convince you that I actually think I'm right and am trying to do the right thing as a rationalist and this isn't all part of an attempt to destroy you.
+zackmdavis — 12/25/2019
+Thanks and merry Christmas 💖 🎅 ; I think I see some productive things I can write that haven't already been written yet that do a better job of pinpointing the step
+ScottAlexander — 12/25/2019
+I reread your Boundary post and I honestly think we disagree on the f**king dolphins.
+I just 100% literally believe that a fish group including dolphins is exactly as good as one that doesn't, whereas it seems like you are taking it as a preliminary obvious step that this is worse.
+Am I understanding you right?
+zackmdavis — 12/25/2019
+oh, that's interesting
+my reply is that the fish group including dolphins is going to be a "thinner" (lower-dimensional) subspace of configuration space
+at least, compared to some alternatives
+ScottAlexander — 12/25/2019
+Imagine a superdolphin that has had 100 million more years to evolve to live in the sea and converge to sea-optimal, so that it is identical to fish in every way (eg scales, cold-blooded, etc) but still has tiny bits of mammalian DNA that indicate it is, by evolutionary relationship, a mammal. Do you agree that this could reasonably be classified as a fish?
+zackmdavis — 12/25/2019
+yes, but real dolphins are predictably not superdolphins. Again, I use the configuration space visual metaphor for everything: if the superdolphin cluster overlaps with the fish cluster along all dimensions except the "these-and-such tiny regions of the genome" axis, then I want to put superdolphins and fish in the same category (unless I have some special reason to care about those-and-such tiny regions of the genome), but evolutionary relatedness is probably going to be a thicker/more-robust subspace, because DNA is the "root" of the causal graph that produces morphology &c.
+ScottAlexander — 12/25/2019
+Huh, it seems plausible to me that the number of characteristics dolphins share with fish (fluked tails, smooth bodies, fins, same approximate shape, live in water) is larger than the number they share with mammals (tiny irrelevant hairs, bears live young but I think some fish do too, warm-blooded but so are some fish)
+Does it seem to you that somewhere between dolphins (which you classify as obviously mammals) and superdolphins (which you classify as plausibly fish) there's a broad zone where reasonable people could disagree about whether the animal should be classified as a mammal or fish?
+zackmdavis — 12/25/2019
+in principle, yes, but the tendency I'm at war with is people saying, "Whelp, categories can't be wrong, so there's nothing stopping me from using pragmatic considerations like politeness as a tiebreaker", which seems like a memetic-superweapon for ignoring any category difference you don't like as long as you can find some superficial similarity in some subspace (sorry for the linear algebra jargon; this is actually how I think about it)
+"Categories can't be wrong; language is pragmatic" is an appeal-to-arbitrariness conversation-halter (https://www.lesswrong.com/posts/wqmmv6NraYv4Xoeyj/conversation-halters), when the correct move is to figure out which subspace is more relevant to your goals
+ScottAlexander — 12/25/2019
+Forget what tendencies we're at war with for now, I want to figure out where we actually differ.
+zackmdavis — 12/25/2019
+thanks, that's right
+ScottAlexander — 12/25/2019
+I'm thinking one possible crux is something like "you believe thingspace is a literal space with one correct answer, even though puny human intelligences cannot perfectly map it, whereas I believe thingspace is a vague metaphor that at best we can agree on a few things that would be obviously true about it". Is that actually a crux of ours?
+zackmdavis — 12/25/2019
+it's getting close
+it's not a literal space, but
+ScottAlexander — 12/25/2019
+Or, like, if dolphins were at the exact spot between real dolphins and superdolphins where reasonable people (including all the smartest people) disagreed whether they were mammals or fish, do you feel like there would still be one correct answer that they just weren't smart enough to converge on?
+zackmdavis — 12/25/2019
+no
+ScottAlexander — 12/25/2019
+Huh.
+zackmdavis — 12/25/2019
+maybe the intuition-generating difference (not a crux, but the difference in intellectual backgrounds that generates cruxes) is that I'm not trying to think of "reasonable people", I'm doing AI theory
+ScottAlexander — 12/25/2019
+Okay, then well-designed AIs who could do other things right and hadn't been programmed to specifically grind an axe on this one point.
+zackmdavis — 12/25/2019
+different AIs would use different category systems depending on their goals, depending on which variables they cared about being able to predict/control
+ScottAlexander — 12/25/2019
+Huh, then I think maybe that is our crux. I feel like there are a bunch of easy ways to solve this for AIs, like "use two different categories to represent sea-adaptedness and evolutionary-descent as soon as you realize there's going to be an issue". Or "figure out why you personally care about dolphins and only use the category that reflects why you care about them". I feel like it's only when we live in a society of hard-to-coordinate humans who already have a word and various connotations around it and who are all pursuing different goals that this becomes a problem.
+zackmdavis — 12/25/2019
+Right! And like (sorry if this is stepping back into the memetic-warfare thing for a moment which is unhealthy, because we actually trust each other enough to do philosophy), my first reaction to "Against Lie Inflation" was agreeing with the post, and feeling rage that you obviously wouldn't let Jessica get away with saying, "An alternative categorization system is not an error, and borders are not objectively true or false, therefore you can't object to me defining lying this way"
+And then only after I read it more closely and zeroed in on the paragraph about "This definition would make people angrier", was I less angry at you, because that meant we actually had different views on how linguistic pragmatism worked and you weren't just making an unprincipled exception for trans people
+ScottAlexander — 12/25/2019
+Just to make sure I understand this - you were annoyed I wouldn't let Jessica get away with unprincipled pragmatism because you thought that was something I had supported before and I was being a hypocrite, not because you agree with unprincipled pragmatism.
+zackmdavis — 12/25/2019
+right
+I agreed with the post, but was angry at the hypocrisy
+(perceived hypocrisy)
+you could argue that I'm being incredibly unfair for having an axe to grind over something you wrote five years ago (I've written lots of things five years ago that were wrong, and it would be pretty annoying to get angry mail from people wanting to argue about every single one)
+It must suck being famous 😢
+ScottAlexander — 12/25/2019
+No, I agree if I'm wrong about this or hypocritical I want to know.
+I guess my claim is something like "language should be used pragmatically to achieve goals, but there is actually a best way to use language pragmatically to achieve goals in any given context".
+I think I also am just much more likely than you to think any given clusters in thingspace are in the vague "is a red egg a rube or a blegg" category where thingspace has nothing further to tell us and we have to solve it practically. I was really surprised you thought there was a right answer to the dolphin problem.
+zackmdavis — 12/25/2019
+the reason I accuse you of being motivatedly dumb is because I know you know about strategic equivocation, motte-and-bailey, the worst argument in the world, because you taught everyone about it
+and when I say, "Hey, doesn't this also apply to 'trans women are women'", you act like you don't get it
+and that just seems implausible
+
+ScottAlexander — 12/25/2019
+Can you give a specific example of that? What's coming to mind is someone saying "Women have uteruses," I say "Sounds right", someone else saying "Caitlyn Jenner is a woman", I say "I'm committed to agreeing with that", and them saying "Therefore Caitlyn Jenner has a uterus", and me saying "Well, wow". Obviously that doesn't work, can you give an example of strategic equivocation in this space that does?
+(I'm not saying there isn't anything, just that it's not immediately coming to mind which may be a flaw in my own generators)
+zackmdavis — 12/25/2019
+(yes, one moment to dig up a link from my notes)
+things like ... The Nation, a nationally prominent progressive magazine writes, "There is another argument against allowing trans athletes to compete with cis-gender athletes that suggests that their presence hurts cis-women and cis-girls. But this line of thought doesn’t acknowledge that trans women are in fact women." https://www.thenation.com/article/trans-runner-daily-caller-terry-miller-andraya-yearwood-martina-navratilova/
+ScottAlexander — 12/25/2019
+I agree that this is stupid and wrong and a natural consequence of letting people use language the way I am suggesting.
+zackmdavis — 12/25/2019
+this is where the memetic warfare thing comes in; I don't think it's fair to ordinary people to go as deep into the philosophy-of-language weeds as I can before they're allowed to object to this
+ScottAlexander — 12/25/2019
+I think my argument would be something like "the damage from this is less than the potential damage of trans people feeling lots more gender dysphoria". I think the part of your Boundaries post that I marked as a likely crux for that is "Everything we identify as a joint is a joint not 'because we care about it', but because it helps us think about the things we care about" but I didn't really follow - if you agree this is our crux, could you explain it in more detail?
+zackmdavis — 12/25/2019
+I think "pragmatic" reasons to not just use the natural clustering that you would get by impartially running the clustering algorithm on the subspace of configuration space relevant to your goals, basically amount to "wireheading" and "war" (cont'd)
+If we actually had magical sex change technology of the kind described in https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions, no one would even consider clever philosophy arguments about how to redefine words: people who wanted to change sex would just do it, and everyone else would use the corresponding language, not as a favor, but because it straightforwardly described reality
+ScottAlexander — 12/25/2019
+(agreed so far, will let you continue)
+zackmdavis — 12/25/2019
+and similarly, I don't want to call Jessica a man, but that's because her transition actually worked; the ordinary means by which my brain notices people's secondary sex characteristics and assigns "man"/"woman"/"not sure" and decides which pronoun to use has been successfully "fooled". If it looks like a duck, and quacks like a duck, and you can model it as a duck without making any grievous prediction errors, then it makes sense to call it a "duck" within the range of circumstances in which that model continues to perform well, even if someone considering different circumstances (looking at a "thicker" subspace of configuration space) would insist that they need to model it as a duck-oid robot or that that species is actually a goose
+ScottAlexander — 12/25/2019
+(are you going to continue, or is that the end?)
+zackmdavis — 12/25/2019
+um, I have thousands of words like this in my notes
+ScottAlexander — 12/25/2019
+I guess I asked because I don't understand how you supported the "either wireheading or war" claim (which I just realized you might have thought I agreed with, I only meant I agreed with the instant sex change thing), and I don't understand how this answered my question.
+zackmdavis — 12/25/2019
+right, "wireheading" insofar as if I were to transition today and I didn't pass as well as Jessica, and everyone felt obligated to call me a woman, they would be wireheading/benevolently-gaslighting me
+making me think my transition was successful, even though it actually wasn't
+that's ... not actually a nice thing to do to a rationalist
+ScottAlexander — 12/25/2019
+Like, my claim is that transgender people have some weird thing going on in their head that gives them an electric shock if you refer to them as their birth gender. The thing isn't an agent and so anti-extortion rules don't apply. So you consider the harms of electric shocking someone a bunch of times, vs. the harms of remembering to say a different word than the one that automatically comes to mind (which is the same problem I face every time I don't tell a stupid person FUCK YOU YOU ARE AN IDIOT, and isn't a huge cognitive burden), and accept The Nation making specious arguments which it would probably do about something else anyway, and overall you come out ahead.
+I think negative connotations of wireheading are doing a lot of the work there? If I change the metaphor to "give a depressed person a good antidepressant that works for them", I don't think it changes the situation at all but it definitely changes the valence.
+zackmdavis — 12/25/2019
+so, I claim to know a lot of things about the etiology of transness and why I don't think the electric shock is inevitable, but I don't want the conversation to go there if it doesn't have to, because I don't have to ragequit the so-called rationalist community over a complicated empirical thing; I'm only required to ragequit over bad philosophy
+ScottAlexander — 12/25/2019
+Fair.
+zackmdavis — 12/25/2019
+I think the philosophy crux is likely to be something like: in order to do utilitarianism, you need to have accurate maps/models, so that you can compute what the best thing to do is. If you let utilitarian considerations corrupt our models of the world, then you don't actually maximize utility; you're just gaslighting/wireheading yourself (I know, negative connotations of those words)
+You might think you can change language without changing models, but "37 Ways That Words Can Be Wrong" explicitly and at length explains why you can't
+ScottAlexander — 12/25/2019
+I agree with that, but I feel like I'm proposing something that may be a completely-delimited 0.0001% decrease in world model understanding for a rather substantial increase in utility. Do you disagree with my numbers, or with the claim that it's ever acceptable to make that kind of a tradeoff at all?
+zackmdavis — 12/25/2019
+Where did you get that 0.0001% from?!?!!
+It's not 0.0001% for me (I've been in constant psychological pain for three years over this), but maybe there aren't enough copies of me for the utilitarian calculus to care
+ScottAlexander — 12/25/2019
+I currently call trans people by their self-identified gender and don't feel like my world-model has changed much.
+(are you saying it hasn't been a 0.0001% utility decrease for you, or a 0.0001% world-model-clarity decrease?)
+Hm, I only want to debate this if it were our actual crux, do you feel like it is?
+zackmdavis — 12/25/2019
+empirical magnitude of trans is not a crux
+ScottAlexander — 12/25/2019
+So getting back to my point that I feel like I'm making a very lucrative utility vs world-model-clarity tradeoff, do you think you should never do that regardless of how good the numbers are?
+(I actually think I would agree with you on this if I felt like it was even a noticeable world-model-clarity decrease, or if I thought it had the potential to snowball in terms of my Parable of Lightning, it just doesn't even really register as a clarity decrease to me)
+zackmdavis — 12/25/2019
+... uh, what sex people are is pretty relevant to human social life
+much more so than whether lightning comes before thunder
+ScottAlexander — 12/25/2019
+Yeah, but anyone who cares about it routes around it.
+zackmdavis — 12/25/2019
+so, the problem with the fictional "thunder comes before lightning" regime is that they didn't choose the utilitarian-optimal truth to declare heresy???
+ScottAlexander — 12/25/2019
+Like I'm straight and I don't date transwomen. If for some reason I got "tricked" into dating transwomen by linguistic philosophy (not by lack of knowledge of who was biologically female or not) then I wouldn't regard this as a failure, I would regard this as linguistic philosophy changing my sexual orientation.
+zackmdavis — 12/25/2019
+http://unremediatedgender.space/2019/Dec/more-schelling/
+ScottAlexander — 12/25/2019
+I think their problem was that they actually made people ignorant of things. I don't feel like anyone is being made ignorant of anything by the transgender thing, their thoughts are just following a slightly longer path.
+zackmdavis — 12/25/2019
+I have a paragraph in my notes about this, one moment
+The "national borders" metaphor is particularly galling if—[unlike](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/) [the](https://slatestarcodex.com/2013/06/30/the-lottery-of-fascinations/) Popular Author—you actually know the math.
+If I have a "blegg" concept for blue egg-shaped objects—uh, this is [our](https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-queries) [standard](https://www.lesswrong.com/posts/yFDKvfN6D87Tf5J9f/neural-categories) [example](https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside), just [roll with it](http://unremediatedgender.space/2018/Feb/blegg-mode/)—what that means is that (at some appropriate level of abstraction) there's a little [Bayesian network](https://www.lesswrong.com/posts/hzuSDMx7pd2uxFc5w/causal-diagrams-and-causal-models) in my head with "blueness" and "eggness" observation nodes hooked up to a central "blegg" category-membership node, such that if I see a black-and-white photograph of an egg-shaped object, I can use the observation of its shape to update my beliefs about its blegg-category-membership, and then use my beliefs about category-membership to update my beliefs about its blueness. This cognitive algorithm is useful if we live in a world where objects have the appropriate statistical structure—if the joint distribution P(blegg, blueness, eggness) approximately factorizes as P(blegg)·P(blueness|blegg)·P(eggness|blegg).
+ScottAlexander — 12/25/2019
+I'm still interested in whether you think, if I were correct about the extent of the minimal inconvenience and high utility gain, I would be correct to have the position I do.
+zackmdavis — 12/25/2019
+"Category boundaries" are just a visual metaphor for the math: the set of things I'll classify as a blegg with probability greater than p is conveniently visualized as an area with a boundary in blueness–eggness space. If you don't understand the relevant math and philosophy—or are pretending not to understand only and exactly when it's politically convenient—you might think you can redraw the boundary any way you want, but you can't, because the "boundary" visualization is derived from a statistical model which corresponds to empirically testable predictions about the real world. Fucking with category boundaries corresponds to fucking with the model, which corresponds to fucking with your ability to interpret sensory data. The only two reasons you could possibly want to do this would be to wirehead yourself (corrupt your map to make the territory look nicer than it really is, making yourself feel happier at the cost of sabotaging your ability to navigate the real world) or as information warfare (corrupt shared maps to sabotage other agents' ability to navigate the real world, in a way such that you benefit from their confusion).
+ScottAlexander — 12/25/2019
+Now we're getting back to the stupid dolphin thing.
+I don't want to accuse you of thinking God has a dictionary, but it sure sounds like you're saying that God has pretty strong opinions on which categories are more natural than others.
+...actually, that's unfair, God does have some opinions, just not as deterministic as you think.
+zackmdavis — 12/25/2019
+I mean, yes? Thingspace is really high-dimensional; the multivariate distributions for females and males are actually different distributions, even if there's a lot of overlap along many of them; your brain is going to want to use this information and want to use language to communicate this information; although maybe an alien surveying earth to decide which rocks to harvest wouldn't bother noticing the difference, because it's only looking at rocks and not the differences between apes https://www.lesswrong.com/posts/cu7YY7WdgJBs3DpmJ/the-univariate-fallacy
+ScottAlexander — 12/25/2019
+Also, wait a second, you wouldn't want Jessica competing in a woman's marathon either, so your own categorization system doesn't solve this problem any better than mine.
+zackmdavis — 12/25/2019
+I'm willing to code-switch depending on what I'm talking about
+I definitely want to say "Jessica ... she"
+because of how my visual system makes that call
+ScottAlexander — 12/25/2019
+Do you agree that gendering trans people based on whether they pass or not doesn't work on a society-wide level?
+zackmdavis — 12/25/2019
+unfortunately, yes; that's why this situation is such a horrible, tragic mess; this is where game theory comes in: http://unremediatedgender.space/2019/Oct/self-identity-is-a-schelling-point/
+ScottAlexander — 12/25/2019
+But it sounds like you still disagree with self-identity?
+zackmdavis — 12/25/2019
+I'm not trying to do policy; I'm trying to get the theory right, in the hopes that getting the theory right will help people make better decisions
+ScottAlexander — 12/25/2019
+....but it not being a stable Schelling point is one example of why it would be a bad decision
+I kind of am trying to do policy, in that what seems important to me is whether myself (and the government, and the society whose norms I try to obey insofar as they're socially-just game theoretic things) should gender trans people one way or another.
+If this reflects badly on AI theory, I would explain to AI designers why there are considerations in favor of them doing things differently.
+Which I think would be pretty easy.
+zackmdavis — 12/25/2019
+... you would have AI designers take into account what makes trans people feel better?
+rather than capturing the statistical structure of the world beneath the world?
+ScottAlexander — 12/25/2019
+...I think you may be taking the opposite point I intended?
+zackmdavis — 12/25/2019
+oh
+ScottAlexander — 12/25/2019
+I am trying to (pursue the self-identification criterion) because it's the right thing for me to do right now as a human individual for various reasons. If it's bad AI design, I would want AI designers to do something different.
+(though I would hope a superintelligent AI would gender trans people based on self-identification for the same reasons I would)
+zackmdavis — 12/25/2019
+if you're going to be a generic Berkeley progressive, that makes sense, but if you're going to be a rationalist, then it doesn't make sense, because the entire point of our community is to import insights from AI theory to make humans smarter
+ScottAlexander — 12/25/2019
+Wait, no, the entire point of our community is to do the correct thing!
+I worry I'm interpreting you wrong, because it sounds like you're saying "You shouldn't eat food, because an AI wouldn't eat food. You should connect yourself to a wall socket."
+No, that's not right, "You shouldn't eat food because you wouldn't want an AI designer to program their AI to eat food"
+zackmdavis — 12/25/2019
+okay, my previous message is not phrased well; if I were writing a long form rather than live Discord I would have rewritten it to be less mockable
+ScottAlexander — 12/25/2019
+Fair.
+zackmdavis — 12/25/2019
+Here's another couple paragraphs from my memoir draft:
+zackmdavis — 12/25/2019
+A friend—call her ["Erin Burr"](https://genius.com/7888863)—tells me that I'm delusional to expect so much from "the community", that the original vision never included tackling politically sensitive subjects. (I remember Erin recommending Paul Graham's ["What You Can't Say"](http://www.paulgraham.com/say.html) back in 'aught-nine, with the suggestion to take Graham's advice to figure out what you can't say, and then don't say it.)
+Perhaps so. But back in 2009, we did not anticipate that whether or not I should cut my dick off would become a politicized issue.
+To be fair, it's not obvious that I shouldn't cut my dick off! A lot of people seem to be doing it nowadays, and a lot of them seem pretty happy! But in order to decide whether to join them, I need accurate information. I need an honest accounting of the costs and benefits of transition, so that I can cut my dick off in the possible worlds where that's a good idea, and not cut my dick off in the possible worlds where it's not a good idea.
+ScottAlexander — 12/25/2019
+But I should do the thing which is morally and rationally correct right now, not the thing that will create good results if AI designers inexplicably program a tendency to do the exact same thing into their AI, in a stupid way that changes the entire architecture.
+zackmdavis — 12/25/2019
+Your position seems to be, "It's okay to distort our models, because the utilitarian benefit to this-and-such neurotype interest group makes it a net win", but I'm claiming that I'm in the neurotype interest group and I want non-distorted models
+I don't want people to have to doublethink around their perceptions of me
+Jessica doesn't want that, either
+ScottAlexander — 12/25/2019
+I can see why you think that, but I don't identify with that belief, partly because there are certain ways I wouldn't distort my model. The trans thing seems so much like using a different word, rather than distorting a model, that it's really hard for me to care.
+I definitely don't want to make it impossible to accurately view the territory, but this seems more like (forgive a totally different metaphor than the one I've used before) rebranding Mt. McKinley to Denali on my map, with everyone knowing that these are the same mountain.
+zackmdavis — 12/25/2019
+the entire thesis of "37 Ways Words Can Be Wrong" is that "I can use a different word without distorting my model" isn't true as a matter of human psychology
+if we learned something new about how language works in the last 10 years, that would be really interesting
+ScottAlexander — 12/25/2019
+Yet I feel like I would come to all the same conclusions about men and women as you would.
+zackmdavis — 12/25/2019
+when System 2 knows what the answer your questioner expects is
+ScottAlexander — 12/25/2019
+I think I mean "about the territory", which I think is independent of what my questioner expects.
+zackmdavis — 12/25/2019
+Here's someone who's not part of our philosophy cult who understands the point I'm making: https://fairplayforwomen.com/pronouns/
+author compares preferred pronouns to a Stroop test
+ScottAlexander — 12/25/2019
+The whole interesting thing about Stroop tests is that you can't do them if you try to do them super-quickly, but you don't end up fundamentally confused about what colors are.
+I agree if you gave me an IAT about trans people and pronouns, I would do worse than you (probably)
+But IAT doesn't correlate with actual racism, and I think this is the same kind of situation.
+zackmdavis — 12/25/2019
+If you're sufficiently clever and careful and you remember how language worked when Airstrip One was still Britain, then you can express things using Newspeak, but a culture in which Newspeak is mandatory, and all of Oceania's best philosophers have clever arguments for why Newspeak doesn't distort people's beliefs ... doesn't seem like a nice place to live, right?
+Doesn't seem like a culture that can solve AI alignment, right?
+Zvi wrote about a pattern where the claim that "everybody knows" something is motivatedly used as an excuse to silence people trying to point out the thing (because they don't see people behaving as if it were common knowledge) https://thezvi.wordpress.com/2019/07/02/everybody-knows/
+"'Everybody knows' our kind of trans women are sampled from the male multivariate distribution rather than the female multivariate distribution, why are you being a jerk and pointing this out"
+I really don't think everyone knows
+I think the people who sort-of know are being intimidated into doublethinking around what they sort-of know
+I think this is bad for clarity
+ScottAlexander — 12/25/2019
+That is why I keep stressing that this is the only thing I know of that works this way. It's not opening the door to total confusion. If we randomized half the people at OpenAI to use trans pronouns one way, and the other half to use it the other way, do you think they would end up with significantly different productivity?
+zackmdavis — 12/25/2019
+(let me think for a few moments about how to answer that one)
+ScottAlexander — 12/25/2019
+I actually want to go to the Christmas party today. Any interest in also coming and we can continue the discussion there?
+zackmdavis — 12/25/2019
+at Valinor? I've been avoiding because I haven't gotten my flu shot yet because I've been too caught up in the rationalist civil war to have any executive functioning left over
+ScottAlexander — 12/25/2019
+No, at Event Horizon.
+zackmdavis — 12/25/2019
+sure, remind me the address?
+(Thanks so much for your time; it would be really exciting if we could prevent a rationalist civil war; I think this "you need accurate models before you can do utilitarianism" general philosophy point is also at the root of Ben Hoffman's objections to EA movement)
+ScottAlexander — 12/25/2019
+(actually, I just saw something on the party thing saying "Ask us before you invite anyone else", so give me a second to ask them)
+I agree that the principle "It's important to have good models because otherwise we can't do anything at all" is a principle. But it seems to me, like other principles, as something that's finite and quantifiable, as opposed to "if you sacrifice cognitive categorization efficiency even 0.01%, everything will be terrible forever and you'll fail at AI". In the same way that "make a desperate effort" is a principle, but I would still balance this against burnout or something, I feel like "never do anything that messes with cognitive categorization efficiency" should be followed but not off a cliff. I still don't feel like I understand whether our crux is absolutism vs. practicality, you just think this is legitimately a much bigger deal than I do, or something downstream of the dolphin problem where you think categories are much less inherently ambiguous than I do.
+Like, suppose Benya Fallenstein gets angry at us for constantly misgendering them and leaves MIRI. Is that better or worse than whatever epistemic clarity we gain from having a better split-second Stroop reaction to gender issues?
+zackmdavis — 12/25/2019
+Eliezer has this pedagogy thing where he'll phrase things in an absolute manner because that's the only way to get people to take things seriously, even if it's not literally absolute in human practice
+thus, "you can't define a word any way you want", https://www.lesswrong.com/posts/WLJwTJ7uGPA5Qphbp/trying-to-try, &c.
+ScottAlexander — 12/25/2019
+(it's possibly unfair for me to point to Benya, and I'm definitely not trying to become even more pragmatic than I was a moment ago talking about being nice to people, I'm trying to head off a problem where AI is so much more important than people's feelings that we end up Pascaling ourselves)
+zackmdavis — 12/25/2019
+maybe all three of those are partial cruxes
+like, when you say "trans is the only issue for which this comes up, it isn't a big deal for me" ... you realize that I do have to functionally treat that as "part of an attempt to destroy [me]". You don't want to destroy me, but you're willing to accept it as a side-effect
+ScottAlexander — 12/25/2019
+I think I can predict that you would say something like that, but it still doesn't make sense to me.
+I think I'm modeling you as having the weird electric shock thing, but it fires opposite as for everyone else.
+But I'm pretty sure that's not fair.
+Obviously I will address you by whatever pronouns you prefer and so on, but yeah, I acknowledge that if trying to help everyone else hurts you, then I am going to hurt you.
+zackmdavis — 12/25/2019
+it might be related! I think I have the same neuro-quirk that leads to the trans women we know, but I got socialized differently by my reading
+ScottAlexander — 12/25/2019
+Part of what I'm trying to do is convince you not to care about this.
+My real position is "all of this is dumb, go through life doing things without caring what your gender is", and so I model some sort of weird electric shock thing that prevents people from doing this.
+I feel like because you are at least saying that what you're doing is based on principles, maybe if we can converge on principles I can help you.
+zackmdavis — 12/25/2019
+I read https://www.lesswrong.com/posts/QZs4vkC7cbyjL9XA9/changing-emotions 10 years ago and it had a huge effect on me
+ScottAlexander — 12/25/2019
+But I am also trying really hard not to try to help you directly because I think that made you feel a lot worse the last time we talked than just talking philosophy of language.
+I think a meta-position of mine is something like "people fail to converge on object-level issues for unclear reasons, so try to figure out a set of norms that let people with unconverged object-level positions continue to cooperate", but I feel like you're fighting me on that one too.
+zackmdavis — 12/25/2019
+the "category absolutism" sub-crux seems legitimately important to the Great Common Task
+the "you just think this is legitimately a much bigger deal than I do" sub-crux isn't
+the extent to which dolphins are ambiguous is
+ScottAlexander — 12/25/2019
+But it sounds like you're not a 100% Category Absolutist, and you would be willing to let a few tiny things through, and I worry our real difference is I think this is tiny enough to ignore, and you don't.
+zackmdavis — 12/25/2019
+I think your self-report of "this is tiny enough to ignore" is blatantly false (Jessica would say you're lying, but I tend to avoid that usage of "lie" except sometimes when talking to Jessica et al.)
+ScottAlexander — 12/25/2019
+As in I'm misdescribing my experience, or failing to notice ways that my experience is unrepresentative of broader concerns?
+zackmdavis — 12/25/2019
+I'm saying that your brain notices when trans women don't pass, and this affects your probabilistic anticipations about them, your decisions towards them, &c.; when you find out that a passing trans woman is trans, that also affects your probabilistic anticipations, &c.
+ScottAlexander — 12/25/2019
+Plausibly yes.
+zackmdavis — 12/25/2019
+This is technically consistent with "tiny enough to ignore" if you draw the category boundaries of "tiny" and "ignore" the right way in order to force that sentence to be true
+... you see the problem; if I take the things you wrote in "Categories Were Made" literally, then I can make any sentence true by driving a truck through the noncentral fallacy
+ScottAlexander — 12/25/2019
+I want to think about that statement more, but they've approved my invitation of you to the party. 2412 Martin Luther King, I'll see you there.
+zackmdavis — 12/25/2019
+cool, thanks so much; I really want to avoid a civil war
+ScottAlexander — 12/25/2019
+I also want to debate the meta-question of whether we should have a civil war about this, which I think is much easier to answer and resolve than the object-level one.
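
A minimal sketch of the "blegg" network I was gesturing at in the transcript above—my own illustrative construction, with made-up probabilities (the 0.5 prior and the 0.9/0.1 conditionals are not from the original Sequences posts)—a binary category node with two observation nodes that are conditionally independent given category membership:

```python
# Illustrative naive-Bayes "blegg" model: blueness and eggness are
# conditionally independent given the category node, so the joint
# distribution factorizes as P(blegg) * P(blue|blegg) * P(egg|blegg).
# All numbers are invented for illustration.

P_BLEGG = 0.5                      # prior P(blegg)
P_BLUE = {True: 0.9, False: 0.1}   # P(blue | category), keyed by blegg-hood
P_EGG = {True: 0.9, False: 0.1}    # P(egg-shaped | category), keyed by blegg-hood


def posterior_blegg(egg_shaped: bool) -> float:
    """P(blegg | shape observation), by Bayes's theorem."""
    def likelihood(is_blegg: bool) -> float:
        p = P_EGG[is_blegg]
        return p if egg_shaped else 1 - p

    joint_blegg = P_BLEGG * likelihood(True)
    joint_not_blegg = (1 - P_BLEGG) * likelihood(False)
    return joint_blegg / (joint_blegg + joint_not_blegg)


def predicted_blueness(egg_shaped: bool) -> float:
    """P(blue | shape observation): the shape evidence routes *through*
    the category node to update beliefs about the unobserved color."""
    p = posterior_blegg(egg_shaped)
    return p * P_BLUE[True] + (1 - p) * P_BLUE[False]
```

Under these made-up numbers, seeing the egg shape in the black-and-white photograph raises the predicted probability of blueness from the 0.5 base rate to 0.82—which is the sense in which the category "boundary" is derived from, and constrained by, a statistical model that makes testable predictions.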
with internet available—
-_ https://www.lesswrong.com/posts/QB9eXzzQWBhq9YuB8/rationalizing-and-sitting-bolt-upright-in-alarm#YQBvyWeKT8eSxPCmz
-_ Ben on "community": http://benjaminrosshoffman.com/on-purpose-alone/
-_ check date of univariate fallacy Tweet and Kelsey Facebook comment
-_ Soares "excited" example
-_ EA Has a Lying Problem
_ when did I ask Leon about getting easier tasks?
-_ Facebook discussion with Kelsey (from starred email)?: https://www.facebook.com/nd/?julia.galef%2Fposts%2F10104430038893342&comment_id=10104430782029092&aref=1557304488202043&medium=email&mid=5885beb3d8c69G26974e56G5885c34d38f3bGb7&bcode=2.1557308093.AbzpoBOc8mafOpo2G9A&n_m=main%40zackmdavis.net
-_ "had previously written a lot about problems with Effective Altruism": link to all of Ben's posts
-_ Sara Bareilles cover links
-_ "watchful waiting"
-_ Atlantic article on "My Son Wears Dresses" https://archive.is/FJNII
-_ in "especially galling" §: from "Changing Emotions"—"somehow it's always about sex when men are involved"—he even correctly pinpoints AGP in ordinary men (as was obvious back then), just without the part that AGP _is_ "trans"
-_ "look at what ended up happening"—look up whether that exact quote is from http://www.hpmor.com/chapter/47 or https://www.hpmor.com/chapter/97
-_ Discord history with Scott (leading up to 2019 Christmas party, and deferring to Tailcalled on SSC survey question wording)
-_ Galileo "And yet it moves"
-_ Discord logs before Austin retreat
_ examples of snarky comments about "the rationalists"
-_ screenshot Rob's Facebook comment which I link
_ 13th century word meanings
_ compile Categories references from the Dolphin War Twitter thread
_ weirdly hostile comments on "... Boundaries?"
_ more examples of Yudkowsky's arrogance
-_ "The Correct Contrarian Cluster" and race/IQ
-_ taqiyya
-_ refusing to give a probability (When Not to Use Probabilities? Shut Up and Do the Impossible?)
-_ retrieve comment on pseudo-lies post in which he says it's OK for me to comment even though
-
far editing tier—
> I unfortunately have had a policy for over a decade of not putting numbers on a few things, one of which is AGI timelines and one of which is *non-relative* doom probabilities. Among the reasons is that my estimates of those have been extremely unstable.
+https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible
+> You might even be justified in [refusing to use probabilities](https://www.lesswrong.com/lw/sg/when_not_to_use_probabilities/) at this point. In all honesty, I really _don't_ know how to estimate the probability of solving an impossible problem that I have gone forth with intent to solve; in a case where I've previously solved some impossible problems, but the particular impossible problem is more difficult than anything I've yet solved, but I plan to work on it longer, etcetera.
+>
+> People ask me how likely it is that humankind will survive, or how likely it is that anyone can build a Friendly AI, or how likely it is that I can build one. I really _don't_ know how to answer. I'm not being evasive; I don't know how to put a probability estimate on my, or someone else, successfully shutting up and doing the impossible. Is it probability zero because it's impossible? Obviously not. But how likely is it that this problem, like previous ones, will give up its unyielding blankness when I understand it better? It's not truly impossible, I can see that much. But humanly impossible? Impossible to me in particular? I don't know how to guess. I can't even translate my intuitive feeling into a number, because the only intuitive feeling I have is that the "chance" depends heavily on my choices and unknown unknowns: a wildly unstable probability estimate.
+
+
I don't, actually, know how to prevent the world from ending. Probably we were never going to survive. (The cis-human era of Earth-originating intelligent life wasn't going to last forever, and it's hard to exert detailed control over what comes next.) But if we're going to die either way, I think it would be _more dignified_ if Eliezer Yudkowsky were to behave as if he wanted his faithful students to be informed. Since it doesn't look like we're going to get that, I think it's _more dignified_ if his faithful students _know_ that he's not behaving like he wants us to be informed. And so one of my goals in telling you this long story about how I spent (wasted?) the last six years of my life, is to communicate the moral that
and that this is a _problem_ for the future of humanity, to the extent that there is a future of humanity.
https://www.lesswrong.com/posts/jAToJHtg39AMTAuJo/evolutions-are-stupid-but-work-anyway?commentId=HvGxrASYAyfbiPwQt#HvGxrASYAyfbiPwQt
> I've noticed that none of my heroes, not even Douglas Hofstadter or Eric Drexler, seem to live up to my standard of perfection. Always sooner or later they fall short. It's annoying, you know, because it means _I_ have to do it.
-But he got it right in 2009; he only started to fall short _later_ for political reasons
\ No newline at end of file
+But he got it right in 2009; he only started to fall short _later_ for political reasons
+
+https://twitter.com/ESYudkowsky/status/1580278376673120256
+> Your annual reminder that academically conventional decision theory, as taught everywhere on Earth except inside the MIRI-adjacent bubble, says to give in to threats in oneshot games. Only a very rare student is bright enough to deserve blame in place of the teacher.
+
+https://www.lesswrong.com/posts/9KvefburLia7ptEE3/the-correct-contrarian-cluster
+> Atheism: Yes.
+> Many-worlds: Yes.
+> "P-zombies": No.
+>
+> These aren't necessarily simple or easy for contrarians to work through, but the correctness seems as reliable as it gets.
+>
+> Of course there are also slam-dunks like:
+>
+> Natural selection: Yes.
+> World Trade Center rigged with explosives: No.
+
+I wonder how the history of the site would have been different if this had included "Racial differences in cognitive abilities: Yes." (It's worse if he didn't think about it in the first place, rather than noticing and deciding not to say it—it doesn't even seem to show up in the comments!!)
+
+
+https://www.facebook.com/yudkowsky/posts/pfbid0tTk5VoLSxZ1hJKPRMdzpPzNaBR4eU5ufKEhvvowMFTjKTHykogFfwAZge9Kk5jFLl
+> Yeah, see, *my* equivalent of making ominous noises about the Second Amendment is to hint vaguely that there are all these geneticists around, and gene sequencing is pretty cheap now, and there's this thing called CRISPR, and they can probably figure out how to make a flu virus that cures Borderer culture by excising whatever genes are correlated with that and adding genes correlated with greater intelligence. Not that I'm saying anyone should try something like that if a certain person became US President. Just saying, you know, somebody might think of it.
+
+
+commenting policy—
+> I will enforce the same standards here as I would on my personal Facebook garden. If it looks like it would be unhedonic to spend time interacting with you, I will ban you from commenting on my posts.
+>
+> Specific guidelines:
+>
+> Argue against ideas rather than people.
+> Don't accuse others of committing the Being Wrong Fallacy ("Wow, I can't believe you're so wrong! And you believe you're right! That's even more wrong!").
+> I consider tone-policing to be a self-fulfilling prophecy and will delete it.
+> If I think your own tone is counterproductive, I will try to remember to politely delete your comment instead of rudely saying so in a public reply.
+> If you have helpful personal advice to someone that could perhaps be taken as lowering their status, say it to them in private rather than in a public comment.
+> The censorship policy of the Reign of Terror is not part of the content of the post itself and may not be debated on the post. If you think Censorship!! is a terrible idea and invalidates discussion, feel free not to read the comments section.
+> The Internet is full of things to read that will not make you angry. If it seems like you choose to spend a lot of time reading things that will give you a chance to be angry and push down others so you can be above them, you're not an interesting plant to have in my garden and you will be weeded. I don't consider it fun to get angry at such people, and I will choose to read something else instead.
+
+I do wonder how much of his verbal report is shaped by pedagogy (& not having high-quality critics). People are very bad at imagining how alien aliens would be! "Don't try to hallucinate value there; just, don't" is simpler than working out exactly how far to push cosmopolitanism
+
+
+couldn't resist commenting even after I blocked Yudkowsky on Twitter (30 August 2021)
+https://www.facebook.com/yudkowsky/posts/pfbid02AGzw7EzeB6bDAwvXT8hm4jnC4Lh1te7tC3Q3h2u6QqBfJjp4HKvpCM3LqvcLuXSbl?comment_id=10159857276789228&reply_comment_id=10159858211759228
+Yudkowsky replies (10 September 2021)—
+> Zack, if you can see this, I think Twitter is worse for you than Facebook because of the short-reply constraint. I have a lot more ability to include nuance on Facebook and would not expect most of my statements here to set you off the same way, or for it to be impossible for me to reply effectively if something did come up.
+("impossible for me to reply effectively" implies that I have commenting permissions)
+
+
+"Noble Secrets" Discord thread—
+> So, I agree that if you perform the experimental test of asking people, "Do you think truthseeking is virtuous?", then a strong supermajority will say "Yes", and that if you ask them, "And are you looking for memes about how to actually do it?" they'll also say "Yes."
+>
+> But I also notice that in chat the other day, we saw this (in my opinion very revealing) paragraph—
+>
+> I think of "not in other people" [in "Let the truth destroy what it can—but in you, not in other people"] not as "infantilizing", but as recognizing independent agency. You don't get to do harm to other people without their consent, whether that is physical or psychological.
+>
+> My expectation of a subculture descended from the memetic legacy of Robin Hanson's blog in 2008 in which people were _actually_ looking for memes about how to seek truth, is that the modal, obvious response to a paragraph like this would be something like—
+>
+>> Hi! You must be new here! Regarding your concern about truth doing harm to people, a standard reply is articulated in the post "Doublethink (Choosing to be Biased)" (<https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased>). Regarding your concern about recognizing independent agency, a standard reply is articulated in the post "Your Rationality Is My Business" (<https://www.lesswrong.com/posts/anCubLdggTWjnEvBS/your-rationality-is-my-business>).
+>
+> —or _something like that_. Obviously, it's not important that the reply use those particular Sequences links, or _any_ Sequences links; what's important is that someone responds to this _very obvious_ anti-epistemology (<https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology>) with ... memes about how to actually do truthseeking.
+>
+> And what we _actually_ saw in response to this "You don't get to do harm to other people" message is ... it got 5 :plus_one: reactions.
+
+Yudkowsky replies—
+> the Doublethink page is specifically about how you yourself choosing not to know is Unwise
+> to the extent you can even do that rather than convincing yourself that you have
+> it specifically doesn't say "tell other people every truth you know"
+> the point is exactly that you couldn't reasonably end up in an epistemic position of knowing yourself that you ought to lie to yourself
+
+
+--------
+
+My last messages in the late-November fight on the Alicorner Discord were at 4:44 a.m./4:47 a.m. (!) on 28 November; I mention needing more sleep. But—that would have been 6:44 a.m. Austin time? Did I actually not sleep that night? Flipping out and writing Yudkowsky was the evening of the same calendar day.
+
+Sam had said professed gender was more predictive.
+
+Bobbi has claimed that "most people who speak the involved dialect of English agree that ‘woman’ refers to ‘an individual who perceives themselves as a woman’"
+
+Kelsey 27 November
+> I think you could read me as making the claim "it's desirable, for any social gender, for there to be non-medical-transition ways of signaling it"
+
+27 November
+> I don't think linta was saying "you should believe ozy doesn't have a uterus"
+> that would be really weird
+
+> well, for one thing, "it's okay to not pursue any medical transition options while still not identifying with your asab" isn't directed at you, it's directed at the trans person
+My reply—
+> that's almost worse; you're telling them that it's okay to gaslight _everyone else in their social circle_
+
+17:02, 27 November
+> Stepping back: the specific statement that prompted me to start this horrible discussion even though I usually try to keep my personal hobbyhorse out of this server because I don't want it to destroy my friendships, was @lintamande's suggestion that "it's okay to not pursue any medical transition options while still not identifying with your asab". I think I have a thought experiment that might clarify why I react so strongly to this sort of thing
+> Suppose Brent Dill showed you this photograph and said, "This is a photograph of a dog. Your eyes may tell you that it's a cat, but you have to say it's a dog, or I'll be very unhappy and it'll be all your fault."
+> In that case, I think you would say, "This is a gaslighting attempt. You are attempting to use my sympathy for you to undermine my perception of reality."
+
+> Flight about to take off so can't explain, but destroying the ability to reason in public about biological sex as a predictive category seems very bad for general sanity, even if freedom and transhumanism is otherwise good
+
+https://discord.com/channels/401181628015050773/458329253595840522/516744646034980904
+26 November, 14:38
+> I'm not sure what "it's okay to not pursue any medical transition options while still not identifying with your asab" is supposed to mean if it doesn't cash out to "it's okay to enforce social norms preventing other people from admitting out loud that they have correctly noticed your biological sex"
+