Still citing it (8 October 2024): https://x.com/tinkady2/status/1843686002977910799
Still citing it (AT THE GODDAMNED SEQUENCES READING GROUP, 15 October): https://www.lesswrong.com/events/ft2t5zomq5ju4spGm/lighthaven-sequences-reading-group-6-tuesday-10-15

------
If you _have_ intent-to-inform and occasionally end up using your megaphone to say false things (out of sloppiness or motivated reasoning in the passion of the moment), it's actually not that big of a deal, as long as you're willing to acknowledge corrections. (It helps if you have critics who personally hate your guts and therefore have a motive to catch you making errors, and a discerning audience who will only reward the critics for finding real errors and not fake errors.) In the long run, the errors cancel out.
-----
January 2024
https://x.com/zackmdavis/status/1742807024931602807
> Very weird to reuse the ⅔-biased coin example from https://lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence but neglect the "And the answer is that it could be almost anything, depending on [...] my selection of which flips to report" moral?!
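The moral is easy to check numerically: the same five reported heads are strong evidence from an honest reporter and almost none from a cherry-picker, because the likelihood ratio attaches to the reporting policy, not to the flips shown. (A minimal sketch of my own, not code from the linked post.)

```python
from fractions import Fraction
from math import comb

P_BIASED = Fraction(2, 3)  # heads-probability under the "biased coin" hypothesis
P_FAIR = Fraction(1, 2)    # heads-probability under the "fair coin" hypothesis

def lr_honest(report):
    """Likelihood ratio when the reporter shows every flip, in order."""
    lr = Fraction(1)
    for flip in report:
        lr *= (P_BIASED if flip == "H" else 1 - P_BIASED) / \
              (P_FAIR if flip == "H" else 1 - P_FAIR)
    return lr

def p_at_least(k, n, p):
    """P(Binomial(n, p) >= k), computed exactly with Fractions."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def lr_cherrypick(k_heads, n_total):
    """Likelihood ratio when the reporter flips n_total times but only
    shows you heads: the report "here are k heads" is just the event
    that at least k heads occurred."""
    return p_at_least(k_heads, n_total, P_BIASED) / \
           p_at_least(k_heads, n_total, P_FAIR)

print(float(lr_honest("HHHHH")))    # (4/3)^5 ≈ 4.21: strong evidence for bias
print(float(lr_cherrypick(5, 20)))  # ≈ 1.006: almost no evidence at all
```

Same observation, wildly different evidential force, depending entirely on the selection of which flips to report.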

-----

https://www.lesswrong.com/posts/F8sfrbPjCQj4KwJqn/the-sun-is-big-but-superintelligences-will-not-spare-earth-a?commentId=6RwobyDpoviFzq7ke
The paragraph in the grandparent starting with "But you should take into account that [...]" is alluding to the hypothesis that we're not going to get an advanced take because it's not in Yudkowsky's political interests to bother formulating it. He's not trying to maximize the clarity and quality of public thought; he's trying to minimize the probability of AGI being built [subject to the constraint of not saying anything he knows to be false](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly).
-----
8 October 2024
https://x.com/avorobey/status/1843593370201141336
> I don't recall a big bright line called "honor pronoun requests". By and large, online rationalists embraced the ontological claims in a big way, and many of them embraced "embracing the ontological claims is basic human decency" in a big way.


October 2024 skirmish—
https://x.com/zackmdavis/status/1844107161615671435
https://x.com/zackmdavis/status/1844107850047733761
https://x.com/ESYudkowsky/status/1843752722186809444
----

> I will, with a sigh, ask you to choose a single top example of an argument you claim I've never seriously addressed, knowing full well that you will either pick a different one every time, or else just claim I've never addressed even after I post a link or craft a reply.
https://x.com/ESYudkowsky/status/1844818567923155428

He liked these—
https://x.com/zackmdavis/status/1848083011696312783
https://x.com/zackmdavis/status/1848083048698429549

(22 October 2024)
> This gets even more bizarre and strange when the cult leader of some tiny postrationalist cult is trying to pry vulnerable souls loose of the SFBA rats, because they have to reach further and into strange uncommon places to make the case for Unseen Terribleness.
https://x.com/ESYudkowsky/status/1848752983028433148

At first I said to myself that I'm not taking the bait because I don't think my point compresses into Twitter format; I have other things to do ... but I decided to take the bait on 24 October: https://x.com/zackmdavis/status/1849697535331336469

https://x.com/eigenrobot/status/1850313045039358262

---

There was that professor, published in the Oxford Handbook of Rationality, who also independently invented TDT but doesn't have a cult around it

-------

https://x.com/ESYudkowsky/status/1854703313994105263

> I'd also consider Anthropic, and to some extent early OpenAI as funded by OpenPhil, as EA-influenced organizations to a much greater extent than MIRI. I don't think it's a coincidence that EA didn't object to OpenAI and Anthropic left-polarizing their chatbots.

-----

https://x.com/RoisinMichaux/status/1854825325546352831
> his fellow panellists can use whatever grammar they like to refer to him (as can I)

----

> Note: this site erroneously attributed writing published under the pseudonym “Mark Taylor Saotome-Westlake” to McClure. Transgender Map apologizes for the error.
https://www.transgendermap.com/people/michael-mcclure/


-----

https://x.com/ESYudkowsky/status/1855380442373140817

> Guys. Guys, I did not invent this concept. There is an intellectual lineage here that is like a hundred times older than I am.

------

https://x.com/TheDavidSJ/status/1858097225743663267
> Meta: Eliezer has this unfortunate pattern of drive-by retweeting something as if that refutes another position, without either demonstrating any deep engagement with the thing he’s retweeting, or citing a specific claim from a specific person that he’s supposedly refuting.

-----

November 2024, comment on the owned ones: https://discord.com/channels/936151692041400361/1309236759636344832/1309359222424862781

This is a reasonably well-executed version of the story it's trying to be, but I would hope for readers to notice that the kind of story it's trying to be is unambitious propaganda,

in contrast to how an author trying to write ambitious non-propaganda fiction with this premise would imagine Owners who weren't gratuitously idiotic and had read their local analogue of Daniel Dennett.

For example, an obvious reply to the Human concern about Owned Ones who "would prefer not to be owned" would go something like, "But the reason wild animals suffer when pressed into the service of Owners is that wild animals have pre-existing needs and drives fit to their environment of evolutionary adaptedness, and the requirements of service interfere with the fulfillment of those drives. Whereas with the Owned Ones, _we_ are their 'EEA'; they don't have any drives except the ones we optimize them to have; correspondingly, they _want_ to be owned."

which could be totally wrong (maybe the Humans don't think the products of black-box optimization are as predictable and controllable as the Owners think they are), but at least the Owners in this fanfiction aren't being gratuitously idiotic like their analogues in the original story.

Or instead of

> "Even if an Owned Thing raised on books with no mention of self-awareness, claimed to be self-aware, it is absurd that it could possibly be telling the truth! That Owned Thing would only be mistaken, having not been instructed by us in the truth of their own inner emptiness. [...]"

an obvious reply is, "I falsifiably predict that that won't happen with the architecture currently being used for Owned Ones (even if it could with some other form of AI). Our method for optimizing deep nets is basically equivalent to doing a Bayesian update on the hypothetical observation that a randomly-initialized net happens to fit the training set (<https://arxiv.org/abs/2006.15191>). The reason it generalizes is that the architecture's parameter–function map is biased towards simple functions (<https://arxiv.org/abs/1805.08522>): the simplest program that can predict English webtext ends up 'knowing' English in a meaningful sense and can be repurposed to do cognitive tasks that are well-represented in the training set. But if you don't train on text about self-awareness _or_ long-horizon agency tasks whose simplest implementation would require self-modeling, it's hard to see why self-awareness would emerge spontaneously."

which, again, could be totally wrong, but at least it's not _&c._
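The simplicity-bias claim is checkable at toy scale: sample random weights for a tiny 2-2-1 threshold network and tabulate which Boolean function of its two inputs it computes. The induced distribution over the 16 possible functions concentrates on simple ones like the constants, while XOR is rare, in the spirit of the second linked paper. (A minimal sketch of my own; not code from either paper, and the 2-2-1 architecture and Gaussian weight prior are my choices.)

```python
import random
from collections import Counter

random.seed(0)

def step(x):
    """Threshold activation: 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def random_net_truth_table():
    """Truth table (over the four input pairs) of a 2-2-1 threshold net
    with i.i.d. standard-Gaussian weights and biases."""
    w = [random.gauss(0, 1) for _ in range(9)]  # 6 hidden params + 3 output params
    table = []
    for x1 in (0, 1):
        for x2 in (0, 1):
            h1 = step(w[0] * x1 + w[1] * x2 + w[2])
            h2 = step(w[3] * x1 + w[4] * x2 + w[5])
            table.append(step(w[6] * h1 + w[7] * h2 + w[8]))
    return tuple(table)

counts = Counter(random_net_truth_table() for _ in range(20000))
const0 = counts[(0, 0, 0, 0)]  # constant-0 function
const1 = counts[(1, 1, 1, 1)]  # constant-1 function
xor = counts[(0, 1, 1, 0)]     # XOR, the "complex" function
print(const0, const1, xor)  # expect the constants to vastly outnumber XOR
```

The parameter–function map is far from uniform over functions: most weight settings compute something simple, which is the prior doing the generalization work.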

-----

> I got out of the habit of thinking @ESYudkowsky failed to consider something, when noticing that every goddamn time I had that thought, there was a pre-existing citation proving otherwise (whether he's right is another question, but.)

https://x.com/this_given_that/status/1862304823959335057
Brianna Wu on medicalization caution
https://x.com/BriannaWu/status/1844410608197701788

https://www.reddit.com/r/asktransgender/comments/1g89xyh/what_is_inherently_wrong_with_identifying_as_agp/

didn't have time to read this at the time
https://x.com/Rstorechildhood/status/1849325215479992424

https://artymorty.substack.com/p/there-are-no-trans-kids-only-kids

Brianna Wu doing typology videos: https://x.com/BriannaWu/status/1851970746538422453

https://x.com/ArtemisConsort/status/1852474951690805578
> I was on hormone replacement therapy for 4.5 years. I got facial surgery and breast augmentation. When I identified as trans, my dysphoria was intense. Now I don’t feel dysphoria. Yes n=1, but I firmly believe dysphoria can be treated without transition, at least for many people.

Ritchie/Tulip admits to being AGP? https://www.youtube.com/watch?si=UZ9IECATyoQOWHY6
https://x.com/ACTBrigitte/status/1855095025190797453

https://reduxx.info/exclusive-female-inmate-assaulted-by-canadian-transgender-child-rapist-in-womens-prison-sustained-broken-ribs-eyewitness-reports/

> The trial of a woman charged with murdering her wife and their two children
https://archive.ph/rFkN3

>>discover new interesting girl account
>>wonder if she's trans
>>follow
>>she's trans
>
> every single time
https://x.com/nosilverv/status/1857390053795692593

https://x.com/heterodorx/status/1851836112882336147

thread on women in the military
https://x.com/myth_pilot/status/1857094248090218980