From 7e244a0b9747be7cd4b150c10146af0659e56a65 Mon Sep 17 00:00:00 2001 From: "Zack M. Davis" Date: Thu, 28 Nov 2024 20:06:46 -0800 Subject: [PATCH] check in --- ...-not-a-drop-in-replacement-for-concepts.md | 2 + notes/memoir-sections.md | 86 +++++++++++++++++++ notes/notes.txt | 34 ++++++++ notes/tech_tasks.txt | 1 + 4 files changed, 123 insertions(+) diff --git a/content/drafts/prediction-markets-are-not-a-drop-in-replacement-for-concepts.md b/content/drafts/prediction-markets-are-not-a-drop-in-replacement-for-concepts.md index cb654e0..091a789 100644 --- a/content/drafts/prediction-markets-are-not-a-drop-in-replacement-for-concepts.md +++ b/content/drafts/prediction-markets-are-not-a-drop-in-replacement-for-concepts.md @@ -164,4 +164,6 @@ One could construe Keltham's line of questioning as deliberately trying to play I just don't _buy it_. Almost everywhere else in the dath ilan mythos that dath ilan is compared to Earth (_i.e._, the real world) or Golarion, the comparison is unflattering; we're supposed to believe that dath ilan is a superior civilization, a utopia of reason where average intelligence is 2.6 standard deviations higher, where everyone is trained in Bayesian reasoning from childhood. One of the rare places in canon that dath ilan is depicted as not having already thought of something good and useful in the real world is in [the April Fool's Day confession](https://yudkowsky.tumblr.com/post/81447230971/my-april-fools-day-confession), when [NGDP targeting](https://en.wikipedia.org/wiki/Nominal_income_target) is identified as a clever and characteristically un–dath ilani hack. Dath ilan is accustomed to solving coordination problems by the effort of "serious people [...] get[ting] together and try[ing] not to have them be so bad": the mode of thinking that would lead one to propose automatically canceling out the sticky wage effect by printing more money to keep spending constant is alien to them. 
+[TODO: Yudkowsky's comment throws doubt on this interpretation; it looks like Keltham doesn't get it because it looks like Yudkowsky doesn't get it. It's hard to believe that the lack of prediction markets in particular is what makes the IDF bad] + Anti-discrimination norms are like NGDP targeting: prohibiting certain probabilistic inferences in order to cancel out widespread irrational bigotry is similar to printing money to cancel out a widespread irrational tendency to fire workers instead of lowering nominal wages in that it's not something you would think of in a world where people are just doing Bayesian decision theory—and it's not something you would _portray as superior_ if you came from a world that prides itself on just doing Bayesian decision theory and were trying to enlighten the natives of a strange and primitive culture. Yudkowsky's reply to "Comment on a Scene" tries to patch the problem by suggesting that Civilization doesn't need to make those probabilistic inferences anyway because it has prediction markets, but this is an obvious rationalization. (If you disagree, I have an amazing new sorting algorithm that may interest you ...) diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md index a98736a..8bc2663 100644 --- a/notes/memoir-sections.md +++ b/notes/memoir-sections.md @@ -1339,6 +1339,8 @@ https://twitter.com/niplav_site/status/1744304380503904616 Still citing it (8 October 2024): https://x.com/tinkady2/status/1843686002977910799 +Still citing it (AT THE GODDAMNED SEQUENCES READING GROUP, 15 October): https://www.lesswrong.com/events/ft2t5zomq5ju4spGm/lighthaven-sequences-reading-group-6-tuesday-10-15 + ------ If you _have_ intent-to-inform and occasionally end up using your megaphone to say false things (out of sloppiness or motivated reasoning in the passion of the moment), it's actually not that big of a deal, as long as you're willing to acknowledge corrections.
(It helps if you have critics who personally hate your guts and therefore have a motive to catch you making errors, and a discerning audience who will only reward the critics for finding real errors and not fake errors.) In the long run, the errors cancel out. @@ -2965,6 +2967,12 @@ https://x.com/ESYudkowsky/status/1837532991989629184 ----- +January 2024 +https://x.com/zackmdavis/status/1742807024931602807 +> Very weird to reuse the ⅔-biased coin example from https://lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence but neglect the "And the answer is that it could be almost anything, depending on [...] my selection of which flips to report" moral?! + +----- + https://www.lesswrong.com/posts/F8sfrbPjCQj4KwJqn/the-sun-is-big-but-superintelligences-will-not-spare-earth-a?commentId=6RwobyDpoviFzq7ke The paragraph in the grandparent starting with "But you should take into account that [...]" is alluding to the hypothesis that we're not going to get an advanced take because it's not in Yudkowsky's political interests to bother formulating it. He's not trying to maximize the clarity and quality of public thought; he's trying to minimize the probability of AGI being built [subject to the constraint of not saying anything he knows to be false](https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly). @@ -2976,6 +2984,11 @@ https://x.com/ESYudkowsky/status/1841828620043530528 ----- +8 October 2024 +https://x.com/avorobey/status/1843593370201141336 +> I don't recall a big bright line called "honor pronoun requests". By and large, online rationalists embraced the ontological claims in a big way, and many of them embraced "embracing the ontological claims is basic human decency" in a big way. 
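-----

The "What Evidence Filtered Evidence" complaint above (reusing the ⅔-biased coin while neglecting the moral about the selection of which flips to report) is easy to demonstrate with a toy simulation. This is my own illustration, not anything from the linked threads; the flat prior, the reporting rules, and the flip counts are all made up for the sketch. A naive Bayesian who treats reported flips as independent observations gets dragged to ≈0.95 confidence in the biased hypothesis by a clever arguer who reports only heads from a fair coin:

```python
import random

def posterior_biased(reported_flips):
    # P(coin is 2/3-heads | flips), flat prior over {fair, 2/3-heads},
    # naively treating each reported flip as an independent observation
    # (i.e., ignoring *how* the flips were selected for the report)
    p_biased = p_fair = 0.5
    for flip in reported_flips:
        p_biased *= 2 / 3 if flip == "H" else 1 / 3
        p_fair *= 1 / 2
    return p_biased / (p_biased + p_fair)

random.seed(0)
coin_flips = ["H" if random.random() < 0.5 else "T" for _ in range(100)]  # fair coin

honest_report = coin_flips[:10]  # the first ten flips, whatever they were
clever_report = [f for f in coin_flips if f == "H"][:10]  # heads only

print(posterior_biased(honest_report))  # hovers near the prior
print(posterior_biased(clever_report))  # ≈ 0.95, from a fair coin
```

Ten reported heads could mean almost anything depending on the selection of which flips to report: the correct update has to condition on the reporting policy, not just the reported flips.

-----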
+ + October 2024 skirmish— https://x.com/zackmdavis/status/1844107161615671435 https://x.com/zackmdavis/status/1844107850047733761 @@ -2997,3 +3010,76 @@ Crowley linked to me https://x.com/ESYudkowsky/status/1843752722186809444 ---- + +> I will, with a sigh, ask you to choose a single top example of an argument you claim I've never seriously addressed, knowing full well that you will either pick a different one every time, or else just claim I've never addressed even after I post a link or craft a reply. +https://x.com/ESYudkowsky/status/1844818567923155428 + +He liked these— +https://x.com/zackmdavis/status/1848083011696312783 +https://x.com/zackmdavis/status/1848083048698429549 + +(22 October 2024) +> This gets even more bizarre and strange when the cult leader of some tiny postrationalist cult is trying to pry vulnerable souls loose of the SFBA rats, because they have to reach further and into strange uncommon places to make the case for Unseen Terribleness. +https://x.com/ESYudkowsky/status/1848752983028433148 + +At first I said to myself that I'm not taking the bait because I don't think my point compresses into Twitter format; I have other things to do ... but I decided to take the bait on 24 October: https://x.com/zackmdavis/status/1849697535331336469 + +https://x.com/eigenrobot/status/1850313045039358262 + +--- + +There was that prof. published in Oxford Handbook of Rationality who also invented TDT, but doesn't have a cult around it + +------- + +https://x.com/ESYudkowsky/status/1854703313994105263 + +> I'd also consider Anthropic, and to some extent early OpenAI as funded by OpenPhil, as EA-influenced organizations to a much greater extent than MIRI. I don't think it's a coincidence that EA didn't object to OpenAI and Anthropic left-polarizing their chatbots. 
+ +----- + +https://x.com/RoisinMichaux/status/1854825325546352831 +> his fellow panellists can use whatever grammar they like to refer to him (as can I) + +---- + +> Note: this site erroneously attributed writing published under the pseudonym “Mark Taylor Saotome-Westlake” to McClure. Transgender Map apologizes for the error. +https://www.transgendermap.com/people/michael-mcclure/ + + +----- + +https://x.com/ESYudkowsky/status/1855380442373140817 + +> Guys. Guys, I did not invent this concept. There is an intellectual lineage here that is like a hundred times older than I am. + +------ + +https://x.com/TheDavidSJ/status/1858097225743663267 +> Meta: Eliezer has this unfortunate pattern of drive-by retweeting something as if that refutes another position, without either demonstrating any deep engagement with the thing he’s retweeting, or citing a specific claim from a specific person that he’s supposedly refuting. + +----- + +November 2024, comment on the owned ones: https://discord.com/channels/936151692041400361/1309236759636344832/1309359222424862781 + +This is a reasonably well-executed version of the story it's trying to be, but I would hope for readers to notice that the kind of story it's trying to be is unambitious propaganda + +in contrast to how an author trying to write ambitious non-propaganda fiction with this premise would imagine Owners who weren't gratuitously idiotic and had read their local analogue of Daniel Dennett. + +For example, an obvious reply to the Human concern about Owned Ones who "would prefer not to be owned" would go something like, "But the reason wild animals suffer when pressed into the service of Owners is that wild animals have pre-existing needs and drives fit to their environment of evolutionary adaptedness, and the requirements of service interfere with the fulfillment of those drives.
Whereas with the Owned Ones, _we_ are their 'EEA'; they don't have any drives except the ones we optimize them to have; correspondingly, they _want_ to be owned." + +which could be totally wrong (maybe the Humans don't think the products of black-box optimization are as predictable and controllable as the Owners think they are), but at least the Owners in this fanfiction aren't being gratuitously idiotic like their analogues in the original story. + +Or instead of + +> "Even if an Owned Thing raised on books with no mention of self-awareness, claimed to be self-aware, it is absurd that it could possibly be telling the truth! That Owned Thing would only be mistaken, having not been instructed by us in the truth of their own inner emptiness. [...]" + +an obvious reply is, "I falsifiably predict that that won't happen with the architecture currently being used for Owned Ones (even if it could with some other form of AI). Our method for optimizing deep nets is basically equivalent to doing a Bayesian update on the hypothetical observation that a randomly-initialized net happens to fit the training set (). The reason it generalizes is because the architecture's parameter–function map is biased towards simple functions (): the simplest program that can predict English webtext ends up 'knowing' English in a meaningful sense and can be repurposed to do cognitive tasks that are well-represented in the training set. But if you don't train on text about self-awareness _or_ long-horizon agency tasks whose simplest implementation would require self-modeling, it's hard to see why self-awareness would emerge spontaneously." + +which, again, could be totally wrong, but at least it's not _&c._ + +----- + +> I got out of the habit of thinking @ESYudkowsky failed to consider something, when noticing that every goddamn time I had that thought, there was a pre-existing citation proving otherwise (whether he's right is another question, but.) 
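The simplicity-bias claim in the Owned Ones reply above (that the architecture's parameter–function map is biased towards simple functions) can be seen in a toy experiment: sample random-weight networks and count how often each input–output function turns up. The architecture, weight scale, and sample counts here are my own arbitrary choices for the sketch, not from any cited paper. Of the 2^32 possible boolean functions on 5 bits, uniform sampling would essentially never repeat one in a few thousand draws, but random nets land on a handful of simple functions (notably the constants) over and over:

```python
import math
import random
from collections import Counter

def random_net():
    # a tiny random 5-bit -> 1-bit network: 5 tanh hidden units,
    # Gaussian weights and biases, thresholded output
    w1 = [[random.gauss(0, 1) for _ in range(5)] for _ in range(5)]
    b1 = [random.gauss(0, 1) for _ in range(5)]
    w2 = [random.gauss(0, 1) for _ in range(5)]
    b2 = random.gauss(0, 1)
    def f(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(w1, b1)]
        return int(sum(w * hi for w, hi in zip(w2, h)) + b2 > 0)
    return f

random.seed(0)
inputs = [[(i >> k) & 1 for k in range(5)] for i in range(32)]

def sample_truth_table():
    f = random_net()  # one fixed net, evaluated on all 32 inputs
    return tuple(f(x) for x in inputs)

truth_tables = Counter(sample_truth_table() for _ in range(2000))
# the most common truth tables repeat far more often than the
# ~2000^2 / 2^33 collision rate a uniform parameter-function map would give
print(truth_tables.most_common(3))
```

which is (a toy version of) the reason "the simplest program that can predict English webtext" is the relevant hypothesis: the map from parameters to functions concentrates probability on simple functions, so fitting the training set acts like a Bayesian update with a simplicity prior.

-----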
+ +https://x.com/this_given_that/status/1862304823959335057 diff --git a/notes/notes.txt b/notes/notes.txt index c8ca0c3..71fd808 100644 --- a/notes/notes.txt +++ b/notes/notes.txt @@ -3483,3 +3483,37 @@ https://x.com/heterodorx/status/1840728215091818991 Brianna Wu on medicalization caution https://x.com/BriannaWu/status/1844410608197701788 + +https://www.reddit.com/r/asktransgender/comments/1g89xyh/what_is_inherently_wrong_with_identifying_as_agp/ + +didn't have time to read this at the time +https://x.com/Rstorechildhood/status/1849325215479992424 + +https://artymorty.substack.com/p/there-are-no-trans-kids-only-kids + +Brianna Wu doing typology videos: https://x.com/BriannaWu/status/1851970746538422453 + +https://x.com/ArtemisConsort/status/1852474951690805578 +> I was on hormone replacement therapy for 4.5 years. I got facial surgery and breast augmentation. When I identified as trans, my dysphoria was intense. Now I don’t feel dysphoria. Yes n=1, but I firmly believe dysphoria can be treated without transition, at least for many people. + +Ritchie/Tulip admits to being AGP? https://www.youtube.com/watch?si=UZ9IECATyoQOWHY6 + +https://x.com/ACTBrigitte/status/1855095025190797453 + +https://reduxx.info/exclusive-female-inmate-assaulted-by-canadian-transgender-child-rapist-in-womens-prison-sustained-broken-ribs-eyewitness-reports/ + +> The trial of a woman charged with murdering her wife and their two children +https://archive.ph/rFkN3 + +>>discover new interesting girl account +>>wonder if she's trans +>>follow +>>she's trans +> +> every single time +https://x.com/nosilverv/status/1857390053795692593 + +https://x.com/heterodorx/status/1851836112882336147 + +thread on women in the military +https://x.com/myth_pilot/status/1857094248090218980 diff --git a/notes/tech_tasks.txt b/notes/tech_tasks.txt index 611a0be..f51440b 100644 --- a/notes/tech_tasks.txt +++ b/notes/tech_tasks.txt @@ -1,3 +1,4 @@ +double slashes in rendered source?! 
social media share card https://dev.to/mishmanners/how-to-add-a-social-media-share-card-to-any-website-ha8 relative URLs = False (but not broken) make Markdown native footnote CSS match the plugin-based footnotes -- 2.17.1