From e6561a23dc5a16f2b34950fb024c0e12712be83e Mon Sep 17 00:00:00 2001 From: "Zack M. Davis" Date: Thu, 2 Nov 2023 18:18:39 -0700 Subject: [PATCH] check in --- content/drafts/zevis-choice.md | 10 +++-- notes/epigraph_quotes.md | 4 -- notes/memoir-sections.md | 68 +++++++++++----------------------- notes/memoir_wordcounts.csv | 10 ++++- 4 files changed, 38 insertions(+), 54 deletions(-) diff --git a/content/drafts/zevis-choice.md b/content/drafts/zevis-choice.md index a55c05d..2daec4d 100644 --- a/content/drafts/zevis-choice.md +++ b/content/drafts/zevis-choice.md @@ -5,13 +5,17 @@ Category: commentary Tags: autogynephilia, bullet-biting, cathartic, Eliezer Yudkowsky, Scott Alexander, epistemic horror, my robot cult, personal, sex differences, two-type taxonomy, whale metaphors Status: draft +> In desperation he quoted André Gide's remark: "It has all been said before, but you must say it again, since nobody listens." Unfortunately, judging by the quotations given here, Gide's remark is still relevant even today. +> +> —Neven Sesardic, _Making Sense of Heritability_ + ... except, I would be remiss to condemn Yudkowsky without discussing—potentially mitigating factors. (I don't want to say that whether someone is a fraud should depend on whether there are mitigating factors—rather, I should discuss potential reasons why being a fraud might be the least-bad choice, when faced with a sufficiently desperate situation.) So far, I've been writing from the perspective of caring (and expecting Yudkowsky to care) about human rationality as a cause in its own right—about wanting to _make sense_, and wanting to live in a Society that made sense, for its own sake, and not as a convergently instrumental subgoal of saving the world. -That's pretty much always where I've been at. I _never_ wanted to save the world. I got sucked in to this robot cult because Yudkowsky's philsophy-of-science blogging was just that good. I did do a little bit of work for the Singularity Institute back in the day (an informal internship in 'aught-nine, some data-entry-like work manually adding Previous/Next links to the Sequences, designing several PowerPoint presentations for Anna, writing some Python scripts to organize their donor database), but that was because it was my social tribe and I had connections. To the extent that I took at all seriously the whole save/destroy/take-over the world part (about how we needed to encode all of human morality into a recursively self-improving artificial intelligence to determine our entire future light cone until the end of time), I was scared rather than enthusiastic. +That's pretty much always where I've been at. I _never_ wanted to save the world. I got sucked into this robot cult because Yudkowsky's philosophy-of-science blogging was just that good. I did do a little bit of work for the Singularity Institute back in the day (an informal internship in 2009, some data-entry-like work manually adding Previous/Next links to the Sequences, designing several PowerPoint presentations for Anna, writing some Python scripts to organize their donor database), but that was because it was my social tribe and I had connections. To the extent that I took at all seriously the whole save/destroy/take-over the world part (about how we needed to encode all of human morality into a recursively self-improving artificial intelligence to determine our entire future light cone until the end of time), I was scared rather than enthusiastic.
-Okay, being scared was entirely appropriate, but what I mean is that I was scared, and concluded that shaping the Singularity was _not my problem_, as contrasted to being scared, then facing up to the responsibility anyway. After a 2013 sleep-deprivation-induced psychotic episode which [featured](http://zackmdavis.net/blog/2013/03/religious/) [futurist](http://zackmdavis.net/blog/2013/04/prodrome/)-[themed](http://zackmdavis.net/blog/2013/05/relativity/) [delusions](http://zackmdavis.net/blog/2013/05/relevance/), I wrote to Anna, Michael, and some MIRI employees who had been in my contacts for occasional contract work, that "my current plan [was] to just try to forget about _Less Wrong_/MIRI for a long while, maybe at least a year, not because it isn't technically the most important thing in the world, but because I'm not emotionally stable enough think about this stuff anymore" (Subject: "to whom it may concern"). When I got a real programming job and established an income for myself, I [donated to CfAR rather than MIRI](http://zackmdavis.net/blog/2016/12/philanthropy-scorecard-through-2016/), because public rationality was something I could be unambiguously enthusiastic about, and doing anything about AI was not. +Okay, being scared was entirely appropriate, but what I mean is that I was scared, and concluded that shaping the Singularity was not my problem, as contrasted to being scared, then facing up to the responsibility anyway. After a 2013 sleep-deprivation-induced psychotic episode which [featured](http://zackmdavis.net/blog/2013/03/religious/) [futurist](http://zackmdavis.net/blog/2013/04/prodrome/)-[themed](http://zackmdavis.net/blog/2013/05/relativity/) [delusions](http://zackmdavis.net/blog/2013/05/relevance/), I wrote to Anna, Michael, and some MIRI employees who had been in my contacts for occasional contract work, that "my current plan [was] to just try to forget about _Less Wrong_/MIRI for a long while, maybe at least a year, not because it isn't technically the most important thing in the world, but because I'm not emotionally stable enough [to] think about this stuff anymore" (Subject: "to whom it may concern"). When I got a real programming job and established an income for myself, I [donated to CfAR rather than MIRI](http://zackmdavis.net/blog/2016/12/philanthropy-scorecard-through-2016/), because public rationality was something I could be unambiguously enthusiastic about, and doing anything about AI was not. At the time, it seemed fine for the altruistically-focused fraction of my efforts to focus on rationality, and to leave the save/destroy/take-over the world stuff to other, more emotionally-stable people, in accordance with the principle of comparative advantage. Yudkowsky had written his Sequences as a dependency for explaining [the need for friendly AI](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile), ["gambl\[ing\] only upon the portion of the activism that would flow to \[his\] own cause"](https://www.lesswrong.com/posts/9jF4zbZqz6DydJ5En/the-end-of-sequences), but rationality was supposed to be the [common interest of many causes](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes). Even if I wasn't working or donating to MIRI specifically, I was still _helping_, a good citizen according to the morality of my tribe. @@ -25,7 +29,7 @@ These days, it's increasingly looking like making really large neural nets ...
[^second-half]: In an unfinished slice-of-life short story I started writing _circa_ 2010, my protagonist (a supermarket employee resenting his job while thinking high-minded thoughts about rationality and the universe) speculates about "a threshold of economic efficiency beyond which nothing human could survive" being a tighter bound on future history than physical limits (like the heat death of the universe), and comments that "it imposes a sense of urgency to suddenly be faced with the fabric of your existence coming apart in ninety years rather than 10⁹⁰." - But if ninety years is urgent, what about ... nine? Looking at what deep learning can do in 2023, the idea of Singularity 2032 doesn't seem self-evidently _absurd_ in the way that Singularity 2019 seemed absurd in 2010 (correctly, as it turned out). + But if ninety years is urgent, what about ... nine? Looking at what deep learning can do in 2023, the idea of Singularity 2032 doesn't seem self-evidently absurd in the way that Singularity 2019 seemed absurd in 2010 (correctly, as it turned out). My AlphaGo moment was 5 January 2021, when OpenAI released [DALL-E](https://openai.com/blog/dall-e/) (by far the most significant news story of [that week in January 2021](https://en.wikipedia.org/wiki/January_6_United_States_Capitol_attack)). Previous AI milestones, like GANs for a _fixed_ image class, were easier to dismiss as clever statistical tricks. If you have thousands of photographs of people's faces, I didn't feel surprised that some clever algorithm could "learn the distribution" and spit out another sample; I don't know the _details_, but it doesn't seem like scary "understanding." DALL-E's ability to _combine_ concepts—responding to "an armchair in the shape of an avocado" as a novel text prompt, rather than already having thousands of examples of avocado-chairs and just spitting out another one of those—viscerally seemed more like "real" creativity to me, something qualitatively new and scary.[^qualitatively-new] diff --git a/notes/epigraph_quotes.md b/notes/epigraph_quotes.md index a44b2b8..85e0a80 100644 --- a/notes/epigraph_quotes.md +++ b/notes/epigraph_quotes.md @@ -348,10 +348,6 @@ https://xkcd.com/1942/ > > —_The Fountainhead_ by Ayn Rand -> In desperation he quoted André Gide's remark: "It has all been said before, but you must say it again, since nobody listens." Unfortunately, judging by the quotations given here, Gide's remark is still relevant even today. -> -> —Neven Sesardic, _Making Sense of Heritability_ - > _I don't care about what all the others say > Well, I guess there are some things that will just never go away, I > Wish that I could say that there's no better place than home diff --git a/notes/memoir-sections.md b/notes/memoir-sections.md index f0e483e..51cee3c 100644 --- a/notes/memoir-sections.md +++ b/notes/memoir-sections.md @@ -1,36 +1,7 @@ -first edit pass bookmark: "I received a Twitter DM from" - -blocks to fit somewhere— -_ the hill he wants to die on (conclusion for "Zevi's Choice"??) -_ Tail vs. Bailey / Davis vs. Yudkowsky analogy (new block somewhere—or a separate dialogue post??) -_ mention that "Not Man for the Categories" keeps getting cited +first edit pass bookmark: (top of pt. 5) pt. 3 edit tier— -✓ fullname Taylor and Hoffman at start of pt. 3 -✓ footnote clarifying that "Riley" and Sarah weren't core members of the group, despite being included on some emails?
-✓ be more specific about Ben's anti-EA and Jessica's anti-MIRI things -✓ weird that Kelsey thought the issue was that we were trying to get Yudkowsky to make a statement -✓ set context for Anna on first mention in the post -✓ more specific on "mostly pretty horrifying" and group conversation with the whole house -✓ cut words from the "Yes Requires" slapfight? -✓ cut words from "Social Reality" scuffle -✓ examples of "bitter and insulting" comments about rationalists -✓ Scott got comas right in the same year as "Categories" -✓ "I" statements -✓ we can go stronger than "I definitely don't think Yudkowsky thinks of himself -✓ cut words from December 2019 blogging spree -✓ mention "Darkest Timeline" and Skyrms somewhere -✓ "Not the Incentives"—rewrite given that I'm not shielding Ray -✓ the skeptical family friend's view -✓ in a footnote, defend the "cutting my dick off" rhetorical flourish -✓ "admit that you are in fact adding a bit of autobiography to the memoir." -✓ anyone who drives at all -✓ clarify that can't remember details: "everyone else seemed to agree on things" -✓ "It's not just talking hypothetically, it is specifically calling a bluff, the point of the hypothetical" -✓ the history of civil rights -✓ it's not obvious why you can't recommend "What You Can't Say" -✓ Ben on "locally coherent coordination" don't unattributedly quote - +_ footnote on the bad-faith condition on "My Price for Joining" _ "it might matter timelessly" → there are people with AI chops who are PC (/2017/Jan/from-what-ive-tasted-of-desire/) _ confusing people and ourselves about what the exact crime is _ footnote explaining quibbles on clarification @@ -38,7 +9,6 @@ _ FTX validated Ben's view of EA!! ("systematically conflating corruption, accum _ "your failure to model social reality is believing people when they claim noble motives" _ hint at Vanessa being trans _ quote Jack on timelines anxiety - _ do I have a better identifier than "Vassarite"? _ maybe I do want to fill in a few more details about the Sasha disaster, conditional on what I end up writing regarding Scott's prosecution?—and conditional on my separate retro email—also the Zolpidem thing _ link to protest flyer @@ -52,15 +22,20 @@ _ cut words from descriptions of other posts! (if people want to read them, they _ try to clarify Abram's categories view (Michael didn't get it) (but it still seems clear to me on re-read?) _ explicitly mention http://benjaminrosshoffman.com/bad-faith-behavior-not-feeling/ _ meeting with Ray (maybe?) +_ friends with someone on an animal level, like with a dog pt. 4 edit tier— +_ body odors comment _ mention Nick Bostrom email scandal (and his not appearing on the one-sentence CAIS statement) -_ revise and cut words from "bad faith" section since can link to "Assume Bad Faith" - -_ everyone *who matters* prefers to stay on the good side +_ if he wanted to, I'm sure Eliezer Yudkowsky could think of some relevant differences (I should explain) +_ emphasize that 2018 thread was policing TERF-like pronoun usage, not just disapproving of gender-based pronouns _ if you only say good things about Republican candidates +_ to-be-continued ending about how being a fraud might be a good idea +_ cite more sneers; use a footnote to pack in as many as possible +_ Litany Against Gurus, not sure humans can think and trust at the same time; High Status and Stupidity pt. 
5 edit tier— +_ Previously-on summary _ quote specific exchange where I mentioned 10,000 words of philosophy that Scott was wrong—obviously the wrong play _ "as Soares pointed out" needs link _ can I rewrite to not bury the lede on "intent doesn't matter"? @@ -81,7 +56,12 @@ _ clarify that Keltham infers there are no masochists, vs. Word of God _ "Doublethink" ref in Xu discussion should mention that Word of God Eliezerfic clarification that it's not about telling others _ https://www.greaterwrong.com/posts/vvc2MiZvWgMFaSbhx/book-review-the-bell-curve-by-charles-murray/comment/git7xaE2aHfSZyLzL _ cut words from January 2020 Twitter exchange (after war criminal defenses) - +_ "Not Man for the Categories" keeps getting cited +_ the hill he wants to die on +_ humans have honor instead of TDT. "That's right! I'm appealing to your honor!" +_ Leeroy Jenkins Option +_ historical non-robot-cult rationality wisdom +_ Meghan Murphy got it down to four words things to discuss with Michael/Ben/Jessica— _ Anna on Paul Graham @@ -91,7 +71,6 @@ _ Michael's SLAPP against REACH (new) _ Michael on creepy and crazy men (new) _ elided Sasha disaster (new) - pt. 3–5 prereaders— _ paid hostile prereader (first choice: April) _ Iceman @@ -184,11 +163,11 @@ _ Anna's claim that Scott was a target specifically because he was good, my coun _ Yudkowsky's LW moderation policy far editing tier— +_ You cannot comprehend it, I cannot comprehend it _ screenshot key Tweet threads (now that Twitter requires you to log in) _ Caliphate / craft and the community _ colony ship happiness lie in https://www.lesswrong.com/posts/AWaJvBMb9HGBwtNqd/qualitative-strategies-of-friendliness _ re being fair to abusers: thinking about what behavior pattern you would like to see, generally, by people in your situation, instead of privileging the perspective and feelings of people who've made themselves vulnerable to you by transgressing against you -_ body odors comment _ worry about hyperbole/jumps-to-evaluation; it destroys credibility _ "Density in Thingspace" comment _ Christmas with Scott: mention the destruction of "voluntary"? @@ -207,6 +186,7 @@ _ Brian Skyrms?? _ mr-hire and pre-emptive steelmanning (before meeting LW mods) _ is the Glowfic author "Lintamande ... they" or "Lintamande ... she"? _ explain plot of _Planecrash_ better +_ everyone *who matters* prefers to stay on the good side _ CfAR's AI pivot?? _ example of "steelman before criticize" norm _ explain mods protect-feelings @@ -334,6 +314,10 @@ Friend of the blog Ninety-Three has been nagging me about my pathetic pacing—l ------- +2011 +> I've informed a number of male college students that they have large, clearly detectable body odors. In every single case so far, they say nobody has ever told them that before. +https://www.greaterwrong.com/posts/kLR5H4pbaBjzZxLv6/polyhacking/comment/rYKwptdgLgD2dBnHY + The thing about our crowd is that we have a lamentably low proportion of women (13.3% cis women in the last community survey) and—I don't know when this happened; it certainly didn't feel like this back in 'aught-nine—an enormous number of trans women relative to population base rates (2.7%, for a cis-to-trans ratio of 4.9!!), the vast majority of whom I expect to be AGP @@ -421,7 +405,6 @@ I'm not optimistic about the problem being fixable, either.
Our robot cult _alre Because of the conflict, and because all the prominent high-status people are running a Kolmogorov Option strategy, and because we happen to have a _wildly_ disproportionate number of _people like me_ around, I think being "pro-trans" ended up being part of the community's "shield" against external political pressure, of the sort that perked up after [the February 2021 _New York Times_ hit piece about Alexander's blog](https://archive.is/0Ghdl). (The _magnitude_ of heat brought on by the recent _Times_ piece and its aftermath was new, but the underlying dynamics had been present for years.) - Given these political realities, you'd think that I _should_ be sympathetic to the Kolmogorov Option argument, which makes a lot of sense. _Of course_ all the high-status people with a public-facing mission (like building a movement to prevent the coming robot apocalypse) are going to be motivatedly dumb about trans stuff in public: look at all the damage [the _other_ Harry Potter author did to her legacy](https://en.wikipedia.org/wiki/Politics_of_J._K._Rowling#Transgender_people). And, historically, it would have been harder for the robot cult to recruit _me_ (or those like me) back in the 'aughts, if they had been less politically correct. Recall that I was already somewhat turned off, then, by what I thought of as _sexism_; I stayed because the philosophy-of-science blogging was _way too good_. But what that means on the margin is that someone otherwise like me except more orthodox or less philosophical, _would_ have bounced. If [Cthulhu has swum left](https://www.unqualified-reservations.org/2009/01/gentle-introduction-to-unqualified/) over the intervening thirteen years, then maintaining the same map-revealing/not-alienating-orthodox-recruits tradeoff _relative_ to the general population, necessitates relinquishing parts of the shared map that have fallen out of general favor. @@ -464,12 +447,10 @@ The "discourse algorithm" (the collective generalization of "cognitive algorithm Or at least—even if there are things we can't talk about in public for consequentialist reasons and there's nothing to be done about it, you would hope that the censorship wouldn't distort our beliefs about the things we _can_ talk about—like, say, the role of Bayesian reasoning in the philosophy of language. Yudkowsky had written about the [dark side epistemology](https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology) of [contagious lies](https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies): trying to protect a false belief doesn't just mean being wrong about that one thing, it also gives you, on the object level, an incentive to be wrong about anything that would _imply_ the falsity of the protected belief—and, on the meta level, an incentive to be wrong _about epistemology itself_, about how "implying" and "falsity" work. - https://www.lesswrong.com/posts/ASpGaS3HGEQCbJbjS/eliezer-s-sequences-and-mainstream-academia?commentId=6GD86zE5ucqigErXX > The actual real-world consequences of a post like this when people actually read it are what bothers me, and it does feel frustrating because those consequences seem very predictable (!!) - > Makes sense...
just don't be shocked if the next frontier is grudging concessions that get compartmentalized > Stopping reading your Tweets is the correct move for them IF you construe them as only optimizing for their personal hedonics @@ -538,9 +519,6 @@ https://twitter.com/extradeadjcb/status/1397618177991921667 the second generation doesn't "get the joke"; young people don't understand physical strength differences anymore -> I've informed a number of male college students that they have large, clearly detectable body odors. In every single case so far, they say nobody has ever told them that before. -https://www.greaterwrong.com/posts/kLR5H4pbaBjzZxLv6/polyhacking/comment/rYKwptdgLgD2dBnHY - It would have been better if someone without a dog in the object-level fight could have loudly but disinterestedly said, "What? I don't have a dog in the object-level fight, but we had a whole Sequence about this", but people mostly don't talk if they don't have a dog. But if someone without a dog spoke, then they'd get pattern-matched as a partisan; it _had_ to be me @@ -613,8 +591,6 @@ If hiring a community matchmaker was worth it, why don't my concerns count, too? When I protested that I wasn't expecting a science fictional Utopia of pure reason, but just for people to continue to be right about things they already got right in 2008, he said, "Doesn't matter—doesn't _fucking_ matter." -humans have honor instead of TDT. "That's right! I'm appealing to your honor!" - The boundary must be drawn here! This far, and no further! > just the thought that there are people out there who wanted to be rationalists and then their experience of the rationalist community was relentlessly being told that trans women are actually men, and that this is obvious if you are rational, and a hidden truth most people are too cowardly to speak, until you retreat in misery and are traumatized by the entire experience of interacting with us.... diff --git a/notes/memoir_wordcounts.csv b/notes/memoir_wordcounts.csv index d9adadc..a6ef84a 100644 --- a/notes/memoir_wordcounts.csv +++ b/notes/memoir_wordcounts.csv @@ -555,4 +555,12 @@ 10/23/2023,118860,-255 10/24/2023,118900,40 10/25/2023,118447,-453 -10/26/2023, +10/26/2023,118481,34 +10/27/2023,119033,552 +10/28/2023,119739,706 +10/29/2023,119739,0 +10/30/2023,119807,68 +10/31/2023,119625,-187 +11/01/2023,119647,22 +11/02/2023,119737,90 +11/03/2023,, -- 2.17.1
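For reference, a minimal sketch (assuming the `date,total,delta` layout and the `notes/memoir_wordcounts.csv` path shown in the hunk above) of how the third column can be recomputed as the day-over-day difference of the running word-count totals:

```python
# Minimal sketch: recompute the per-day delta column of memoir_wordcounts.csv,
# assuming rows are "date,total,delta" and that delta is the difference between
# consecutive running totals (rows with a blank total, like 11/03/2023, are
# printed as placeholders and don't reset the running comparison).
import csv

def recompute_deltas(path="notes/memoir_wordcounts.csv"):
    previous = None
    with open(path, newline="") as f:
        for row in csv.reader(f):
            date = row[0]
            total = row[1] if len(row) > 1 else ""
            if not total:
                print(f"{date},,")  # placeholder row with no count yet
                continue
            total = int(total)
            delta = "" if previous is None else total - previous
            print(f"{date},{total},{delta}")
            previous = total

if __name__ == "__main__":
    recompute_deltas()
```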