> I'm sorry it made you sad. From my perspective, the question is not "can we still be friends with such people", but "how can we still be friends with such people", and I am pretty certain that understanding their perspective is an important part of the answer. If clarity seems like death to them and like life to us, and we don't know this, IMHO that's an unpromising basis for friendship.

[TODO—
* Jessica: scorched-earth campaign should mostly be in meatspace social reality
* my comment on emotive conjugation (https://www.lesswrong.com/posts/qaYeQnSYotCHQcPh8/drowning-children-are-rare#GaoyhEbzPJvv6sfZX)
]
On 4 July 2019, Scott Alexander published ["Some Clarifications on Rationalist Blogging"](https://slatestarcodex.com/2019/07/04/some-clarifications-on-rationalist-blogging/), disclaiming any authority as a "rationalist" leader. ("I don't want to claim this blog is doing any kind of special 'rationality' work beyond showing people interesting problems [...] Insofar as [_Slate Star Codex_] makes any pretensions to being 'rationalist', it's a rationalist picnic and not a rationalist monastery.") I assumed this was inspired by Ben's request back in March that Scott "alter the beacon" so as to not confuse people about what the current-year community was. I appreciated it.
-[TODO: "AI Timelines Scam"
- * I still sympathize with the "mainstream" pushback against the scam/fraud/&c. language being used to include Elephant-in-the-Brain-like distortions
- * Ben: "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you? Investigations of financial fraud don't inquire as to the conscious motives of the perp."
- * 11 Jul: I think the law does count _mens rea_ as a thing: we do discriminate between vehicular manslaughter and first-degree murder, because traffic accidents are less disincentivizable than offing one's enemies
- * call with Michael about GiveWell vs. the Pope
-]

Also in early July 2019, Jessica published ["The AI Timelines Scam"](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam), arguing that the recent popularity of "short" (_e.g._, 2030) AI timelines was better explained by political factors than by any technical argument: just as in previous decades, people had incentives to bluff and exaggerate about the imminence of AGI in order to attract resources to their own projects.

(Remember, this was 2019. After seeing what GPT-3/PaLM/DALL-E/_&c._ could do during the "long May 2020", it's now looking to me like the short-timelines people had better intuitions than Jessica gave them credit for. I won't say, "How could I have known?", but at the time, I didn't, actually, know.)

I still sympathized with the "mainstream" pushback against using "scam"/"fraud"/"lie"/_&c._ language to include motivated [elephant-in-the-brain](https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain)-like distortions. I conceded that this was a _boring_ semantic argument, but I feared that until we invented better linguistic technology, the _boring_ semantic argument was going to _continue_ sucking up bandwidth in discussions with others when it didn't need to.

"Am I being too tone-policey here?" I asked the coordination group. "Is it better if I explicitly disclaim, 'This is marketing advice; I'm not claiming to be making a substantive argument'?" (Subject: "Re: reception of 'The AI Timelines Scam' is better than expected!")

Ben replied, "What exactly is a scam, if it's not misinforming people systematically about what you have to offer, in a direction that moves resources towards you? Investigations of financial fraud don't inquire as to the conscious motives of the perp."

I replied that the concept of [_mens rea_](https://www.law.cornell.edu/wex/mens_rea) did seem necessary for maintaining good incentives, at least in some contexts. The law needs to distinguish accidentally hitting a pedestrian with one's car ("manslaughter") from premeditated killing ("first-degree murder"), because traffic accidents are significantly less disincentivizable than offing one's enemies. (Anyone who drives at all is probably going to have unlucky analogues in nearby possible worlds who are guilty of vehicular manslaughter.) The manslaughter case was simpler than misinformation-that-moves-resources,[^manslaughter-disanalogy] and it might not be _easy_ for the court to determine "intent", but I didn't see what would reverse the extremely weak principle that intent matters, sometimes.
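
(To gesture at why, here's a toy decision model of my own, not anything from the email thread: deterrence works by changing the payoffs of a _chosen_ action. If an agent picks

$$a^* = \arg\max_a \big[ U(a) - \Pr(\text{caught} \mid a) \cdot \text{Penalty}(a) \big],$$

then raising the penalty on premeditated killing moves $a^*$, because murder is one of the options being optimized over, whereas a genuine accident never enters the $\arg\max$ at all: punishing it as harshly as murder buys almost no deterrence while taxing everyone who drives.)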

[^manslaughter-disanalogy]: For one extremely important disanalogy, perps don't _gain_ from committing manslaughter.

Ben replied that what mattered in the determination of manslaughter _vs._ murder was whether there was long-horizon optimization power towards the outcome of someone's death; it wasn't about what sentiments the killer rehearsed in their working memory.

On a phone call, Michael made an analogy between EA and Catholicism. The Pope was fraudulent, because the legitimacy of the Pope's position (and his claims to power and resources) rested on the pretense that he had a direct relationship with God, which wasn't true, and the Pope _knew_ (he _had_ to know, at some level) that it wasn't true. (I agreed that this usage of _fraud_ made sense to me.) In Michael's view, Ben's charges against GiveWell were similar: GiveWell's legitimacy rested on the pretense that they were making decisions based on numbers, and they had to know at some level that they weren't doing that.
[TODO: secret thread with Ruby; "uh, guys??" to Steven and Anna; people say "Yes, of course criticism and truthseeking is important; I just think that tact is important, too," only to go on and dismiss any _particular_ criticism as insufficiently tactful.]
[TODO: complicity and friendship]
-----

[TODO: I had a productive winter blogging vacation in December 2019
pull the trigger on "On the Argumentative Form"; I was worried about leaking info from private conversations, but I'm in the clear: "That's your hobbyhorse" is an observation anyone could make from content alone]
[TODO: "Firming Up ..." Dec 2019: combatting Yudkowsky's not-technically-lying shenanigans]
-----

[TODO: plan to reach out to Rick 14 December
Anna's reply 21 December
22 December: I ask to postpone this
On 10 February 2020, Scott Alexander published ["Autogenderphilia Is Common and Not Especially Related to Transgender"](https://slatestarcodex.com/2020/02/10/autogenderphilia-is-common-and-not-especially-related-to-transgender/), an analysis of the results of the autogynephilia/autoandrophilia questions on the recent _Slate Star Codex_ survey.
I appreciated the gesture of getting real data, but I was deeply unimpressed with Alexander's analysis for reasons that I found difficult to write up in a timely manner. Three years later, I eventually got around to [polishing my draft and throwing it up as a standalone post](/2023/Mar/reply-to-scott-alexander-on-autogenderphilia/), rather than cluttering the present narrative with my explanation.
Briefly, based on eyeballing the survey data, Alexander proposes "if you identify as a gender, and you're attracted to that gender, it's a natural leap to be attracted to yourself being that gender" as a "very boring" theory, but on my worldview, a hypothesis that puts "gay people (cis and trans)" in the antecedent is _not_ boring and actually takes on a big complexity penalty: I just don't think gay men _and_ lesbians _and_ straight males with female gender identities _and_ straight females with male gender identities have much in common with each other, except sociologically (being "queer"), and by being human.
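
(To unpack "complexity penalty" in minimum-description-length terms, a gloss I'm supplying here rather than anything from Alexander's post: a simplicity prior assigns a hypothesis $H$ probability on the order of

$$P(H) \propto 2^{-\ell(H)},$$

where $\ell(H)$ is the length of the shortest description of $H$. An antecedent that's a four-way disjunction over otherwise-unrelated groups pays for every disjunct it has to enumerate; the theory only gets to be "boring" if the disjuncts share some compressible regularity, which was exactly what I was disputing.)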
[TODO: psychiatric disaster, breakup with Vassar group, this was really bad for me
[As it is written](https://www.alessonislearned.com/), "A lesson is learned but the damage is irreversible."
]

✓ "Unnatural Categories Are Optimized for Deception" [pt. 4]
✓ Eliezerfic fight: will-to-Truth vs. will-to-happiness [pt. 6]
✓ Eliezerfic fight: Ayn Rand and children's morals [pt. 6]
✓ AI timelines scam [pt. 4]

- regrets, wasted time, conclusion [pt. 6]

- "Lesswrong.com is dead to me" [pt. 4]
_ secret thread with Ruby [pt. 4]
_ progress towards discussing the real thing [pt. 4]
_ epistemic defense meeting [pt. 4]

_ December 2019 winter blogging vacation [pt. 4]
_ plan to reach out to Rick [pt. 4]

_ Eliezerfic fight: Big Yud tests me [pt. 6]
_ Eliezerfic fight: derail with lintamande [pt. 6]
_ Eliezerfic fight: knives, and showing myself out [pt. 6]

_ reaction to Ziz [pt. 4]
_ confronting Olivia [pt. 2]
_ State of Steven [pt. 4]
_ Somni [pt. 4]
_ culture off the rails; my warning points to Vaniver [pt. 4]
_ complicity and friendship [pt. 4]
_ out of patience email [pt. 4]
_ the hill he wants to die on [pt. 6?]
it was actually "wander onto the AGI mailing list wanting to build a really big semantic net" (https://www.lesswrong.com/posts/9HGR5qatMGoz4GhKj/above-average-ai-scientists)
With internet available—
_ comment on "Timelines Scam" re "The Two-Party Swindle"
_ "they had to know at some level": link to "why we can't take expected values literally"
_ publication date of "The AI Timelines Scam"
_ "around plot relevant sentences" ... only revealing, which, specifically?
_ what was I replying to, re: "why you actually don't want to be a happier but less accurate predictor"?
_ relevant screenshots for Eliezerfic play-by-play
_ Yudkowsky's LW moderation policy
far editing tier—
_ clarify Sarah dropping out of the coordination group
_ somewhere in dath ilan discussion: putting a wrapper on graphic porn is fine, de-listing Wikipedia articles is not
_ maybe current-year LW would be better if more marginal cases _had_ bounced off because of e.g. sexism
_ footnote to explain that when I'm summarizing a long Discord conversation to taste, I might move things around into "logical" time rather than "real time"; e.g. Yudkowsky's "powerfully relevant" and "but Superman" comments were actually one right after the other; and, e.g., I'm filling in more details that didn't make it into the chat, like innate kung fu
_ re "EY is a fraud": it's a _conditional_ that he can modus tollens if he wants
_ Megan (that poem could easily be about some other entomologist named Megan) ... I'm probably going to cut that §, though
_ David Xu? (Is it OK to name him in his LW account?)
_ afford various medical procedures
_ Buck? (get the story about Michael being escorted from events)
marketing—
_ Twitter