+I didn't want to start a nonprofit, either. I thought our kind of people were smart enough to function without the taboo against giving money to individuals instead of faceless institutions. I had $97,000 saved up from being a San Francisco software engineer who doesn't live in San Francisco. Besides keeping most of it as savings and spending some of it to take a sabbatical from my career, it seemed to make sense to spend some of it on unconditional gifts to Michael and others who had helped me, as a kind of credit-assignment ritual, although I wanted to think carefully about the details before doing anything rash.
+
+On a separate email thread, I ended up mentioning something to Michael that I shouldn't have, because of a confidentiality promise I had previously made. (I can tell you _that_ I screwed up on a secrecy obligation, without revealing what it was.) I felt particularly bad about this because I had been told that Michael was notoriously bad at keeping secrets. I asked him to keep this one as a favor to me.
+
+Michael replied:
+
+> Happy to not talk about it. Just freaking ask. I can easily honor commitments, just not optimize for general secrecy. The latter thing is always wrong. I'm not being sloppy and accidentally defecting against the generalized optimization for secrecy, I'm actively at war with it. We need to discuss this soon.
+
+------
+
+On 2 March 2017, I wrote to Michael about how "the community" was performing (Subject: "rationalist community health check?? asking for one bit of advice"). Michael had claimed that it was obvious that AI was far away. (This wasn't obvious to me.) But in contrast, a lot of people in the rationalist community seemed to have very short AI timelines. "Chaya" had recently asked me, "What would you do differently if AI was 5 years off?"
+
+(Remember, this was 2017. Five years later in March 2022, we were in fact still alive, but the short-timelines people were starting to look more prescient than Michael gave them credit for.)
+
+If we—my sense of the general culture of "we"—were obviously getting gender wrong, had plausibly gotten the election wrong, were plausibly getting AI timelines wrong, and I thought Moldbug and his neoreactionary friends were pointing to some genuinely valuable Bayes-structure ... it seemed like we were doing a _really poor_ job of [pumping against cultishness](https://www.lesswrong.com/posts/yEjaj7PWacno5EvWa/every-cause-wants-to-be-a-cult). Was it maybe worth bidding for a cheerful-price conversation with Yudkowsky again to discuss this? (I wasn't important enough for him to spontaneously answer my emails, and I was too submissive to just do it without asking Michael first.)
+
+Michael said there were better ways to turn dollars into opposition to cultishness. Then I realized that I had been asking Michael for permission, not advice. (Of _course_ Michael was going to say No, there's a better way to turn dollars into anti-cultishness, which would turn out to be apophenic Vassarian moonspeak that might later prove correct in ways I wouldn't understand for eight years; I shouldn't have asked.) I went ahead and emailed Yudkowsky. (Again, I won't confirm or deny whether a conversation actually happened.)