+At the time, it seemed fine for the altruistically-focused fraction of my efforts to go toward rationality, and to leave the save/destroy/take-over the world stuff to other, less crazy people, in accordance with the principle of comparative advantage. Yudkowsky had written his Sequences as a dependency for explaining [the need for friendly AI](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile), ["gambl[ing] only upon the portion of the activism that would flow to [his] own cause"](https://www.lesswrong.com/posts/9jF4zbZqz6DydJ5En/the-end-of-sequences), but rationality was supposed to be the [common interest of many causes](https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes). Even if I wasn't working at or donating to MIRI, I was still _helping_, a good citizen according to the morality of my tribe.
+
+But fighting for public epistemology is a long battle; it makes more sense if you have _time_ for it to pay off. Back in the late 'aughts and early 'tens, it looked like we had time. We had these abstract philosophical arguments for worrying about AI, but no one really talked about _timelines_. I believed the Singularity was going to happen in the 21st century, but it felt like something to expect in the _second_ half of the 21st century.
+
+Now it looks like we have—less time? Not just tautologically because time has passed (the 21st century is one-fifth over—closer to a quarter over), but because of new information from the visible results of the deep learning revolution during that time. Yudkowsky seemed particularly spooked by AlphaGo and AlphaZero in 2016–2017.
+
+[TODO: specifically, AlphaGo seemed "deeper" than minimax search, so you shouldn't dismiss it as "meh, games", the way it rocketed past human level from self-play https://twitter.com/zackmdavis/status/1536364192441040896]
+
+My AlphaGo moment was 5 January 2021, OpenAI's release of [DALL-E](https://openai.com/blog/dall-e/) (by far the most significant news story of that week of January 2021).
+
+[TODO: previous AI milestones had seemed dismissible as a mere clever statistics trick; this looked more like "real" understanding, "real" creativity]
+
+[As recently as 2020, I had been daydreaming about](/2020/Aug/memento-mori/#if-we-even-have-enough-time) working at an embryo selection company (if they needed programmers—but everyone needs programmers, these days), and having that be my altruistic[^altruism] contribution to the world.