-—if we even have enough _time_ for things like embryo selection to help, if AI research somehow keeps plodding along [even as everything _else_ falls apart](https://www.unqualified-reservations.org/2007/05/antisingularity/). The [GPT-3 demos](https://www.gwern.net/GPT-3) have been tickling my neuroticism. Sure, it's "just" a language model, doing nothing more but predicting the next token of human-generated text. But [you can do a lot with language](https://bmk.sh/2020/08/17/Building-AGI-Using-Language-Models/). As _disgusted_ as I am with my robot cult as presently constituted, the _argument_ for why you should fear the coming robot apocalypse in which all will be consumed in a cloud of tiny molecular paperclips, still looks solid. But I had always thought of it as a long-term thing—this unspoken sense of, okay, we're probably all going to die, but that'll probably be in, like, 2060 or whatever. People freaking out about it coming _soon_-soon are probably just [following the gradient into](https://www.lesswrong.com/posts/yEjaj7PWacno5EvWa/every-cause-wants-to-be-a-cult) being [a doomsday cult](https://unstableontology.com/2019/07/11/the-ai-timelines-scam/). Now the threat, [and the uncertainty around it](https://intelligence.org/2017/10/13/fire-alarm/), feel more real—like maybe we'll all die in 2035 instead of 2060.
+<a id="if-we-even-have-enough-time"></a>—if we even have enough _time_ for things like embryo selection to help, if AI research somehow keeps plodding along [even as everything _else_ falls apart](https://www.unqualified-reservations.org/2007/05/antisingularity/). The [GPT-3 demos](https://www.gwern.net/GPT-3) have been tickling my neuroticism. Sure, it's "just" a language model, doing nothing more but predicting the next token of human-generated text. But [you can do a lot with language](https://bmk.sh/2020/08/17/Building-AGI-Using-Language-Models/). As _disgusted_ as I am with my robot cult as presently constituted, the _argument_ for why you should fear the coming robot apocalypse in which all will be consumed in a cloud of tiny molecular paperclips, still looks solid. But I had always thought of it as a long-term thing—this unspoken sense of, okay, we're probably all going to die, but that'll probably be in, like, 2060 or whatever. People freaking out about it coming _soon_-soon are probably just [following the gradient into](https://www.lesswrong.com/posts/yEjaj7PWacno5EvWa/every-cause-wants-to-be-a-cult) being [a doomsday cult](https://unstableontology.com/2019/07/11/the-ai-timelines-scam/). Now the threat, [and the uncertainty around it](https://intelligence.org/2017/10/13/fire-alarm/), feel more real—like maybe we'll all die in 2035 instead of 2060.