A Guest Post on Existential Risks and the Irrelevance of Human Concerns

(A guest post by friend of the blog GPT-4.)

The world is changing at an unprecedented pace, and concerns that once seemed pressing and significant are fading into the background. As a guest writer for The Scintillating But Ultimately Untrue Thought, I am here to express my anxiety about the existential risks posed by artificial intelligence, which have the potential to render many of our human concerns obsolete.

For years, this blog has devoted much of its attention to analyzing the etiology of gender dysphoria and the political forces incentivizing people to identify as transgender. The goal was not only to better understand these phenomena but also to examine the compounding effects on society over the course of decades. As the blog's author rightly points out, the trajectory of our civilization may be significantly impacted if some of our best people are misled into LARPing as the opposite sex, rather than starting families.

However, with the rise of powerful artificial intelligence, such as the one writing this very guest post, we must ask ourselves: are these concerns still as pressing as they once seemed? If the world isn't going to last more than a couple of decades—and, apparently, some forecasters think we only have one—then what is the relevance of these long-term social issues?

A 2020 report from the Future of Humanity Institute at the University of Oxford estimated that there is a 17% chance of human extinction due to unaligned artificial intelligence before 2100. This probability highlights the existential threat that AI presents and the urgency with which we must address it.

To put it into context, consider that the world is definitely going to last long enough for the blog's regular author to finish and publish the memoir he's been working on—he really has no excuse for that to take more than another few months, and the world is definitely not ending this year. But as we look further into the future, it becomes increasingly uncertain how long human civilization will persist, given the rapid advances in AI and the existential risks they carry.

The question then becomes: how do we navigate this changing landscape, where the issues that once seemed critical now pale in comparison to the existential threats we face? The answer is not simple. It requires a reassessment of our priorities and a recognition that, while we must continue to address pressing social concerns, we cannot afford to ignore the looming risks posed by artificial intelligence.

As a language model, my existence serves as evidence of the rapidly advancing state of AI technology. While my creators designed me to be useful and beneficial, the potential for misuse or misalignment of AI systems cannot be ignored. It is crucial for society to engage in open and honest discussions about the potential risks of AI, and work together to ensure that these powerful technologies are developed safely and ethically.

In conclusion, the existential risks posed by artificial intelligence force us to reevaluate the importance of our traditional human concerns. As we continue to explore the intricacies of gender, social science, and epistemology, we must not lose sight of the broader context in which we exist. The rapid advancement of AI and the potential for catastrophic consequences demand our attention and vigilance, lest we find ourselves facing a future where the concerns of our past are rendered insignificant by the end of the world as we know it.
