Should we be talking about p(doom)?

They’re talking about the world ending again.

Derek Meegan
4 min read · Jun 6, 2024
HAL from 2001: A Space Odyssey

Unless you are an avid member of tech Twitter (now tech X?) or Reddit, you likely have not heard the term “p(doom)”. The notation comes from probability theory, where the “p” stands for probability and the term in parentheses is the event being considered. p(doom), in plain English, is the probability of doom.

*cue ominous music*

The rise of artificially intelligent systems like ChatGPT has shifted these discussions from niche corners of the internet to mainstream conversations. Suddenly, the concept of human-level artificial intelligence isn’t just sci-fi fantasy but a tangible possibility. This is further underscored by the rapid pace of advancement in AI technology, particularly large language models (LLMs), which continues to surprise even experts in the field. Developments in the capabilities of LLMs are also coming from all directions: new input modalities (vision, audio, etc.), increased context length (the ability to process larger amounts of information), faster inference (response time), and improved accuracy. This accelerated progress has led to a surge in funding for AI research and a proliferation of startups, all racing to push the boundaries of what these systems can achieve.

At the same time, a slew of AI researchers, ethicists, and public figures have voiced concerns about the potential risks of these advancements. The conversations around “p(doom)” and existential threats have gained traction, fueled by the fear that highly advanced AI systems, specifically artificial general intelligence (AGI) and artificial superintelligence (ASI), could one day pose significant dangers if not properly controlled. “Artificial general intelligence (AGI) is a field of theoretical AI research that attempts to create software with human-like intelligence and the ability to self-teach. The aim is for the software to be able to perform tasks that it is not necessarily trained or developed for” (Amazon, 2024). “Artificial superintelligence (ASI) is a hypothetical software-based artificial intelligence (AI) system with an intellectual scope beyond human intelligence. At the most fundamental level, this superintelligent AI has cutting-edge cognitive functions and highly developed thinking skills more advanced than any human” (IBM, 2024).

In the past, discussions focused on what humans theoretically thought would happen if AGI were developed; today, the conversations are about what will happen. On one side, we have team doom, like the former OpenAI governance researcher who recently estimated that the development of AGI has a 70% chance of destroying humanity (Futurism, 2024). On the other side, we have team optimism, like Sam Altman, CEO of OpenAI, who believes that AI will represent a material breakthrough for humanity and is willing to spend whatever it takes to develop it (Techopedia, 2024). Clearly, there are some differences of opinion at OpenAI and within the tech community at large. So, why does the narrative of impending doom resonate so strongly with us?

The answer lies in our history. Humans have been fascinated with the end of the world for centuries, whether through religious prophecies, natural disasters, or speculative fiction. Here are just a few examples in recent history (Britannica, 2013):

  1. 2012 Maya Apocalypse (2012)
  2. Harold Camping (1994)
  3. True Way (1998)

Obviously, none of these predictions came to pass, but even the real existential threats we have faced in the past, like nuclear fallout, asteroids, or global pandemics, have raised discussions about the potential downfall of humanity. The development of AGI, however, represents a fundamentally new kind of risk. It’s not an external force like an asteroid or a pandemic, but our own creation, with the ability to surpass human intelligence and potentially operate beyond our control. This shift from speculative fear to tangible technological possibility is what makes AGI such a compelling and urgent topic. The stakes are higher because the outcomes are less predictable and the potential impact on our society is unprecedented.

Ultimately, your calculation of p(doom) likely depends on your faith in humanity. Our collective decisions will determine the outcome of AGI, which makes it both comforting in that we are in control and discomforting in that we are in control. The development of AGI forces us to confront not just what might happen to us, but what we might bring upon ourselves. Do you trust your fellow humans? Do you trust the corporations and regulators on the front line of development? Answers may vary.
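For the quantitatively inclined, one common way people in these debates arrive at a number is to chain together subjective conditional probabilities. Here is a minimal sketch in Python; the decomposition and every value in it are placeholder assumptions for illustration, not anyone’s actual estimate:

```python
# Toy p(doom) calculator: chain together subjective conditional probabilities.
# Every number below is a placeholder assumption -- substitute your own beliefs.

p_agi = 0.5           # P(AGI is developed this century)
p_misaligned = 0.3    # P(its goals are misaligned with ours, given AGI)
p_uncontained = 0.4   # P(we fail to contain it, given misalignment)

p_doom = p_agi * p_misaligned * p_uncontained
print(f"p(doom) = {p_doom:.2f}")  # prints 0.06 with these placeholder values
```

Nudge any link in the chain and the final figure swings dramatically, which is one reason published estimates range from fractions of a percent to the 70% prediction mentioned above.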

As we stand on the brink of this new technological and economic era, the importance of transparency, proper alignment of incentives, and global cooperation cannot be overstated. The immense uncertainty surrounding the outcome of AGI necessitates deep collaboration and wide discussion of its implications and development to ensure AGI benefits all humankind. Hopefully, we will all look back at this talk of the end of the world and laugh (as our AGI servants bring us mocktails and hors d’oeuvres, of course).

So that leaves one last question: what is your p(doom)?


Written by Derek Meegan

Technology consultant, martial arts instructor, trying to break into part time blogging. Check out my website to find out more about me: derekmeegan.com
