Estimating the probability of AI existential catastrophe: Converging on the answer from opposite ends

Authors

  • Leonard Dung, Ruhr-Universität Bochum

DOI:

https://doi.org/10.18716/ojs/phai/2025.2854

Keywords:

existential risk, multipolar scenarios, alignment, probability, estimation

Abstract

In this commentary, I focus on how the survival story approach of Cappelen et al. advances the probabilistic estimation of AI doom. While I highly commend their methodology, I make two points. First, their paper partly neglects “multipolar” survival stories, in which many different superhumanly intelligent AI systems coexist. Second, this neglect illustrates a general issue: by overlooking important survival stories, their methodology threatens to overestimate the probability of AI doom. The survival story methodology can therefore be seen as providing an upper bound on the probability of AI doom. Since the traditional methodology provides a lower bound, the two methodologies should be combined.
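
As a minimal illustration (my notation, not the commentary's): if the traditional methodology yields a lower-bound estimate $p_{\mathrm{low}}$ and the survival story methodology yields an upper-bound estimate $p_{\mathrm{high}}$, then combining the two amounts to the interval estimate

\[
p_{\mathrm{low}} \;\le\; p_{\mathrm{doom}} \;\le\; p_{\mathrm{high}},
\]

so the combined output is the range $[p_{\mathrm{low}}, p_{\mathrm{high}}]$ rather than a single point estimate.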

Published

2025-12-25

Section

Responses to Target Article