Estimating the probability of AI existential catastrophe: Converging on the answer from opposite ends
DOI: https://doi.org/10.18716/ojs/phai/2025.2854

Keywords: Existential Risk, Multipolar scenarios, alignment, probability, estimation

Abstract
In this commentary, I focus on how the survival story approach of Cappelen et al. advances probabilistic estimation of AI doom. While I highly commend Cappelen et al.'s methodology, I make two points. First, their paper to some extent neglects "multipolar" survival stories, in which many different superhumanly intelligent AI systems coexist. Second, this is an instance of a general issue: their methodology threatens to overestimate the probability of AI doom by overlooking important survival stories. The survival story methodology can thus be seen as providing an upper bound on the probability of AI doom. Since the traditional methodology provides a lower bound, the two methodologies should be combined.
License
Copyright (c) 2025 Leonard Dung

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.