Defending Alignment: A Commentary On ‘AI Survival Stories’

Authors

  • Rory Svarc, Arb Research

DOI:

https://doi.org/10.18716/ojs/phai/2025.3327

Keywords:

Artificial Intelligence, AI Alignment, AI Catastrophe, Superintelligent AI, Existential Risk

Abstract

This paper criticises the claim of Cappelen et al. (2025) to have provided “significant challenges” to the view that humanity will not be destroyed by AI. Specifically, I argue that they fail to substantiate their claim that extremely powerful future AI systems will engage in destructive conflict with humanity.

Published

2025-12-25

Issue

Section

Responses to Target Article