Defending Alignment: A Commentary On ‘AI Survival Stories’
DOI:
https://doi.org/10.18716/ojs/phai/2025.3327
Keywords:
Artificial Intelligence, AI Alignment, AI Catastrophe, Superintelligent AI, Existential Risk
Abstract
This paper criticises Cappelen et al.'s (2025) claim to have posed "significant challenges" to the thesis that humanity will not be destroyed by AI. Specifically, I argue that they fail to substantiate their contention that the extremely powerful AI systems of the future will engage in destructive conflict with humanity.
Published
2025-12-25
Section
Responses to Target Article
License
Copyright (c) 2025 Rory Svarc

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.