Automation and its Discontents: Leveraging Automation to Safeguard Humanity
DOI: https://doi.org/10.18716/ojs/phai/2025.11844

Keywords: Artificial Intelligence, Existential Risk, AI Safety, AI Catastrophe, Superintelligent AI, AI Alignment

Abstract
This paper examines strategies for establishing a long-lasting ban on advanced AI research to mitigate existential risks from artificial intelligence. We evaluate Cappelen, Goldstein, and Hawthorne’s proposal to leverage AI accidents as warning shots and find that it faces substantial ethical and practical challenges. As an alternative, we propose leveraging social unrest from rapid AI-driven automation. We argue that widespread job displacement could mobilise global labour movements against AI advancement, creating sufficient political pressure for an enforceable international ban on frontier AI research, and that this offers a more viable path to human survival in the face of advanced AI than current alternatives.
License

Copyright (c) 2025 Aksel Sterri, Peder Skjelbred

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


