Automation and its Discontents: Leveraging Automation to Safeguard Humanity

Authors

  • Aksel Sterri, University of Oslo
  • Peder Skjelbred, University of Oslo

DOI:

https://doi.org/10.18716/ojs/phai/2025.11844

Keywords:

Artificial Intelligence, Existential Risk, AI safety, AI Catastrophe, Superintelligent AI, AI Alignment

Abstract

This paper examines strategies for establishing a long-lasting ban on advanced AI research to mitigate existential risks from artificial intelligence. We evaluate Cappelen, Goldstein, and Hawthorne’s proposal to leverage AI accidents as warning shots and find that it faces substantial ethical and practical challenges. As an alternative, we propose leveraging social unrest from rapid AI-driven automation. We argue that widespread job displacement could mobilise global labour movements against AI advancement, creating sufficient political pressure for an enforceable international ban on frontier AI research, and that this offers a more viable path to human survival in the face of advanced AI than current alternatives.

Published

2025-12-25

Section

Responses to Target Article