Call for Papers

1. Call for Commentary for Paper Symposium on “AI Survival Stories: a Taxonomic Analysis of AI Existential Risk” by Cappelen, Goldstein, and Hawthorne

The new (fully) open access journal “Philosophy of AI” is excited to announce a call for papers for a symposium on the paper “AI Survival Stories: a Taxonomic Analysis of AI Existential Risk” by Herman Cappelen, Simon Goldstein, and John Hawthorne.

The pre-print of the paper can be found here: https://www.dropbox.com/scl/fi/dokrbj45kva19g8z3iken/AI-Survival-Stories-Cappelen-et-al-29.10.24.pdf?rlkey=j1lx9z9124smghv26h0ffcr19&st=x9e84p14&dl=0

Abstract: Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy of ‘survival stories’, in which humanity survives into the far future. In each survival story, one of the two premises fails. Either scientific barriers prevent AI systems from becoming extremely powerful; or humanity bans research into AI systems, thereby preventing them from becoming extremely powerful; or extremely powerful AI systems do not destroy humanity, because their goals prevent them from doing so; or extremely powerful AI systems do not destroy humanity, because we can reliably detect and disable systems that have the goal of doing so. We argue that different survival stories face different challenges. We also argue that different survival stories motivate different responses to the threats from AI. Finally, we use our taxonomy to produce rough estimates of ‘P(doom)’, the probability that humanity will be destroyed by AI.

The paper symposium will include contributions from Josh Dever, Dmitri Gallow, Seth Lazar, Kate Vredenburgh, Leonard Dung, and others.

If you are interested in contributing a short reply paper of around 1,000 words (not exceeding 4,000 words, excluding the bibliography), please submit it directly to our journal.

The deadline is March 1, 2025.

Publication in our journal is completely free of charge, and every reply will receive a DOI.

For further information please contact:

Guido Löhr g.lohr@vu.nl
 
 
 
2. CFP for Special Issue on “Language and AI” in the new open access journal “Philosophy of AI”
 
In recent years, stunning breakthroughs in AI have emerged in systems whose primary interaction with human users is verbal: these include chatbots and Large Language Models (LLMs). The success of such systems raises questions about how we should conceptualize their communicative proficiency. Do they perform speech acts in any of the established senses of that notion found in pragmatics and the philosophy of language, or does their communicative proficiency fall below those standards? If the latter, is that an in-principle difference, or one that stands to be overcome by further technological innovation? Does the (in)ability of chatbots and LLMs to make promises, ask questions, issue commands, or make statements in ways relevantly similar to what human beings do carry ethical implications for our relationship with AIs? Finally, would embodying a chatbot, LLM, or other language-using technology in a social or humanoid robot have implications for any of the above questions?
 
A short bibliography of works that have addressed the issue recently:
  • Arora, C. (2024). Proxy Assertions and Agency: The Case of Machine-Assertions. Philosophy & Technology, 37.
  • Butlin, P. (2023). Sharing Our Concepts with Machines. Erkenntnis, 88.
  • Dung, L. (2024). Understanding Artificial Agency. The Philosophical Quarterly, 74.
  • Green, M., & Michel, J. G. (2022). What Might Machines Mean? Minds and Machines, 32(2), 323–338.
  • Gubelmann, R. (2024). Large Language Models, Agency, and Why Speech Acts Are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case. Philosophy & Technology, 37.
  • Nickel, P. J. (2013). Artificial Speech and Its Authors. Minds and Machines, 23, 489–502.
  • van Woudenberg, R., Ranalli, C., & Bracker, D. (2024). Authorship and ChatGPT: A Conservative View. Philosophy & Technology, 37(1), 34.
 
Other confirmed contributors are:
Neri Marsili, Steffen Koch, Paolo Monti, Herman Cappelen, Rachel Sterken, Alexander Wiegmann, Markus Kneer, Marta Halina, Paula Sweeney, Marianna Bergamaschi Ganapini and Laura Weidinger.
 
Submissions should be around 8,000 words in length (excluding bibliography but including footnotes) and may address any of the above questions, or make a strong case for addressing another question that is not listed above but is still relevant to language and AI.
 
Prepare submissions for blind review.
Deadline for submission: March 1, 2025. 
 
Submissions can be made through the journal website: 
https://philai.net/journal  
 
For further information please contact:
Guido Löhr g.lohr@vu.nl 
 
Or one of the other guest editors: 
jan.michel@hhu.de
mitchell.green@uconn.edu

 

 

3. Call for Papers: Special Issue on "Philosophy of Neuromorphic AI"

Journal: Philosophy of AI
Deadline for Submission: August 31, 2025

Overview:
While conventional digital computers have been the state of the art in AI for at least the last seven decades, a new paradigm is emerging. With Moore’s law coming to an end, computer engineering has turned its attention towards chips that work with mechanisms analogous to those in the brain. So-called neuromorphic hardware consists of physical “neurons” interconnected via “synapses”; it promises to reduce the power consumption of large AI models by two orders of magnitude and could thus replace digital AI implementations within the next few decades. Philosophy is only now starting to consider the potential implications of hardware structures that replicate aspects of the neural mechanisms in the mammalian brain. This special issue offers interdisciplinary researchers a platform to foster debate on the philosophical implications of neuromorphic technologies, as well as the potential risks and benefits of their use.

Topics of Interest:
We welcome papers that engage with the following topics, among others:

  • Can neuromorphic hardware support neural representations?

  • Do neuromorphic chips enable neural experiments by building, rather than simulating, neural networks?

  • How relevant is energy efficiency to computational neuroscience, given its evolutionary advantage in biological systems?

  • In what ways does learning in neuromorphic systems resemble or differ from neural learning in biological systems?

  • Is neuromorphic computation relevant for computational explanation in contemporary neuroscience?

  • Are neuromorphic chips more likely, or better able, than conventional chips to implement artificial consciousness?

  • How should we evaluate the predictions of theories of consciousness such as Integrated Information Theory regarding neuromorphic hardware?

  • Are there ethical constraints on building brain-like systems?

Contributors:
We are pleased to announce that the following scholars have already agreed to contribute to this special issue:

  • Mazviita Chirimuuta

  • Wanja Wiese

  • Peter Grindrod

  • Inês Hipólito

  • Derek Shiller

Submission Guidelines:

  • There is no strict word limit, but ideally manuscripts should be around 8,000 words and prepared in accordance with the journal's formatting guidelines (the use of a reference manager and APA style is strongly encouraged).

  • Submissions should be original and not under consideration for publication elsewhere.

  • All submissions will undergo peer review.

How to Submit: Please submit your manuscripts through the journal's online submission system (https://journals.ub.uni-koeln.de/index.php/poai/about/submissions). Make sure to indicate that the paper is intended for the "Philosophy of Neuromorphic AI" special issue. Papers should be prepared for blind review.

Important Dates:

  • Submission Deadline: August 31, 2025

  • Papers will be published on a rolling basis as soon as they are accepted.


 Contact Information:
For any inquiries, please contact the guest editors:

Johannes Brinz, University of Osnabrück: johannes.brinz@uos.de

Gualtiero Piccinini, University of Missouri

For any inquiries about the journal, please contact the editors-in-chief, e.g., Guido Löhr: g.lohr@vu.nl

We look forward to your contributions to this exciting topic in the philosophy of artificial intelligence!