What does it take to establish reference in LLMs? Kripke vs. Austin (Response to Green)

Authors

  • Steffen Koch

DOI:

https://doi.org/10.18716/ojs/phai/2025.11963

Keywords:

Austin, Kripke, Reference, Meaning, Chatbots, Large Language Models

Abstract

Are the texts generated by large language models (LLMs) meaningful, or are they merely simulacra of language? Against a recent trend in AI scholarship that views LLMs as little more than “stochastic parrots,” in Koch (2025) I use a Kripke-inspired causal theory of reference to argue that LLMs can use names and kind terms with their usual referential properties. Green (2025), a response to Koch (2025), rejects this causal-theoretic account of LLM reference and proposes an Austin-inspired alternative. The present paper defends the Kripkean approach and raises objections to Green’s alternative.

Published

2025-12-18

Section

Topical Collection "Language and AI"