What does it take to establish reference in LLMs? Kripke vs. Austin (Response to Green)
DOI: https://doi.org/10.18716/ojs/phai/2025.11963

Keywords: Austin, Kripke, Reference, Meaning, Chatbots, Large Language Models

Abstract
Are the texts generated by large language models (LLMs) meaningful, or are they merely simulacra of language? Against a recent trend in AI scholarship that views LLMs as little more than “stochastic parrots,” in Koch (2025), I use a Kripke-inspired causal theory of reference to argue that LLMs can use names and kind terms with their usual referential properties. Green (2025), a response to Koch (2025), rejects the causal-theoretic account of LLM-reference and proposes an Austin-inspired alternative. The present paper defends the Kripkean approach and raises objections to Green’s alternative.
License
Copyright (c) 2025 Steffen Koch

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


