Babbling stochastic parrots? A Kripkean argument for reference in large language models
DOI: https://doi.org/10.18716/ojs/phai/2025.2325

Keywords: Large Language Models, Chatbots, Meaning, Reference, Semantic Externalism

Abstract
Recently developed large language models (LLMs) perform surprisingly well in many language-related tasks, ranging from text correction and authentic chat experiences to the production of entirely new texts or even essays. It is natural to get the impression that LLMs know the meanings of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper develops a more optimistic view. Drawing on classic externalist accounts of reference, it argues that LLM-generated texts meet the conditions of successful reference, at least for proper names and so-called paradigm terms. The key insight is that an LLM may inherit reference from its training data through a reference-sustaining training mechanism.
License
Copyright (c) 2025 Steffen Koch

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.