Babbling stochastic parrots? A Kripkean argument for reference in large language models

Authors

  • Steffen Koch, Bielefeld University

DOI:

https://doi.org/10.18716/ojs/phai/2025.2325

Keywords:

Large Language Models, Chatbots, Meaning, Reference, Semantic Externalism

Abstract

Recently developed large language models (LLMs) perform surprisingly well on many language-related tasks, ranging from text correction and authentic chat experiences to the production of entirely new texts and essays. It is natural to get the impression that LLMs know the meaning of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper develops a more optimistic view. Drawing on classic externalist accounts of reference, it argues that LLM-generated texts meet the conditions of successful reference. This holds at least for proper names and so-called paradigm terms. The key insight is that an LLM may inherit reference from its training data through a reference-sustaining training mechanism.

Published

2025-06-23

Issue

Special Issue "Language and AI"