Introduction

Christoph Durt¹ (ORCID: 0000-0002-2934-1875), Sybille Krämer²
¹ Technical University of Munich
² Leuphana University Lüneburg

Large Language Models (LLMs) have surprised not only laypersons but even the most optimistic researchers working in the field. To an astonishing extent, LLMs produce meaningful text that can provide a good answer to a prompt, retrieve information, summarize or reformulate text, produce outlines, give feedback, and much more that previously required human mental work—and all this in everyday written language. Yet this remarkable practical success raises wide-ranging questions that extend far beyond technical capabilities. The very fact that LLMs can participate so effectively in human linguistic practices compels us to reconsider fundamental topics philosophers have studied for millennia, such as language, communication, meaning, speech, writing, authorship, experience, thinking, the mind, and truth.

This special issue explores the philosophical dimensions of LLM-based generative and synthetic media, examining how this emerging technology might reshape traditional concepts as well as our modes of scholarly reasoning and inquiry. While we focus primarily on text-generating LLMs, many insights also extend to other forms of generative AI that produce audio, images, and videos. The debate surrounding LLM capabilities spans a broad spectrum and remains both open and contentious, with ongoing disagreement about which abilities we can legitimately attribute to today’s AI chatbots.

One way to reconstruct the controversy surrounding LLMs is to distinguish between two fundamental alternatives: (i) LLMs understand human language in a way that is analogous to that of humans, or (ii) LLMs operate through computational processes that produce linguistically coherent outputs without requiring interpretation or understanding in any recognizably human sense.

The alleged capability of LLMs to understand meaning further feeds the idea that contemporary AI is on the way to developing Artificial General Intelligence, or at the very least a “degree of general intelligence” (Manning 2022). Like ideas concerning artificial suffering (Metzinger 2021), sentience (Lemoine 2022), or consciousness (Roose 2025), the supposition of LLM understanding makes the technology appear to be on the path to a replication of the human mind. Others “merely” claim that AI can simulate all “feature[s] of human intelligence” (McCarthy et al. 1955), but both expectations contribute to the enthusiastic utopianism that casts AI as a savior and the solution to “all material problems” of our time (Andreessen 2023). Simply equating human and machinic text production, however, risks overlooking fundamental differences that may be crucial for understanding the capabilities and limits of LLMs.

This volume, in contrast, investigates the novel and complex interplay between humans and LLMs. The contributors share a commitment to exploring the second path—investigating how LLMs can participate meaningfully in human linguistic practices without requiring traditional notions of understanding or intelligence.

The skeptical view that LLMs don’t understand language raises a compelling question: How do they produce output that makes sense to humans in response to prompts? The claim that LLMs simply “parrot” human language use (Bender et al. 2021) falls short of explaining their capabilities. Parroting alone cannot account for why LLMs generate output that responds meaningfully to diverse prompts; something more sophisticated is happening. Some suggest that LLMs do, after all, understand language, just in a different way than humans do. On this view, LLMs demonstrate “new modes of understanding, most likely new species in a larger zoo of related concepts” (Mitchell and Krakauer 2023). This calls into question the line between understanding and non-understanding, which is in fact not as clear-cut as it might initially seem.

In fact, “understanding” may not be the right concept at all. Attempts to use the concept of understanding to account for the capabilities of LLMs can cause more confusion than clarity about the novel and complex interplay between human-produced and LLM-generated text. To explain how LLMs generate such complex output without necessarily understanding it, the fundamental question of the special issue is: How do LLMs deal with the patterns of human language use in ways that make sense to humans? For the authors of this volume, the notion that LLMs process statistical structures and patterns in human language use is the starting point rather than a conclusion. The authors reconceptualize not simply “understanding” but a wider space of concepts, including authorship, communication, conceptual schemes, context, education, language use, meaning, models, patterns, representation, synthesis, understanding, and world. The ideas presented are fundamental not only for a better understanding of the kinds of exchanges possible with LLMs, but also for questions about how much we can trust them and which uses are reasonable and ethical.

We would now like to highlight basic tenets on which most contributors to this issue would agree. They acknowledge AI’s differences from human language use, cognition, and communication. There is no need to ascribe mental capacities to algorithms and machines or to offer animistic or anthropomorphic explanations. Because the same ends can be achieved by different means, the production of intelligent or meaningful output does not require intelligence or understanding on the part of the LLMs. The apparently human-like output masks a fundamental difference: LLMs model statistical patterns in vast corpora of text, patterns of which humans are usually either unaware or only tacitly aware. Humans experience patterns at various levels, often quite vividly, but when we write or read ordinary language, we never compute statistical relations between patterns. Rather, people must interpret and make sense of language in order to react to linguistic utterances, although the degree and manner of understanding can vary greatly, both culturally and personally. However, meaningful exchange is still possible when some participants have different understandings of the topic, and possibly even when some have no understanding at all. In the following, we give a brief summary of each contribution.

Elena Esposito critically examines the concept of intelligence in relation to recent AI developments. While much discourse frames AI as emergent intelligence—whether feared as an autonomous “alien mind” or embraced as augmented intelligence—Esposito argues that such comparisons mislead. AI’s success lies not in replicating human intelligence but in leveraging vast amounts of data to identify patterns and generate responses that appear meaningful to users. Drawing from communication theory, particularly Niklas Luhmann’s systems theory, she proposes shifting from the notion of artificial intelligence to artificial communication, in which algorithms facilitate interaction without genuine understanding. This reframing emphasizes the impact of LLMs on communication and societal structures rather than their supposed intelligence. The contribution concludes with challenges such as algorithmic bias, misinformation, and AI’s influence on public discourse, which demand new regulatory and ethical frameworks.

Wilrich Jeffrey Nieto builds on Esposito’s concept of “artificial communication” to argue that LLMs function not as autonomous intelligent agents but as non-understanding participants in communication that reshape human linguistic practices. LLMs are described as instantiations of what Marx called the “humanization of nature”—objectifications of human capacities and embodiments of social relations that both enable and constrain communication. Under capitalism, LLMs become sites of alienation where human linguistic power is extracted, commodified, and redirected toward market imperatives, privileging immediacy, coherence, and legibility over ambiguity and interpretive depth. LLMs reorganize linguistic communication through statistical pattern recognition, transforming writing from deliberation to curation and creating a “profilic self” formed through algorithmic personalization. For Nieto, transforming artificial communication requires not merely technical improvements but democratic control of digital infrastructures; LLMs reflect our own powers under current social conditions, and their problems and potentials are therefore fundamentally human ones.

Xyh Tamura challenges dominant narratives that frame AI, robots, and LLMs through terminologies of interiority such as consciousness, intelligence, and sentience. Instead, he suggests a relational framework grounded in interaction. Drawing on theories such as actant affordances and relational personhood, Tamura argues that technologies already function as social actors by participating in rituals, maintaining emotional bonds, and co-creating meaning within human-technology networks. Rather than asking whether machines possess human-like traits, the analysis emphasizes how personhood, communication, and affect emerge dynamically through situated interactions. Examples from Japanese robotics, chatbot grief mediation, and ritual contexts illustrate how these technologies generate new forms of kinship, emotional resonance, and sociality—not by mimicking humans but by enabling novel relational possibilities. This framing recasts AI not as a failed imitation of human minds but as a co-constitutive partner in cultural and communicative ecosystems.

Sybille Krämer works out a third position beyond the two ideas that LLMs are either blind or sensitive to meaning by describing how LLMs enable new epistemic interactions between individual cognition, the socially distributed mind, and an alien kind of machine intelligence different from human intelligence. Drawing on the “cultural technique of flattening,” which has enabled crucial advances in the history of cognitive capabilities, Krämer explains that LLMs transform what for humans is meaningful text into calculable proximities within vector spaces. Human and machine language processing are different perspectives on the same phenomenon—written colloquial language—rather than competing interpretations. Yet the two perspectives cannot be adopted at the same time—like the aspects of Wittgenstein’s duck-rabbit figure. It is precisely the ‘otherness’ and alterity between human meaning-sensitivity and machine meaning-neutrality that creates the conditions for productive human-machine collaboration, positioning contemporary AI as an extension of historically established cultural techniques for externalizing cognitive processes.

David Gunkel explores how LLMs challenge logocentric conceptions of writing that have dominated Western thought, including concepts of authorship, truth, and meaning itself. Drawing on media theorist Vilém Flusser’s eponymous question, Gunkel argues that LLMs do not signal the end of writing but rather expose the limitations of logocentrism—a tradition privileging speech over writing, assuming language directly represents reality, and centering authorial intention. LLMs deconstruct three fundamental logocentric elements: they undermine conventional notions of authorship by producing “unauthorized” texts; they shift meaning-making from authorial intent to reader interpretation; and they function as structuralist machines operating purely within systems of linguistic difference without access to external referents. Rather than viewing these as deficiencies, Gunkel suggests they reveal opportunities to reconceptualize writing beyond logocentric constraints, positioning LLMs not as threats to human communication but as catalysts for new understandings of textuality and meaning-making.

Anna Strasser examines AI’s concrete impact on human authorship and the trustworthiness of electronically distributed texts. In the field of education, in particular, fundamental questions arise concerning the conditions and quality of intellectual work when artificial intelligence tools are used. Two areas are highlighted: the increasing indistinguishability between texts written by humans and those generated by machines, which raises questions of authorship, and the inherent unreliability of generative AI, which raises the question of how much we can trust such tools at all. Possible answers to these questions are outlined within the fields of education and epistemology, and the legitimate use of such tools is discussed. Finally, the paper formulates and discusses the risk that the use of these epistemic tools could lead to a general deskilling effect.

Hadi Asghari and Filip Biały go a step further in the direction of LLM comprehension by investigating whether LLMs possess conceptual networks akin to human ideologies, beyond surface-level patterns and text reproduction. The comprehension of political philosophies by seven widely used LLMs is tested on theories of justice. Using Bloom’s taxonomy as an evaluation framework, the study assesses the LLMs on their recall, application, and reflective capabilities. The results demonstrate significant performance variations, with one model exhibiting sophisticated comprehension while others generate confused or generic responses. The findings suggest LLMs may possess internal conceptual maps or networks resembling ideological frameworks, enabling reasoning about novel scenarios consistent with specific philosophical theories. This challenges characterizations of LLMs as mere word-frequency models, though their cognitive processes remain fundamentally different from human understanding. The implications extend to both AI research and political theory, where morphological analysis of ideologies could provide valuable insights into studies of meaning within neural networks.

Shane Denson discusses the concept of ‘synthesis’ associated with the production of generative media. He starts from M. Beatrice Fazi’s theory that LLMs genuinely generate language as a result of a process familiar from Kant’s philosophy, namely the synthesis of a variety of elements into the structural unity of an internal world. For Denson, however, what chatbots produce are not internal linguistic representations that are separate from the external, phenomenal world. Engaging with Donald Davidson’s critique of conceptual schemes and with the assumption that humans and technology interactively share the phenomenal reality of human communication, Denson argues that AI does not produce self-contained and secluded worlds detached from human experiences. By situating LLMs within a broader framework of distributed representation, mediation, and the social nature of cognition, the paper reconsiders the role of AI in shaping linguistic meaning and its implications for our understanding of intelligence, worldhood, and representation.

M. Beatrice Fazi responds to Shane Denson’s disagreements with her paper. She defends and further develops her transcendental argument about LLMs, according to which they construct a representational world within themselves and perform synthetic activities that unify representations into coherent structures. Fazi draws on Kantian transcendental philosophy while rejecting claims that this approach is inherently anthropocentric. LLMs do not mimic human cognitive synthesis but produce outputs that can be interpreted as real language production, which can be realized alternatively by humans or machines. Such a structuralist reinterpretation of Kantian synthesis in terms of its functional aspects provides a more suitable account of the operations of LLMs, where unity is that of a structure, not of a self. This allows artificial intelligence to be understood without resorting to anthropomorphic models. This perspective contrasts with Denson’s more phenomenological approach, which seeks to establish continuity between human and machine meaning-making processes.

Christoph Durt argues that LLMs produce meaningful text by transforming “co-text”—numerical relationships between text parts—rather than engaging with full contextual meaning. The transformation of co-text does not require an understanding of meaning, which demands embedding text in the broader context of human language use, including our lived experience in the world. Drawing on philosophical distinctions from Wittgenstein, Derrida, Distributional Semantics, and Denotational Semantics, the analysis shows that although LLMs can effectively model and recombine patterns of language use in sophisticated ways, they fundamentally lack access to the broader communicative, situational, and experiential contexts that ground meaning for humans. LLMs neither simply “parrot” text nor truly “understand meaning.” Instead, they transform co-textual patterns in ways that humans can interpret as meaningful within their own contextual frameworks. This explains how numerical word relationships can produce text that appears meaningful without requiring genuine semantic understanding from the machine itself.

Earlier versions of most of the contributions to this special issue were first presented at the international and interdisciplinary workshop “LLMs and the Patterns of Human Language Use,” held at the Weizenbaum Institute in Berlin, August 29–30, 2024. The workshop focused on how LLMs can participate in human language games despite the fundamental difference in text production. It emerged from the Focus Group ‘Foundations of Digital Philosophy’ of the German Society for Philosophy (DGPhil), which has also been a fertile ground for further discussion. We thank all participants of the workshop and the Focus Group for their insightful contributions. Members of the group include Anna Strasser, Auris Lipinski, Christian Schröter, Christiane Schöttler, Katrin Becker, Marie-Theres Fester-Seeger, Raphael Brähler, Sabine Thürmel, Sebastian Richter, Sergio Kirichuk, Stefania Centrone, and Klaus Wagner, as well as other participants who cannot all be named here.

References

Andreessen, Marc. 2023. “The Techno-Optimist Manifesto.” Andreessen Horowitz. October 16, 2023. https://a16z.com/the-techno-optimist-manifesto/.
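Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. https://doi.org/10.1145/3442188.3445922.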
Lemoine, Blake. 2022. “Is LaMDA Sentient? — An Interview.” Medium (blog). June 11, 2022. https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917.
Manning, Christopher D. 2022. “Human Language Understanding & Reasoning.” Daedalus 151 (2): 127–38. https://doi.org/10.1162/daed_a_01905.
McCarthy, John, M. L. Minsky, N. Rochester, and C. E. Shannon. 1955. “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” https://rockfound.rockarch.org/digital-library-listing/-/asset_publisher/yYxpQfeI4W8N/content/proposal-for-the-dartmouth-summer-research-project-on-artificial-intelligence.
Metzinger, Thomas. 2021. “Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology.” Journal of Artificial Intelligence and Consciousness 8 (1): 43–66. https://doi.org/10.1142/S270507852150003X.
Mitchell, Melanie, and David C. Krakauer. 2023. “The Debate Over Understanding in AI’s Large Language Models.” Proceedings of the National Academy of Sciences 120 (13): e2215907120. https://doi.org/10.1073/pnas.2215907120.
Roose, Kevin. 2025. “If A.I. Systems Become Conscious, Should They Have Rights?” The New York Times, April 24, 2025, sec. Technology. https://www.nytimes.com/2025/04/24/technology/ai-welfare-anthropic-claude.html.