Does Writing Have a Future?

David Gunkel (ORCID: 0000-0002-9385-4536)

Abstract: In opposition to much of the current scholarly and popular literature on the subject, this essay argues that what large language models (LLMs) signify is not the end of writing but the terminal limits of a particular conceptualization of writing that has been called logocentrism. Toward this end, the essay will 1) review three fundamental elements of logocentric metaphysics and the long shadow that this way of thinking has cast over the conceptualization and critique of LLMs and generative AI; 2) trace the contours of a deconstruction of this standard operating procedure that interrupts influential and often-unquestioned assumptions about the concept of the author, the meaning of truth, and the meaning of what we mean by the word “meaning;” and 3) formulate the terms and conditions of an alternative way to think and write about LLMs and generative AI that escapes the conceptual grasp of logocentrism and its hegemony.

Keywords: author; Large Language Models; logocentrism; semiology; writing

The titular question of this essay is not mine. It comes from Czech/Brazilian media theorist Vilém Flusser, who once used it as the subtitle to a book he published in 1987—Die Schrift: Hat Schreiben Zukunft? At the time Flusser was writing, the dominance of the written word appeared to be in crisis, as new modes of digital expression seemed to herald the end of writing and the beginning of a post-literate age. I reuse/rewrite Flusser’s question 35+ years later because it again looks as if writing’s future is in question and on the line, this time due to impressive developments in large language models (LLMs) and other forms of generative artificial intelligence (AI).

Consequently, it seems prudent at this juncture to reissue Flusser’s titular question. And we can, following Flusser’s own example, begin with a very direct and clear statement: What large language models signify is not the end of writing but the terminal limits of a particular conceptualization of writing that has been called logocentrism. In other words, writing indeed has a future but only if we reconceptualize how we think about writing and write about thinking. The following responds to this need and challenge. And it does so in three steps or movements: 1) I begin by briefly characterizing three defining features of logocentric metaphysics. 2) I then investigate how large language models disrupt this way of thinking by releasing a deconstruction of its organizing principles. 3) The final section concludes by formulating the terms and conditions of an alternative way to think and write about LLMs that escapes the conceptual grasp of logocentrism and its hegemony.

Logocentrism

Recent criticism of LLMs and other forms of what is now called generative AI has focused on the way that these applications are little more than “Stochastic Parrots” (Bender et al. 2021)—technological devices that generate seemingly logical sequences of words but do not know and cannot understand a word of what they say. Versions of this argument have proliferated since the introduction of publicly accessible LLM applications, like OpenAI’s ChatGPT and Anthropic’s Claude, and have appeared in both the academic and popular literature on the subject.

Consider the following explanation offered by Bogost (2022) in an op-ed for The Atlantic: “ChatGPT lacks the ability to truly understand the complexity of human language and conversation. It is simply trained to generate words based on a given input, but it does not have the ability to truly comprehend the meaning behind those words.” Or a similar statement provided by Emily Bender, a linguist and co-author of the “Stochastic Parrots” essay, in a profile that was published in New York Magazine: “The models are built on statistics. They’re great at mimicry and bad at facts. Why? LLMs have no access to real-world, embodied referents” (Bender, quoted in Weil 2023). These statements combine two lines of argument that Häggström (2023, 4–5) has called the “lack of world model” argument—i.e., “since LLMs do not have direct access to the real world, there is no way for them to have a world model”—and the “lack of symbolic grounding” argument—i.e., “an LLM may seem to speak about chairs using the word ‘chair.’ However, since they have never seen (or felt) a chair, they do not understand what the word actually stands for.”

If the terms of these critical appraisals sound reasonable, they should. It’s just good old-fashioned logocentric thinking. Logocentrism is a term that was originally coined by the German philosopher Ludwig Klages in the early 1900s (Josephson-Storm 2017, 221). It refers to the tradition in Western science and philosophy that regards words and language as a fundamental expression of an external reality. We do not have the time or space for a deep dive into the history and consequences of this influential way of thinking. Instead, I will simply note three characteristics that are already in play and operationalized in the current conversations and debates regarding large language models.

Words and Things

First, there is a causal hierarchy of words and things, and that hierarchy has been accurately described and formulated by Aristotle in De Interpretatione: “Spoken words are the symbols of mental experience and written words are the symbols of spoken words. Just as all men (sic) have not the same writing, so all men have not the same speech sounds, but the mental experiences, which these directly symbolize, are the same for all, as also are those things of which our experiences are the images” (Aristotle 1938, 16a3). Thus, there are things, which, by way of our senses, produce images in the mind. These are then represented by spoken words, which are subsequently represented by written signs. And if you are sitting there thinking to yourself, “Well, yeah, that’s just obvious,” that thought itself is evidence of the extent to which logocentrism is the basic operating system of our usual ways of thinking and talking about language.

Technology

Second, writing is a technology. Unlike speech, which is considered to be a natural and inherent capability of the human species and a direct symbol of thought, writing is secondary, artificial, and technical. As Ong (1995, 81–82) explained in the book Orality and Literacy: “Writing (and especially alphabetic writing) is a technology, calling for the use of tools and other equipment…By contrast with natural, oral speech, writing is completely artificial.” And it is for this reason that writing has already been situated and understood as a form of artificial intelligence. As Plato has Socrates say in the Phaedrus: “And so it is with written words; you might think they spoke as if they had intelligence, but if you question them, wishing to know about their sayings, they always say only one and the same thing” (Plato 1982, 275d).

Expression

Third, writing is useful only to the extent that it is an instrument of expression. As a secondary and derived representation of speech, what matters most with writing is what its progenitor intended to say. Derrida (1976, 11) explains it this way: “If for Aristotle spoken words are the symbols of mental experience and written words are the symbols of spoken words, it is because the voice, producer of the first symbols, has a relationship of essential and immediate proximity with the mind. Producer of the first signifier, it is not just a simple signifier among others. It signifies ‘mental experiences’ which themselves reflect or mirror things by natural resemblance.” If we ask the question “What is it Derrida is trying to say here?” that very question—a mode of inquiry which seeks to discover what an author is saying in and by the written word—is logocentrism par excellence.

LLMs and the Deconstruction of Logocentrism

The fundamental challenge (or the opportunity) with large language models and other generative AI systems, like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, is that these algorithms write before or even without speaking, that is, without having access to (the) logos and without an embodied living voice that knows the things about which it speaks. In response to this seemingly monstrous problem, contemporary critiques, like those offered by Bogost, Bender, and others, proceed from and reassert the truth of logocentric metaphysics with little or no critical hesitation. These algorithms, they argue, might be able to arrange words in seemingly intelligible orders, but they do not know what it is they are saying, nor does their use of language proceed from a lived and embodied engagement with the real world (Birhane and McGann 2024). Consequently, the problem is not that logocentrism has somehow failed to work in the face of these new technologies of writing. It’s quite the opposite. The problem is that logocentrism works all too well, exerting its influence over our thinking about writing and writing about thinking in ways that go largely without notice. What makes LLM technology so important and interesting is that it disrupts this way of thinking, and it does so in at least three ways.

Death of the Author

First, it undermines conventional notions of authority, authorship, and responsibility. When confronted with any written document—whether that be a book, a short essay like this, or an email from a name and address that is not immediately recognized—one of the first questions we ask is “Who wrote it?” This question has typically been resolved by identifying the author, who, it is commonly assumed, speaks to us through the instrumentality of the written text.

But as Michel Foucault explained in the aptly titled essay “What is an Author?” (1969), this concept is not some naturally occurring phenomenon. It was a literary and legal affordance deliberately fabricated at a particular time and place in an effort to determine and decide who is speaking. “The author,” Barthes (1978, 142–143) explained, “is a modern figure, a product of our society in so far as, emerging from the Middle Ages with English empiricism, French rationalism and the personal faith of the Reformation, it discovered the prestige of the individual, of, as it is more nobly put, the ‘human person.’” Prior to this modern and distinctly European innovation—and one that was developed in response to the earlier technological disruption of the printing press (Jarvis 2024)—there were perhaps writers or generators of texts but no “authors” as we currently understand the term.

And like many theorists of his time, Marcel Mauss and Claude Lévi-Strauss among them, Barthes (1978, 142) employs the critical foil provided by twentieth-century anthropological discoveries: “In ethnographic societies the responsibility for a narrative is never assumed by a person but by a mediator, shaman, or relator whose ‘performance’—the mastery of the narrative code—may possibly be admired but never his ‘genius.’” Outside of the experiences and traditions of European modernism, narratives have been successfully developed, performed, and accumulated without necessarily needing what is called the author.

But if the author—as the principal figure of literary authority and accountability—comes into existence in a particular place and at a specific moment in time, there is also a point at which it would conceivably cease to fulfill this role. It is this disappearance and withdrawal of what had been the principal figure of literary authority that is announced and marked by Barthes’s seemingly apocalyptic title, “The Death of the Author.” What this phrase indicates is not the end-of-life of any particular individual or the end of human writing but the terminal limits of the figure of the author as the authorizing agent and guarantee of what is said in and by writing.

Though Barthes and Foucault could not and did not address themselves to large language models, their work on the “author function” anticipates our current situation with algorithmically generated content. What we now have with these generative AI systems are writings without the underlying intentions of some embodied, living voice to animate and answer for what comes to be written. Consequently, LLM-generated texts are literally unauthorized, or (what amounts to the same) a kind of “authority without an author” that, as Bernard Stiegler (2008, 31) has written, “inheres in all writing as technics.” But instead of this being a criticism concerning what these AI-generated writings lack, it shows us the extent to which the authority for writing—any writing, whether human or machine—has always and already been a socially constructed artifice.

And if we prompt ChatGPT to speculate whether this is in fact what Roland Barthes, for example, would have said about large language models, we obtain a response that simultaneously leverages authorial intent while questioning and repudiating it: “Barthes’ famous essay The Death of the Author argues that the author’s intentions and biography should be irrelevant to the interpretation of a text. With LLMs, there is no traditional ‘author’ to attribute meaning to—only a machine recombining patterns based on previous texts. Barthes would likely see this as a radical actualization of his idea, where meaning is entirely in the hands of the ‘reader’ (or user), further erasing the idea of authorial intent” (ChatGPT 2024).
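For readers who wish to reproduce this kind of exchange, here is a minimal sketch of how such a prompt can be issued programmatically. It assumes OpenAI’s official Python client and an API key set in the environment; the model name is illustrative, and the generated response will, of course, differ from the one quoted above.

```python
# Minimal sketch: prompting a chat model programmatically.
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; any chat-capable model works
    messages=[
        {"role": "user",
         "content": "What would Roland Barthes say about large language models?"}
    ],
)

# The "unauthorized" text: words without an author to answer for them.
print(response.choices[0].message.content)
```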

The Means of Meaning

Second, this affects the means of meaning. Once the written text is cut loose from the controlling interests and intentions of an author, the question concerning significance gets turned around. Specifically, the meaning of a piece of writing is not something that can be guaranteed a priori by the authentic character or ethos of the one who is assumed to be speaking through the medium of the text. Instead, meaning transpires in and from the process of reading and interpreting. Or to put this in the terms of classic communication theory, as initially formalized by Shannon and Weaver (1949), the message is not something that is determined by a sender who is assumed to have something to say through the instrumentality of the textual medium. Instead, meaning is an emergent phenomenon that results from the receiver’s engagement with the materiality of the text.

And if it is the case that this significance had been customarily attributed to an author, that attribution is—and has always actually and only been—projected backwards from the reader onto a supposed and oftentimes absent author. Meaning making, in other words, is an effect of reading that is then “retroactively (presup)posited” (Žižek 2008, 209) to become its own presumed cause. This flipping of the script on modern literary theory changes the location of meaning-making from the “original” intentions of the author or writer who has “something to say” to the interpretive activity of the reader who makes meaning in or generates it out of the materiality of the written content.

As is written in the text that bears the name of Barthes (1978, 148): “text is made of multiple writings, drawn from many cultures and entering into mutual relations of dialogue, parody, contestation, but there is one place where this multiplicity is focused and that place is the reader…A text’s unity lies not in its origin but in its destination.” Thus, the meaning of a text—whether it is written by a human author, generated by a large language model, or assembled from the productive interaction and collaboration of both—is situated in the interpreting and meaning-making that is produced in and by reading. Logocentric literary theory has actually had everything backwards and upside down.

This also explains how AI generated content comes to have meaning. The critics are correct when they point out, for instance, that “ChatGPT lacks the ability to truly understand the complexity of human language and conversation” (Bogost 2022). But it would be impetuous for us to conclude from this fact that what the AI generates is non-sense, meaningless, or bullshit (Hicks, Humphries, and Slater 2024). These writings are meaningful, and what they mean is something that happens in the process of our reading, interpretation, and evaluation of the generated content. And this fact is not something that is specific to large language models but is, as Barthes had argued, a defining characteristic of all writing.

Signs of Signification

Finally, the issue is not just where meaning is located and produced. What is at issue is the concept of meaning itself. Beginning with Aristotle and persisting in the current critique of AI technology, language is assumed to consist of signs that refer and defer to the signified. When I write the words “large language model,” for instance, it is assumed that those linguistic tokens stand for and refer to some real thing out there in the world, like the ChatGPT application developed by OpenAI. “The signification ‘sign,’” Derrida (1978, 281) wrote in the essay “Structure, Sign, and Play,” “has always been understood and determined, in its meaning, as sign-of, a signifier referring to a signified, a signifier different from its signified.”

Following this classical semiology, it has been argued that LLMs manipulate words—or what are also called linguistic tokens—but do not “truly comprehend the meaning behind the words” (Bogost 2022) because they “have no access to real-world, embodied referents” (Bender, quoted in Weil 2023). In other words, large language models manipulate signs without knowing that to which these tokens refer (or do not refer, which amounts to the same thing). They generate different sequences of signs based not on actual meaning but according to statistically probable arrangements of different words, tokens, or signifiers. Instead of penetrating the surface of the signifier to ascertain the true meaning of the words, LLMs are simply and superficially playing with signs.
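What “statistically probable arrangements” of tokens means can be made concrete with a toy example. The following sketch implements a hypothetical bigram model in Python (a drastic simplification of, not an equivalent to, how production LLMs work) that generates text purely by sampling from observed token-to-token frequencies, never consulting anything outside the chain of signifiers itself.

```python
import random
from collections import defaultdict

# Toy training corpus: the model only ever "knows" these tokens.
corpus = "the model writes the words and the words refer to other words".split()

# Count bigram transitions: which token has been observed to follow which.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Emit a statistically probable arrangement of tokens.

    No world model, no referents: the only resource is the distribution
    of tokens over other tokens in the training data.
    """
    output = [start]
    for _ in range(length):
        candidates = transitions.get(output[-1])
        if not candidates:  # dead end: no observed continuation
            break
        output.append(random.choice(candidates))  # frequency-weighted pick
    return " ".join(output)

print(generate("the"))  # e.g. "the words refer to other words"
```

Every word the sketch outputs is selected only because of its statistical relation to other words in the training data; no referent ever enters the loop.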

But this seemingly common-sense view of how language works is not necessarily the natural order of things. And it has been directly challenged by twentieth-century innovations in structural linguistics, which see language and meaning-making as a matter of difference situated within the materiality of language itself. “In language,” as Saussure (1959) explains, “there are only differences. Even more important: a difference generally implies positive terms between which the difference is set up; but in language there are only differences without positive terms.” Signs, therefore, do not (at least not principally and/or exclusively) come to have meaning by direct reference to things that exist outside the system of signs. Signs refer to and differ/defer from other signs in the movement of what Derrida (1982) calls différance.

The dictionary provides what is perhaps one of the best illustrations of this basic semiotic principle: words come to have meaning through their differential relationship to other words. In pursuing the meaning of a word in the dictionary, one remains within the system of linguistic signifiers and never gets outside language to the referent or what is typically called the “transcendental signified.” This is the meaning (or at least one of the meanings) of that famous (or notorious) statement that is so often associated with Derrida (1976, 158; 1993, 148): “There is nothing outside the text.” And this is especially true for large language models, as there is, quite literally, nothing outside the texts on which they have been trained and that they in turn generate from the input of a user prompt.
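The dictionary illustration can itself be rendered as a short program. The sketch below, using a hypothetical, hand-made mini-dictionary, looks up a word, then looks up the words in its definition, and so on; every lookup terminates in further signifiers, never in a referent outside the system of signs.

```python
# A hypothetical toy dictionary: every definition is itself made of words.
dictionary = {
    "sign": ["mark", "meaning"],
    "mark": ["sign", "trace"],
    "meaning": ["sense", "sign"],
    "trace": ["mark"],
    "sense": ["meaning"],
}

def chase(word, depth=3, seen=None):
    """Follow definitions recursively: each word leads only to other words."""
    seen = set() if seen is None else seen
    if depth == 0 or word in seen:
        return
    seen.add(word)
    definition = dictionary.get(word, [])
    print(f"{word!r} is defined by {definition}")
    for term in definition:
        chase(term, depth - 1, seen)

chase("sign")  # the lookup never exits the system of signifiers
```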

We could therefore say—by way of remixing a statement appropriated from Wittgenstein (1995)—that for these generative AI systems: “The limits of their language model mean the limits of their world.” Consequently, what has been offered as a criticism of LLM technology—namely, that these algorithms only circulate different signs without access to real-world, embodied referents—might not be the indictment critics think it is. Large language models are structuralist machines that deconstruct the defining conceptual opposition of classical semiotics.

Conclusions and Projections

Large language models are a significant challenge because what we now have with these technologies are things that write without speaking from an embodied, living voice; a proliferation of texts that do not have nor are beholden to the authoritative voice of an author; and statements the truth of which cannot be anchored in and assured by a prior intention to say something. From one perspective—a perspective that remains bound to the epoch of logocentrism—this can only be seen as a threat and crisis. What is on the line and in the crosshairs is our very understanding of language and the meaning of literature. The future of writing and human communication seems to be in jeopardy.

But from another perspective—one that follows the deconstruction of this tradition developed and documented in twentieth-century literary theory—this is an opportunity to think beyond and in excess of the limitations of Western metaphysics and its hegemony. Understood in this way, large language models and generative AI do not threaten writing, the figure of the author, or the concept of truth. They only threaten a particular and limited conceptualization—one that is itself not some naturally occurring phenomenon but the product of a particular culture and philosophical tradition. Consequently, instead of being misunderstood as signs of the apocalypse or the end of writing, large language models and generative AI reveal the limits of the logocentric privilege, participate in a deconstruction of its organizing principles, and open the opportunity to think and write differently.

In the end, and to return to the question with which we began, the future of writing depends on how we understand and theorize what is meant by the word writing. If we understand and take it literally, that is, as the process of arranging words or linguistic tokens in linear sequence on some tangible medium, then writing will indeed continue well into the future. But who or what does that writing, and how that affects the meaning of any particular written content, is a question that actually always has been in flux and dynamic. Large language models simply render all of this legible.

References

Aristotle. 1938. Aristotle I: Categories. On Interpretation. Prior Analytics. Cambridge: Harvard University Press.
Barthes, Roland. 1978. “The Death of the Author.” In Image, Music, Text. New York: Hill & Wang.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 610–623. New York: ACM. https://doi.org/10.1145/3442188.3445922.
Birhane, Abeba, and Marek McGann. 2024. “Large Models of What? Mistaking Engineering Achievements for Human Linguistic Agency.” Language Sciences 106:101672. https://doi.org/10.1016/j.langsci.2024.101672.
Bogost, Ian. 2022. “ChatGPT Is Dumber Than You Think.” The Atlantic, December 7, 2022. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/.
ChatGPT. 2024. “Response to the Prompt: ‘What Would Roland Barthes Say about Large Language Models?’”
Derrida, Jacques. 1976. Of Grammatology. Baltimore, MD: The Johns Hopkins University Press.
———. 1978. Writing and Difference. Chicago: University of Chicago Press.
———. 1982. Margins of Philosophy. Chicago: University of Chicago Press.
———. 1993. Limited Inc. Evanston, IL: Northwestern University Press.
Häggström, Olle. 2023. “Are Large Language Models Intelligent? Are Humans?” Computer Science and Mathematics Forum 8 (68): 1–6. https://doi.org/10.3390/cmsf2023008068.
Hicks, Michael Townsen, James Humphries, and Joe Slater. 2024. “ChatGPT Is Bullshit.” Ethics and Information Technology. https://doi.org/10.1007/s10676-024-09775-5.
Jarvis, Jeff. 2024. The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet. New York: Bloomsbury Academic.
Josephson-Storm, Jason A. 2017. The Myth of Disenchantment: Magic, Modernity, and the Birth of the Human Sciences. Chicago: University of Chicago Press.
Ong, Walter J. 1995. Orality and Literacy: The Technologizing of the Word. New York: Routledge.
Plato. 1982. Phaedrus. Cambridge: Harvard University Press.
Saussure, Ferdinand de. 1959. Course in General Linguistics. London: Peter Owen.
Shannon, Claude Elwood, and Warren Weaver. 1949. The Mathematical Theory of Communication. Urbana: University of Illinois Press.
Stiegler, Bernard. 2008. Technics and Time, 2: Disorientation. Stanford: Stanford University Press.
Weil, Elizabeth. 2023. “You Are Not a Parrot.” New York Magazine, March 1, 2023. https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html.
Wittgenstein, Ludwig. 1995. Tractatus Logico-Philosophicus. New York: Routledge.
Žižek, Slavoj. 2008. For They Know Not What They Do: Enjoyment as a Political Factor. London: Verso.

Acknowledgements

This paper is based on a presentation that was first delivered during the “LLMs and the Patterns of Human Language Use” workshop, which was organized by Christoph Durt, Sybille Krämer, and Anna Strasser and held 29–30 August 2024 at the Weizenbaum Institute, Berlin. A more detailed and elaborate version of the argument can be found in Mark Coeckelbergh and David J. Gunkel, Communicative AI: A Critical Introduction to Large Language Models (Polity 2025).