The emergence of large language models (LLMs), parameterized neural networks trained on massive textual corpora, represents a fundamental transformation of linguistic practices. Far from being mere “neutral tools,” they actively reshape our language use, reorganize discursive habits, and alter our orientation toward both language and world. However, the prevailing discourse of “artificial intelligence” frames LLMs as autonomous cognitive agents detached from human activity. This framing obscures the extent to which such systems are embedded in human practices and social relations. It also fuels both techno-utopian fantasies and dystopian anxieties (see Durt 2023, 105) that isolate LLMs from the conditions of their emergence and use.
Elena Esposito’s theory of artificial communication offers a critical alternative. By treating LLMs not as intelligent agents but as non-understanding participants in communication, she emphasizes how they generate statistically patterned texts without reference to meaning (Esposito 2022, 5–9). This reframing situates LLMs within human communicative life as artifacts that mediate, reflect, and reshape it. This paper argues that Esposito’s perspective helps theorize LLMs as instantiations of what Karl Marx called the “humanization of nature” (Marx 1975, 302), the process by which humans realize their powers, needs, and life through conscious activity (Araujo 2017, 63, 91–102). Viewed this way, LLMs function as objectifications of human capacities and embodiments of social relations (Marx 1975, 275–79, 301–2; Marx 1996, 189, 753; Marx and Engels 1976, 43, 231; Araujo 2017, 53–57, 105–71), becoming both our “inorganic body” and sites of alienation under capitalism, and making artificial communication new terrain for transformative praxis (Marx 1975, 276, 278–79).
Drawing on Niklas Luhmann’s systems theory, Esposito conceptualizes communication not as the transmission of shared intent but as the recognition of meaningful output by the receiver (Esposito 2022, 7–8). In this frame, LLMs function as competent communicators by introducing “virtual contingency,” generating plausible, context-sensitive responses that users find informative or stylistically appropriate. This is achieved through pattern recognition, second-order observation, and performative prediction grounded in the statistical recombination of traces of language use. The result is what Esposito calls “controlled lack of control” (Esposito 2022, 10). The system produces spontaneous outputs while operating within parameters that ensure contextual appropriateness and stylistic coherence. This balance between variance and constraint makes artificial communication useful. It introduces enough unpredictability to be interesting while preserving enough structure to keep outputs relevant. Users experience this as receiving responses that could not have been precisely predicted, yet still meet expectations. This is the very design that allows LLMs to function as partners in communication.
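This balance can be made concrete with a small computational sketch. The following toy sampler illustrates temperature-scaled sampling in general terms (the words and scores are hypothetical, and no claim is made that any particular system works exactly this way): a single parameter regulates how much spontaneity the procedure permits within the constraints of the model’s learned preferences.

```python
# A minimal sketch of "controlled lack of control" as it appears in common
# LLM decoding schemes (an illustration, not Esposito's own formalism).
import math
import random

def sample_next_token(scores: dict[str, float], temperature: float) -> str:
    """Sample one continuation from score-weighted candidates."""
    # Softmax with temperature: low values make the choice near-deterministic
    # (constraint); high values flatten the distribution (variance, surprise).
    tokens = list(scores)
    weights = [math.exp(scores[t] / temperature) for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical scores for continuations of "The meeting is ..."
candidates = {"scheduled": 3.2, "postponed": 2.1, "purple": -4.0}
print(sample_next_token(candidates, temperature=0.7))  # usually "scheduled"
print(sample_next_token(candidates, temperature=2.0))  # surprises more often
```

At low temperature the dominant continuation almost always wins; at higher temperature rarer candidates surface. This regulated unpredictability is one plausible technical correlate of the balance between variance and constraint described above.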
This functionality reshapes linguistic practices. Users increasingly work by prompting, selecting, arranging, or revising generated text. LLMs assist in drafting and editing, offering stylistic alternatives and relieving cognitive load, especially under conditions of urgency or limited attention. Writing shifts toward curation rather than deliberation. Instead of composing from scratch, users retrieve and adjust plausible formulations. Communication becomes a feedback loop. Models predict users’ expectations based on past inputs, while users learn how to phrase prompts to elicit desired outputs. One manifestation of this influence is the rise of listicles, which reflect algorithmic preferences. Lists avoid abstraction and simplify textual structure, making them ideal for machine processing (Esposito 2022, 19, 28–9). This contributes to what David Weinberger (cited in Esposito 2022, 27) calls a “new order of order,” in which knowledge organization prioritizes machine legibility over thematic or conceptual coherence. Meaning is deferred to user interpretation after algorithmic generation. The resulting communication emphasizes clarity and retrievability at the cost of relational and interpretive depth. As with digital photography, where moments are recorded to be shared more than experienced (Esposito 2022, 80–3), language is increasingly produced for delivery rather than reflection.
The ability of LLMs to generate virtual contingency relies not on human-like memory but on its absence. LLMs do not remember narratively or selectively. They encode prior data as weighted associations, operating within an “eternal present” that reinforces dominant patterns, lacks historical depth, and resists novelty (Esposito 2022, 71). This absence of memory links directly to their predictive logic. Since they do not remember or reflect, they instead project future continuity from accumulated past patterns. Esposito’s concept of “performative prediction” or “manufacturing the predicted future” captures how LLMs shape linguistic practice (Esposito 2022, 97–9). By suggesting phrasing or structure, they guide future usage and consolidate communicative tendencies through repetition.
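A toy bigram model, offered only as a drastically simplified illustration rather than a description of actual LLM training, makes the point about weighted associations concrete: all prior usage collapses into a single table of frequencies, with no record of when, by whom, or why anything was said, and prediction simply projects the dominant pattern forward.

```python
# A toy bigram model: prior language use is flattened into co-occurrence
# counts, and "prediction" projects the most frequent continuation.
from collections import Counter, defaultdict

# A record of prior usage, stripped of dates, authors, and occasions.
corpus = "the report is due the report is late the report is due".split()

associations = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    associations[prev][nxt] += 1  # all history becomes one weight table

# The "eternal present": the frequent continuation ("due") is reinforced
# over the rarer one ("late"), with no narrative memory of either.
print(associations["is"].most_common())  # [('due', 2), ('late', 1)]
```

The model’s “memory” here is nothing but accumulated weights; the rarer continuation is not forgotten narratively, it is simply outweighed.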
LLMs expand access to language production, especially for non-experts or marginalized speakers, enabling engagement with genres or formats that might otherwise seem inaccessible. Yet this democratization is framed by “anonymous personalization” (Esposito 2022, 47–9). LLMs personalize responses through behavior-based patterns, not by recognizing individual subjectivity. Users adapt to the system’s logic, often handing textual production to tools that do not perceive nuance. Esposito calls this process “mass personalization and general individualization—specific and local, for everybody, everywhere” (Esposito 2022, 54). As a result, the boundary between self-expression and algorithmic projection becomes difficult to distinguish.
While LLMs produce fluent outputs, the user still takes on the role of assigning meaning, determining relevance, and integrating the result within their own communicative horizon (see Esposito 2022, 28). As algorithms surface patterns invisible to human readers, they generate “provocative” outputs that invite interpretation. As Esposito notes, the “zoomed-out perspective on texts” becomes a condition for knowledge (Esposito 2022, 38–9, 42). The opacity and unpredictability of these systems encourage reflection, creating a kind of discovery through the hermeneutic act. Yet because LLMs prioritize coherence, they struggle with strategic ambiguity. Their outputs tend to resolve meaning rather than preserve openness, lacking the intentional vagueness that characterizes human expression (Esposito 2022, 110). In response, users must edit, reframe, or contest outputs to reintroduce ambiguity and preserve expressive depth. These acts of interpretative resistance affirm user agency and create space for counter-practices that assert human authorship. The reemergence of handwritten letters and “human-only” writing reflects this pushback against the flattening effects of artificial communication (Esposito 2022, 84–6).
These dynamics converge on Esposito’s challenge of “control over control,” which names the difficulty of managing systems that reshape communication without transparency or semantic grounding. LLMs operate through procedures that bypass meaning while intervening in domains where meaning, ambiguity, and context are essential (Esposito 2022, xii–xiv, 107–11). In these systems, contingency is statistically modeled, ambiguity minimized, and expressive nuance displaced. This raises the question of how users can intervene in or redirect systems they do not fully understand. For Esposito, algorithmic issues like data bias or surveillance are not technical failures. They are symptoms of algorithmic success, where systems replicate the social patterns found in their training data (Esposito 2022, 109). The problem is not only how to improve technical performance. It is how to socially redirect artificial communication to preserve ambiguity, recover interpretive depth, and enable shared authorship in environments that personalize impersonally and predict without understanding.
That technologies can transform human practices without possessing intelligence is a well-established insight. Phenomenological philosophy has long emphasized that tools shape how we perceive and engage with the world. A walking stick, for example, is not merely an external aid but becomes integrated into perception and action, realizing bodily capacities and reorganizing the field of possible movements (Merleau-Ponty 2010, 144–54). Working in this tradition, Christoph Durt develops a phenomenology of digitization that highlights its role in shaping orientation, our basic way of situating ourselves and relating to our environment. Digitization, he argues, is not merely the spread of digital devices but a metaphysical reorientation (Durt 2023, 112). Orientation is shaped not only by explicitly navigational technologies like maps or GPS but also by those that indirectly alter our engagement with the world (Durt 2023, 101–2). From this perspective, the transformations in linguistic practice discussed earlier are not superficial shifts in style or medium. They signify a more fundamental change in the way subjects inhabit their communicative environments and produce meaning within a digitalized horizon (Durt 2023, 109–12).
LLMs reshape orientation not by thinking for us, but by mediating and reorganizing the conditions under which language and thought occur. Don Ihde’s postphenomenology and Lambros Malafouris’s material engagement theory (Ihde and Malafouris 2019) complement this insight. They emphasize that humans, as Homo Faber, are shaped by their ongoing transaction with the material world. This concept of transaction differs from interaction because it treats human beings and their environment as inseparable components of a single process (Ihde and Malafouris 2019, 199–200). As they put it, “We make the things that in turn make us” (Ihde and Malafouris 2019, 196). Technologies do not merely materialize human activity. They participate directly in the formation of cognition and subjectivity, shaping how humans become what they are through practical engagement with the material world (Ihde and Malafouris 2019, 200). LLMs participate in this process, co-constituting linguistic subjectivity and reshaping communicative agency.
LLMs reorient linguistic practices by privileging immediacy over temporality, surface coherence over layered intentionality, and legibility over ambiguity. The rise of listicles, as discussed earlier, is emblematic of this reorientation. Tasks like editing or rewriting now occur through collaborations with artificial systems that produce text statistically, turning deliberation into output-oriented generation. A user–LLM transaction begins with a prompt, itself shaped by the user’s already digitalized horizon (Durt 2023, 109–12). This engagement unfolds through a feedback cycle in which prompting becomes an expressive act and the model participates in the communicative process. This is what Ihde and Malafouris describe as an embodiment relation, where the technology becomes a transparent medium of activity rather than a discrete object of attention (Ihde and Malafouris 2019, 205).
Meanwhile, the virtual contingency of LLMs is grounded in their training on vast text corpora. This process introduces a hidden triadic relationship involving the user, the model, and the anonymous authors whose traces the model recombines. As an orientation scaffold, the LLM mediates how users engage with content that has been stripped of its original context, reassembled statistically, and delivered through opaque computation. This dynamic marks a shift in mediated communication. No single identifiable speaker is addressing a public audience as one finds in traditional mass communication. Instead, the LLM stages a pseudo-dialogue between the generalized user and the sedimented traces of anonymous others. Esposito calls this “social information”—a machine-readable representation of social intelligence embedded in language forms (Esposito 2022, 3, 65). No speaker is co-present with the listener because the model’s algorithmic memory exists in an “eternal present” that makes it capable of producing plausible responses based on anonymous linguistic material. Thus, the scaffold allows the user’s hermeneutic act to access textual meaning from social information while constructing it within the context of their own communicative horizon. The user adapts to the response and makes a new prompt.
The prompt, therefore, becomes a new locus of intention, one that initiates a transactional process wherein expressive agency unfolds through anticipatory adjustment to the model’s generative tendencies. Prompting emerges as a prosthetic expressive act shaped by the user’s awareness of the model’s tendency to generate plausible continuations from past linguistic data. This shift from composition to curation transforms authorship. Expression now takes place through collaboration with an algorithm that predicts possible next steps, which alters the structure of authorship by embedding it in a system that both enables and limits what can be said. Simultaneously, the model’s “eternal present” restructures memory. It does not remember to recall or narrate the past, but instead uses the past to generate plausible futures. In such a system, language is generated for circulation, not reflection, displacing narrative coherence with associative retrieval. Yet paradoxically, the same algorithmic memory can also surface patterns otherwise inaccessible to unaided cognition. The absence of narrative filtering allows the model to produce unexpected combinations that may invite new understanding. These provocations introduce epistemic tension between the system’s flattening tendencies and its capacity to shape discovery. The user’s role becomes interpretative and reconstructive, sifting through flattened outputs for emergent meaning.
Thus, the scaffolded pseudo-dialogue discussed earlier appears as an artificial dialogue structured by the recursive loop of prompt, prediction, and interpretation. However, the user encounters a statistically recombined reflection of themselves based on prior usage and statistical probability. There is, therefore, co-constituted cognitive mediation, in which the user’s expressive activity is shaped by and integrated with the model’s probabilistic logic. Intentionality becomes distributed (Ihde and Malafouris 2019, 205–8) as the user and model jointly shape expression through their combined affordances. This dynamic reproduces what, after Hans-Georg Moeller and Paul J. D’Ambrosio (Moeller and D’Ambrosio 2023), may be called the profilic self, a subject constituted through second-order observation, formed by engaging with algorithmic personalization derived from prior usage patterns and generalized social data. At this point, the boundary between genuine self-expression and system-generated projection becomes indistinct. Ultimately, what presents itself as dialogue reveals itself as an extended monologue. The triadic relationship among user, LLM, and anonymous others resolves into a self-referential cycle in which the user encounters a mediated version of their own language profile. Artificial dialogue becomes a medium of mediated self-reflection, where the conditions of language production reshape authorship, memory, and identity.
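The recursive loop described here can be rendered schematically. In the sketch below, both functions are stand-ins with hypothetical names (no real model is invoked); the point is only that each new prompt is conditioned on the user’s interpretation of the previous output, so that expression and generation co-constitute one another.

```python
# A bare skeleton of the prompt-prediction-interpretation loop; both
# functions are hypothetical stand-ins, not calls to any real system.
def generate(prompt: str) -> str:
    # Stand-in for an LLM call: in practice, a statistical continuation
    # recombined from the traces of anonymous others.
    return f"[plausible continuation of: {prompt!r}]"

def interpret_and_reprompt(output: str) -> str:
    # Stand-in for the user's hermeneutic act: assign meaning, select,
    # and adjust phrasing in anticipation of the model's tendencies.
    return f"refine: {output}"

prompt = "draft an opening line"
for turn in range(3):
    output = generate(prompt)                # model projects from past patterns
    prompt = interpret_and_reprompt(output)  # user reads, selects, re-frames
    print(turn, prompt)
```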
Finally, LLMs constitute novel linguistic environments where relevance is pre-filtered by prediction, and where predictions invite new interpretations. Users navigate an extended field that is syntactically responsive but procedurally opaque. Models resist strategic ambiguity, compelling users to re-inscribe it. This new space, which straddles human nuance and machine clarity, shapes the profilic self and demands that users not only interpret but also actively restore the ambiguity that the system suppresses. The transaction becomes a site of embodied hermeneutic engagement, where algorithmic fluency and human interpretive agency collide.
While Ihde and Malafouris expand the concept of Homo Faber beyond tool-making to include how tools shape us, their insight implies something further. If we make tools and tools make us, then we are engaged in making ourselves (Ihde and Malafouris 2019, 196). It is we who perform the creative material engagement that recursively forms our being. This aligns with Marxist phenomenology (Araujo 2017a), which is also an enactivist philosophy. As Paul Loader argues, it challenges traditional distinctions and emphasizes the unity of subject and object in praxis (Loader 2015). In Marxist terms, this self-forming activity is called labor, production, praxis, or human life-activity (Marx 1975; Marx 1986; Marx 1996; Marx and Engels 1976). This is not a narrow economistic usage but refers to purposeful, sensuous, socially mediated practice that includes seeing, eating, interpreting, or listening to music. It is the capacity to make our own activity the object of reflection. This transformation of adaptive capacities into deliberate praxis is how we engage in what Marx calls “life-engendering life” (Marx 1975, 276). This involves our power of objectification, or the realization of human faculties in shared sensuous forms. Each faculty finds expression in specific objects—the eye in visual forms, the ear in sound, and so on. As we act on the world, we leave traces of ourselves and encounter these traces as confirmations of existence. The more the world reflects these capacities, the more it becomes a human world, bearing our imprint. One becomes more oneself by seeing one’s powers realized in the world, across domains of sense and activity (Marx 1975, 277, 301–2; Araujo 2017b, 53–57).
However, as Ihde and Malafouris clarify, this is not a claim of human exceptionalism. Rather, it describes how our being is constituted through relationships and shaped by our embeddedness in the world (Ihde and Malafouris 2019, 197, 209). For Marx, human life is inseparable from nature, which he considers humanity’s inorganic body, on which we depend physically and spiritually. To live on nature is to engage in a continuous metabolism with it: transforming, incorporating, and being shaped in turn. In this process, nature comes to relate to itself through human activity (Marx 1975, 275–76). What distinguishes this process is that it organizes nature in purposive, historically situated ways. Humanized nature is the world transformed by human capacities and is also the condition for their development (Marx 1975, 278, 301–2). Tools and devices exemplify this metabolic process. They express a characteristically human way of relating to nature and reveal production as the site where this dialectic unfolds (Marx 1996, 189; Marx 1987, 330). Language—spoken, written, or artificially generated—also offers a unique example. It is the sensuous actuality of thought arising from social activity and practical needs. Language is a direct manifestation of real life-activity (Marx and Engels 1976, 43–4, 304, 446–7).
This leads back to Esposito’s critical insight. She suggests that “maybe our society as a whole becomes ‘smarter’ not because it artificially reproduces intelligence, but because it creates a new form of communication using data in a different way” (Esposito 2022, 5). LLMs exemplify this shift. They embody a historically specific mode of life-activity and its corresponding humanized world. This world is the digitalized world, and in it, communicative capacities are mediated by newly developed technical structures. Digitalization is, in this sense, the objectification of human potential within a reorganized material environment. Rather than being built solely for us, this world is built by us as we rework natural and social materials into tools, infrastructures, and life patterns.
LLMs reorganize aspects of human symbolic capacity by modeling the form and structure of language in ways optimized for computational processing. They extract and recombine syntactic patterns, semantic associations, and statistical regularities, enabling functions such as pattern detection, probabilistic inference, and combinatorial generation. Through techniques like tokenization, embedding, and probabilistic sequence modeling, linguistic features are formalized into discrete, operable elements. This continues a long trajectory of language standardization visible in the development of grammars, dictionaries, and typographic conventions, and extended today through digital protocols (see Esposito 2022, 19–29). These operations support systems that are designed for efficiency, precision, and interoperability, in line with historical pressures toward compressed workflows, high informational throughput, and interdependent communicative tasks.
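A minimal sketch may make this formalization concrete. The tokenizer and embedding table below are assumed, toy versions of the techniques named above (production systems use subword tokenizers and learned, high-dimensional embeddings): language is segmented into discrete units, mapped to integer identifiers, and re-represented as vectors amenable to statistical operation.

```python
# A schematic of the formalization step: text -> tokens -> ids -> vectors.
# Everything here is a toy stand-in, not any particular model's pipeline.
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def tokenize(text: str) -> list[str]:
    # Real tokenizers operate on subwords (e.g. byte-pair encoding);
    # whitespace splitting stands in for the general idea.
    return text.lower().split()

vocab = {"language": 0, "becomes": 1, "operable": 2, "data": 3}
dim = 4  # a tiny embedding dimension, for illustration only
embeddings = [[random.gauss(0, 1) for _ in range(dim)] for _ in vocab]

tokens = tokenize("Language becomes operable data")
ids = [vocab[t] for t in tokens]        # discrete, operable elements
vectors = [embeddings[i] for i in ids]  # points in a shared vector space
print(ids)         # [0, 1, 2, 3]
print(vectors[0])  # a 4-dimensional stand-in representation of "language"
```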
LLMs enable new forms of communication design by generating outputs that are syntactically coherent and stylistically adaptable across varied digital interfaces. Their outputs support tasks where language must meet constraints of sequencing, clarity, and audience-specific framing, within communicative environments already shaped by long-standing norms of intelligibility. LLMs mediate expression in conditions that demand the rapid management of tone, register, and voice, especially in dialogic or multi-party settings. As a result, they enable reflexive control over language use, allowing users to iteratively revise, adapt, or extend their communicative intent. These affordances reshape how symbolic action is performed in technically mediated environments, combining algorithmic structuring with interpretive responsiveness.
LLMs participate in the mediation of memory by organizing prior linguistic outputs into statistical potentials for future text generation. They reconfigure past discourse into computationally accessible traces that enable the recombination of stylistic elements, phrasings, and discursive conventions. This contributes to a longer historical movement in which memory is recorded and reorganized—from oral transmission, to inscription, to databases, and now to language models. On this basis, LLMs also support reflexive identity practices, providing tools for modulating tone, experimenting with stylistic voice, and constructing communicative self-presentation across digital platforms. Templates, suggestions, and contextual reframings enable users to engage language as a site of ongoing self-construction. These functions reflect a reciprocal conditioning between human communicative faculties and computational infrastructures.
Thus, LLMs objectify particular human capacities attuned to the digital present, such as the ability to coordinate meaning across distance, manage complexity through formal distillation, recall through traces, and signify across interpretive frames. They express how nature, in digital form, becomes a mirror for evolving human powers that are historically formed, technically mediated, and socially exercised. As artificial communicators, they emerge as that component of humanized nature that begins to speak back in the logic we have inscribed upon it.
If artificial communicators such as LLMs are objectifications of human life-activity, then they are also embodiments of historically situated social relations (Marx 1986, 94; Marx 1996, 84). These relations do not merely form a backdrop for human activity. They are enacted through activity, embedded in its structure, and reproduced in its repetition. If the world is our inorganic body, then it is also the medium through which we encounter one another. Communication is one such encounter, in which language serves both as an expression of the world and as a medium through which we share it (Marx and Engels 1976, 43–4, 447; Engels 1987, 455). From this perspective, LLMs bear the imprint of the social conditions that shaped their emergence. However, when this humanized world confronts us as something alien, indifferent, or dominating, it ceases to be a transparent embodiment of our activity and becomes the manifestation of estrangement, an activity performed under the control of another (Marx 1975, 278–9).
Under late capitalist social relations, labor, time, and human potential are shaped by the logics of financialization, globalized production, and the commodification of social life. Capital accumulation no longer resides only in the traditional domain of production but extends value extraction to everyday culture (Martin 2009; Tsing 2009; Jessop 2012; Nielsen 2014). Neoliberal ideology legitimizes these relations of exploitation, dispossession, and domination by recasting dependence as autonomy, compulsion as choice, and insecurity as opportunity. It teaches individuals to see themselves as self-investing bundles of assets, responsible for optimizing productivity and marketability (Holborow 2018, 5–6). Alienation becomes internalized. The fragmentation of labor and the promotion of hyper-individualized performance are framed as conditions for self-realization. Digitalization embeds these relations in the technical forms of platforms, infrastructures, and feedback systems.
In the context of language use, digitalization extracts observable patterns from acts of communication. These include tone, lexical choices, pacing, and interaction rhythms. Such elements are converted into standardized data, linked to user profiles, and fed into the circuits of capital. LLMs operate within this ecosystem. They extract value from communicative activity while presenting themselves as tools of convenience or personalization. The design of the user interface, the logic of prompt and response, and the tracking of user engagement all serve the broader imperative of capital accumulation.
The production and operation of LLMs depend on the alienated labor of countless individuals (Marx 1975, 278–79; Holborow 2018, 2–3). This labor includes unpaid contributors, content moderators, annotators, engineers, hardware laborers, and data pipeline managers. The infrastructure of these systems is stratified along global lines of race, gender, and geography. Some contributions are explicitly documented, like labeled datasets or annotation protocols. Others are embedded in recursive content circulation, as data is scraped, reused, and recomposed in training sets. LLMs repackage the output of writers, editors, and developers into sequences that are decontextualized, anonymized, and reassembled for new outputs. Corporations transform these traces into marketable content, converting living language into a quantifiable, monetizable resource.
Workers across domains interact with systems that constrain their expression through platform design, client demands, and algorithmic thresholds. Whether writing for customer support, marketing, or academic instruction, users generate texts that must align with platform intelligibility, brand image, and audience metrics. These constraints operate at multiple levels. Interface layouts, feedback systems, and editorial filters shape what can be said and how it must be delivered. At the cognitive level, users adapt to system tendencies by adjusting prompts, revising text, or mimicking stylistic cues. Though probabilistic, LLM output also relies on the user’s judgment, preference, and strategic sense of context. Expressive agency becomes a negotiation within a narrowed range of options. While the system may reduce effort, it also channels expression toward dominant templates and expectations. Texts produced in this way are not authored in the traditional sense. They are shaped by conditions organized to extract communicative value.
This change affects not just the message but the speaker. Users are not only generating output. They are involved in a process of subject formation mediated by metrics, templates, and platforms. Identity is filtered through branding norms and quality indicators. Language use becomes a site where ideological conditioning occurs, shaping how individuals understand and present themselves. The ideal speaker is one responsive to analytics, optimized for engagement, and legible within institutional norms. The figure of the entrepreneurial subject who curates voice, monitors performance, and invests in visibility becomes the model through which language is performed and assessed. This engenders a hyper-individualized identity managed by systemic forces.
Yet to recognize the alienation of communicative power is not to claim that meaning has vanished. Language continues to express life, but it does so under alienated conditions. Each act of communication affirms our creative faculties even as it reflects their estrangement. With LLM-assisted expression, the problem is not the loss of meaning. It is the way meaning is redirected through systems that reflect and reproduce the separation of individuals from their full realization. Artificial communication must be understood within broader patterns of alienation. As Esposito notes, the opacity of algorithmic systems represents a “controlled lack of control” that demands a reevaluation of the social and communicative dynamics in which they function (Esposito 2022, 105). The alienating effects of such systems stem less from their technical complexity than from their integration into exploitative economic logics.
Agency does not disappear under these conditions. Rather, it is reconditioned. The task is not to restore a pure or autonomous form of authorship, but to grasp how language practices remain sites of negotiation and resistance. The profilic self is both the product and agent of this change. It navigates systems shaped by rating schemes, engagement incentives, and visibility metrics. Yet users still make choices. They edit, rephrase, ignore system suggestions, or deliberately deviate from prescribed norms. Through such acts, they reintroduce complexity and preserve human responsiveness within technically mediated contexts. Expression continues to be situated, adaptive, and relational. Even within systems shaped by alienation, the capacity for transformation remains.
LLMs are products of historically specific social relations. They embody the contradictions of capitalist society, where the objectification of human capacities into digital forms becomes a site of both affirmation and alienation. In Marxist phenomenology, objectification is not inherently alienating; it becomes so when human capacities are abstracted and appropriated through private ownership, stratification, surplus extraction, and instrumental control. Yet alienation remains contingent, not inevitable (Araujo 2017b, 342). Because LLMs emerge from these contradictions, they also gesture beyond them. They operate between democratization and commodification, expressive potential and epistemic narrowing, openness and standardization. The irreplaceable interpretive role of users and the emergence of new expressive possibilities suggest that artificial communication can be reoriented through transformative praxis.
The task is not to reject artificial communication but to transform the social relations and institutional structures that shape it. Reclaiming objectification for non-alienated purposes requires shifting from alienating systems to democratically governed, epistemically transparent, and cooperatively managed digital environments. This entails challenging the neoliberal digital order, which prioritizes speed, efficiency, and behavioral prediction over dialogue, historical memory, and interpretive plurality. Such transformation bridges the divide between design and use, a divide capitalism sustains through hierarchical control and restricted access. Artificial communicators could instead become shared instruments of coordination, enabling mutuality and situated authorship through platforms co-designed and co-governed by developers, users, maintainers, and cultural workers. Communication technologies would then support shared meaning rather than surveillance or market-based conformity.
LLMs can automate routine discursive tasks, giving users more time for collaborative, reflective, or creative activities. Realizing this potential requires social planning that enhances autonomy rather than displacing labor into more precarious or unpaid forms. The neoliberal collapse of work-life boundaries can be repurposed. Integrated into non-market relations, LLMs could support ways of life grounded in social care and shared purpose. The home, currently a site of platform capitalism, surveillance, and unpaid labor, could be reconceived. Freed from accumulation imperatives, it may become a space of self-directed, cooperative production, where artificial tools support freely associated life. Under democratic control, the collapse of work-home boundaries could foster meaningful life integration.
This reorientation would also affirm embodied, dialogical, and plural language practices. Modes like handwriting, collective authorship, and poetic ambiguity could counter optimization-driven norms. LLMs would become part of a plural communicative ecology. Their outputs would be shaped by users embedded in communities that sustain linguistic diversity, social memory, and critical literacy. Such outcomes require participatory design in both training data and interfaces. When outputs are interpreted within shared life-worlds, language use becomes more autonomous and socially grounded. For instance, predictive modeling could support collective planning or resource distribution, not just advertising.
At the level of subject formation, artificial communicators can support users who resist profiling and critically engage their own mediated expression. Interfaces could be retooled for dialogical co-creation, emphasizing source transparency, interpretive flexibility, and critical prompting. When LLM use is anchored in non-commodified practices, such as inquiry, creativity, and shared responsibility, digital mediation can support the social anchoring of individuation rather than its alienation. The profilic self, when shaped through cooperative transaction, can reflect relational subjectivity embedded in solidarity and inquiry. In such contexts, the digital persona is not a market caricature, but a communicative presence rooted in ethical life.
Central to this transformation is the curator who frames, contextualizes, and reinterprets artificial outputs. Curation becomes a form of critical authorship that restores historical specificity, highlights marginalized voices, and embeds artificial communication in living contexts. In education, journalism, and community archiving, this curatorial work is crucial to remaking the norms of communication and grounding them in collective life. It is important to remember that language itself is not the data set (Erdocia, Migge, and Schneider 2024; Holborow 2018, 5–6). What is extracted are patterns of language use, quantified and transformed through tokenization, annotation, and algorithmic processing for capital. Language as expression presupposes the retention of meaning, which depends on a community of meaning-makers who curate not only the outputs of LLMs but also the models themselves.
This vision aligns with Esposito’s insight into “control over control,” the need to manage systems that operate through meaning-independent procedures calibrated to produce plausible results without direct human understanding (Esposito 2022, xii–xiv). The goal is not to eliminate opacity but to democratize mediation. Opaqueness can be reframed. The black-box nature of LLMs becomes a shared horizon for interpretation, subject to public accountability and collaborative use. Opacity, rather than an obstacle, becomes a condition for communal intelligence, where meaning is co-produced. This reclaiming of artificial communication for transformative praxis requires redistributing authorship, establishing shared control over institutions of communication, and recognizing the contributions of those who sustain the communicative commons. These include teachers, caregivers, translators, organizers, and artists, whose work is foundational but undervalued in capitalist systems.
Such transformation depends not on ethical principles alone but on social movements and cultural reorientation. Since LLMs are already objectifications of interdependent human activity, their reappropriation must connect technical design with collective empowerment. A digital commons can support language as a means to express, critique, and reshape social life. The future of artificial communication is not technologically determined but shaped by how people confront its limits and possibilities. The task is not to anthropomorphize algorithmic systems—artificial communicators are already humanized nature—but to transform the social relations they reflect. Like all technologies, LLMs bear the imprint of the society that produced them. Their transformation demands not just improved engineering but deep social change, embedding them in shared life grounded in cooperation, accountability, and solidarity.
Thanks to Elena Esposito’s concept of artificial communication, we now have a framework for understanding large language models (LLMs) for what they are: systems trained to detect and recombine textual patterns across large datasets drawn from human language. Their functioning depends not on internal cognition but on the capacity to generate statistically coherent continuations of prior discourse. In this sense, LLMs are artificial communicators.
Esposito’s concept brings out several important implications. First, it shows that intelligence is not necessary for LLMs to shape human practices. They shift writing from deliberative authorship to processes of prompting, selecting, revising, and reframing. Consequently, artificial communication becomes a technology of orientation. It reorganizes how people engage with language, how they express themselves, and how they interpret others. In doing so, it alters the conditions under which human subjectivity and becoming are shaped, favoring speed, clarity, and engagement over ambiguity, reflexivity, and slowness. These tendencies reflect broader social imperatives, particularly under capitalist relations, where communication is increasingly subordinated to exchangeability, optimization, and behavioral capture. LLMs, in this context, help mold not only expression but also our sense of self.
Second, Esposito also suggests that what has become “smarter” is not the machine, but society itself, which has invented a new mode of communication. This reframes the LLM as an objectification of human powers. The capacities that LLMs exhibit—the manipulation of syntax, the retention of information, the mimicry of genres—are expressions of accumulated human activity. They reflect the long processes of grammatical formalization, epistemic abstraction, and digital infrastructure-building. In this way, LLMs instantiate what Marx called the humanization of nature, the realization of human capacities in shareable, sensuous forms. But under current social conditions, they also manifest estranged or alienated potential. Emerging within capitalist relations, LLMs do not simply support communication—they reorganize it around the imperatives of fluency, prediction, and commodification. Language becomes a resource for data extraction; expression becomes a site of behavioral engineering. The very powers LLMs embody are thus redirected toward ends that estrange us from ourselves. What is objectified is not only human capacity, but also the historically specific social relations under which such objectification occurs.
Third, Esposito’s concept clarifies the central problem of artificial communication: how to exercise control over systems that themselves operate through a controlled lack of control. This is not merely a technical problem of managing unpredictability, but a social and communicative challenge. Artificial communication emerges from and reinforces particular communicative norms, institutional priorities, and social relations. If LLMs are products of the humanization of nature, then responding to the problems they raise—problems of authorship, agency, and meaning—requires more than tighter regulations. This is where the notion of “humanization of nature” complements the concept of artificial communication. The problem Esposito raises requires transformative praxis, collective efforts to confront the alienating relations within which artificial communicators are developed and used, and to reorient artificial communication toward emancipatory possibilities.
Such praxis involves rethinking the conditions in which LLMs are embedded. Instead of privately owned systems optimized for engagement and monetization, artificial communication must be organized through democratically governed, transparent, and cooperatively managed infrastructures. Instead of privileging fluency and coherence alone, systems should support ambiguity, depth, and plurality. And rather than reducing human expression to behavior for prediction, we can foster shared meaning-making, rooted in real communities and histories. This also entails reclaiming the open-endedness and uncertainty that characterize meaningful communication but are typically suppressed in algorithmic systems for the sake of control and standardization.
In sum, Esposito’s concept of artificial communication shows us that LLMs are not external threats or autonomous intelligences, but reflections of our own powers, under current social conditions. They are realizations of historically mediated human capacities—and thus, both their problems and their potentials are ours. To confront the alienating effects of artificial communication is to confront the social relations that shape them. And to reclaim artificial communication as a site of collective expression is to reclaim ourselves, not as isolated users of a tool, but as participants in a shared world of meaning-making. This insight extends beyond LLMs. It offers a framework for understanding all technologies as dialectical sites of objectification and alienation, whose potential liberation depends on the transformation of the social relations they both express and reproduce.
If LLMs reflect the humanization of nature, then their future depends not only on technical refinement, but on the social and communicative transformation of the world they mediate. What is needed is a human self not fixed by profit-maximizing repetition, but formed through co-creation, shared authorship, and expressive freedom—a self whose becoming reflects the world we wish to build, not merely the one we have inherited.