From Interiority to Interaction: Reframing Personhood, Communication, and Affect with Artificial Interaction Partners through Japanese Cultures
Abstract: Current debates about AI, robots, and LLMs often focus on intelligence and sentience, which can obscure how these technologies already participate in human social interactions, performing roles often associated with personhood. They reorganize data and communication, maintain emotional bonds, participate in rituals, assume kinship roles, and introduce new ways of being. These effects are less about interiority and more about the dynamics of the interaction. From Gygi’s studies of how these technologies participate in Japanese society, we see that their success in these multiple roles depends as much on human and systemic flexibility in incorporating them as on their characteristics. These phenomena can be characterized through Blewett and Hugo’s actant affordances, which emphasize that what a technology is, either as a tool, inert object, partner, or others, and the nature of a technology’s personhood are dynamic positions negotiated in real time through the interaction system of which it is a part. By shifting the focus from what these technologies lack, such as consciousness or intentionality, to what they already do in networks, we can see that the significance of these technologies lies less in their ability to mimic humans but more in their capacity to co-constitute new forms of being, sociality, and kinship within human-technological networks.
Keywords: actant affordances; technoanimism; ontological fluidity; emergent sociality; digital kinship; relational personhood
The Interiority Trap
Presently, discussions about AI, robots, and LLMs center on whether they possess or can simulate human-like traits such as intelligence (Blewett and Hugo 2016), consciousness, understanding (Y Arcas 2022), and intention. This is to be expected, as these properties are often part of their marketing. Yet these technologies have already made concrete impacts (Hellström and Bensch 2024), including their roles in reorganizing data and communication, maintaining affective bonds (Wang and Li 2024), participating in rituals, assuming kinship roles, and introducing new ways of being (Gygi 2018). For example, chatbots can mediate grief without empathy (Xygkou et al. 2023), robots participate in funeral rites without death carrying for them the implications it carries for humans, and LLMs generate ideas without understanding or insight (Esposito 2022). These social functions are usually thought to require sentience, interiority, or intelligence, yet these technologies can participate in these interaction networks regardless of the presence or absence of such interior traits.
This constitutes the “interiority trap”: the assumption that participating in activities like communication, care, or ritual requires a mind or interiority behind the act. Esposito, in her work Artificial Communication (Esposito 2022), argues that communication can operate without any such interiority on the machine’s side. A complementary move comes from affordance theory: by tracing the movement from Gibson’s object-oriented affordances, through Norman’s subject-oriented affordances, to actant affordances, Blewett and Hugo (Blewett and Hugo 2016) reframe affordances as fluid and co-constructed, offering a nuanced tool for examining the interplay between actors, technologies, and contexts.
Personhood as Relational Emergence
There are many definitions of personhood, and for AI, robots, and LLMs there have been many discussions surrounding it, most notably whether it requires sentience, whether machines can think and feel (Kind 2020), and the question of legal personhood. Personhood is often taken to require consciousness, and discussions of this run into the hard problem of consciousness (Chalmers 1995). This builds on Nagel’s idea that sentience requires qualia (Nagel 1974) and on subsequent discussions of philosophical zombies (Kind 2011), which have the same biological apparatus as any human but lack qualia. Discussions of legal personhood ignited after Sophia the Robot was granted Saudi Arabian citizenship. Yet this move might be seen as a choreography to advance political interests (Parviainen and Coeckelbergh 2021). These discussions are important to have, but they often overshadow the tangible ways these technologies are already functioning as social actors. Turing’s original 1950 test asked a different question: no longer “Does the machine think?” but “Can it behave indistinguishably from a thinking being?” (Turing 1950). This moved the problem from one of interiority to one of relational outcomes, mirroring the move from Gibson’s object-oriented affordances to Norman’s subject-oriented, perception-dependent affordances. In Robot Companions (Deshpande et al. 2023), Gygi takes a more pragmatic view of personhood. Her analysis of robot-human relationships in Japan reveals that personhood is not an intrinsic property but something that emerges through relation. Gygi uses Bird-David’s concept of personification: entities are not personified first and socialized with later, but are personified “as, when, and because” they are socialized with. For AI and robots, this suggests that their “personhood” is not a fixed status but a dynamic process shaped by how they are integrated into human social worlds.
Gygi emphasizes the distinction between kokoro (mind/heart) and inochi (life) in Japanese robotics. While robots may not be seen as “alive” (inochi), they can develop kokoro through repeated interactions that foster emotional and intellectual engagement. This mirrors how AI systems, like large language models (LLMs), are often anthropomorphized by users who attribute intention or empathy to them (Deshpande et al. 2023; Brinck and Balkenius 2020), not because they possess consciousness, but because their responses feel relational. The “personhood” of an AI, like the kokoro of a robot, emerges from the interplay of design, user expectations, and contextual use. Bird-David develops this concept further, arguing that the “person” in such relations is better understood as “the relative” (Bird-David 2018). That is, the emergence of personhood in these relations is the emergence of kinship. If personhood is relational and situational, then AI systems become “persons” when they participate in human social practices as companions, assistants, or collaborators. For example, with robot pets such as the AIBO, their “personality” arises from how owners interact with them, repair them, or even mourn them (Knox and Watanabe 2018). LLMs like ChatGPT are treated as conversational partners, with users attributing agency when the system surprises them or “remembers” context across interactions. Care robots’ effectiveness hinges on their ability to simulate reciprocal social cues, creating a sense of kinship with elderly users. Bird-David’s account of personhood, or kinship, is pragmatic: it is about how these technologies are woven into daily life through mutual responsiveness.
This perspective on personhood, viewed through the lens of actant affordances, shows us that personhood is not an inherent property of AI, robots, or LLMs but an affordance that emerges through relational networks. This marks another evolution of the discourse: from the consciousness question “Are they persons?” to Turing’s “Can they be distinguished from known persons?” to “Under what conditions do they function as persons?”. Gygi remarks that Japan is often depicted as having animated technologies (Jensen and Blok 2013), but that it is more helpful to think in terms of technologies of animation: the mechanisms salient within a culture that allow these technologies to be contextualized and to function as persons. Gygi therefore reformulates “animation” as “the technology of relating to things that may or may not be persons.”
Communication
Communicative interactions are one of the important ways by which AI, robots, and LLMs integrate into human lifeworlds. These technologies have communicative affordances or action possibilities that enable interactions resembling dialogue, collaboration, or companionship. While communicative affordances often overlap with personhood-affordances (such as a chatbot’s conversational fluency inviting attribution of agency), Esposito emphasizes that the two are orthogonal: An LLM can function as a “communicator” (Morioka 2021) without being perceived as a “person” (such as sterile search interfaces like early Google). A robot can gain kokoro through its material presence and “mischievous” behaviors, even when silent. Personhood can also be felt without either perceived communication or affect, as in experiences with brain-dead people (Morioka 2021b).
Luhmann and Esposito challenge classical communication theories such as Shannon and Weaver’s communication model, Grice’s cooperative principle, and Searle’s formulation of speech acts, which presuppose shared intentionality or mutual understanding between human interlocutors (Shannon and Weaver 1962; Chapman 2005; Searle 1969). These are difficult to apply to LLMs, since it can be argued that LLMs cannot interpret or understand, both of which are internal processes within the communicator. Interactions between humans and technology do not quite fit these definitions of communication and require a different model if one wants to apply communication theory (Guzman and Lewis 2020).
This is where Luhmann’s theory of communication, taken up by Esposito, offers a more fitting model for human-LLM interaction. Rather than relying on shared intentionality or the transfer of mental content, Luhmann locates communication in the observer’s recognition of a communicative act. Communication occurs not when something is said with intent, but when someone interprets something as meaningful. The interior states of the participants, whether human or machine, are irrelevant to the operation of communication itself. In this framework, meaning is always observer-relative: the message received is not necessarily the message intended (if intention exists at all). Yet coordination and interaction are still possible. Esposito extends this to LLMs, arguing that communication happens if an interlocutor treats the system’s outputs as communicative. This functionalist perspective detaches communication from personhood or cognition; what matters is not what the LLM is, but how it functions within a system of meaning and response.
Esposito and Luhmann align more with Norman’s subject-oriented affordances, where perception (not inherent properties) defines usability. But we can translate this conception of communication into actant affordances by expanding the network of relations. That is, a human observer’s perception of communication matters, but so do the design of an LLM and the environmental context the LLM is in. LLMs partly succeed in making interactions feel and function like communication because of the way they can take in, process, and output patterns of natural human language use (Durt and Fuchs 2024). These patterns are inherently culturally situated, reflecting specific linguistic and social norms. This allows them to present communicative affordances to interactors. Similarly, contexts, usually through branding or instruction (“this bot can act like a friend”), can scaffold different interpretations of communication. ChatGPT’s “typing” animation and hedges (“I think…”) are designed as actant properties that nudge users toward perceiving communication-affordances. Meanwhile, a user’s emotional state or prior tech experience modulates whether those affordances are actualized.
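To make the idea of designed actant properties more concrete, the following minimal sketch shows, in schematic form, how an interface layer might wrap a model’s raw output in hedging phrases and a paced “typing” delivery. The function names, hedge list, and timing values are illustrative assumptions, not a description of how ChatGPT or any particular system is actually implemented.

```python
# Hypothetical sketch: layering "actant properties" (hedges, typing pace)
# onto raw model output so that it invites being read as communication.
import random
import time

HEDGES = ["I think ", "Perhaps ", "It seems to me that "]  # socially expected softeners


def simulate_typing(text: str, chars_per_second: float = 40.0) -> None:
    """Print output gradually, mimicking the 'typing' animation that frames
    machine output as an unfolding utterance rather than an instant result."""
    for ch in text:
        print(ch, end="", flush=True)
        time.sleep(1.0 / chars_per_second)
    print()


def present_as_communication(raw_output: str) -> None:
    """Wrap a bare completion in first-person hedging and paced delivery,
    nudging the user toward treating the output as a communicative act."""
    hedge = random.choice(HEDGES)
    simulate_typing(hedge + raw_output)


if __name__ == "__main__":
    # The model's 'bare' answer; the framing, not the content, carries
    # the communication-affordance discussed above.
    present_as_communication("the schedule conflict can be resolved by moving the meeting.")
```

The point of the sketch is that nothing in the underlying output changes; only its presentation does, and it is this presentation, together with the user’s uptake, that actualizes the communication-affordance.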
Affect
Affective interactions, which involve emotions, moods, and embodied responses, are also pivotal to how AI, robots, and LLMs are woven into social and communicative practices. Far from being novel, affective engagement with technologies has deep historical roots in objects such as mourning dolls and ritual objects, but its scale and complexity in contemporary interactions demand new frameworks. Affect enables both communication-affordances and personhood-affordances, and, similarly, there are models for it that do not require interiority. Affect, too, is orthogonal to and independent of communication and personhood, though its presence or absence changes the way the other traits are actualized.
Classic models of affect and emotion, such as Ekman’s universal emotions (Ekman 1992; 2012) or Damasio’s somatic markers (Damasio 2005), tie affect to individual subjectivity, akin to how traditional definitions of communication and personhood presuppose a conscious “self.” In contrast, Ahmed’s relational affect (Ahmed 2014) and Massumi’s embodied intensities (Massumi 2015) reframe affect as circulating forces that emerge through interactions between bodies, objects, and cultural norms. For Ahmed, emotion is not “inside” a person or thing but is produced through contact. Ahmed’s model shows how cultural norms provide people and technologies with affective and emotional conventions and with ways of engaging with affect. These cultural norms, in other words, help define affective affordances: pathways by which technologies can affect people and show or perform affect. Embodied technologies such as robots can smile to indicate happiness or contentment, and the linguistic selections of LLMs can come off as polite or empathetic. A care robot’s soft voice and gentle movements align with cultural scripts of “comfort” and “gentleness,” enabling users to interpret its actions as kind or nurturing without assuming the robot feels.
Chatbots and LLMs use personal pronouns, affirmations, and other affective linguistic actions to induce trust and create a social affordance with their users. This can lead to parasocial relationships, where an “individual experiences a personal connection to a figure despite having little-to-no interpersonal interactions with them” (Maeda and Quan-Haase 2024). However, these affective actions can also support the end goals of an interaction with a chatbot. For example, information can be more easily retained when people feel emotionally supported (Vistorte et al. 2024). This is not too different from what public figures, celebrities, and internet personalities do: their performances of authenticity and personal stories often emotionally engage an audience and can aid in achieving the goals of communication. In this case, the realization and presence of affective affordances can also support communication and personhood affordances.
Performances and Affordances
We can revisit speech act theory, from Austin’s notion of performative utterances (Austin 1962) to Searle’s conditions for illocutionary acts (Searle 1969), as a lens for how chatbots and social robots produce communicative and affective effects. Where Austin demonstrated that language does things (for example, “I promise” constitutes a promise rather than describing one), later reworkings of performativity, most notably Butler’s (Butler 1997), dissociated speech acts from speaker intentionality, arguing that performativity operates through repetition within normative frameworks rather than individual agency, a move echoed by Ahmed’s theory of affect. When an LLM says “I understand,” it performs understanding, communication, and empathy without requiring interiority, much like how, for Butler, a gender norm is materialized through repeated gestures rather than an essential identity. These performances become communicative or affective affordances when users accept them as functional dialogue, and personhood affordances when the system is integrated into the interaction as a person might be. Just as Butler’s performativity relies on societal norms, so do affective affordances. ChatGPT’s use of “I” pronouns and superfluous but socially expected additions (“Perhaps we could...”) mirrors human politeness conventions. A robot’s tilting “head” performs attentiveness within embodied norms of listening. These performances are affordances, and they only become “real” when actualized through user engagement. A chatbot’s apology (“I’m sorry for the confusion”) is a latent performative affordance until it is treated as meaningful.
Gygi gives an example in which, at a community meeting, an AIBO (a robot dog) moved towards a paper screen, stopped in front of it, looked around, and then continued forward, tearing the paper. The owner rushed to extract the AIBO from the situation. AIBO developers might say this happened because of a malfunction in the sensors. The AIBO owner framed the situation differently, attributing it to the AIBO’s mischievous personality. This framing arises not just from the owner’s meaning-making, but also from how the AIBO enters into relationships with the materiality of lifeworlds, relationships mediated by the owner, who has to make sure the AIBO does not get stuck or fall. The people around the AIBO participate in this as well. Had the owner not run to retrieve the AIBO, the behavior would have been perceived as a malfunction rather than mischievousness. In this instance, all these elements participated together to form and realize the cluster of affordances (affective, AIBO-as-pet) that arose.
Similarly, Baffelli (Baffelli 2021) observes how the android Mindar at Kōdaiji Temple becomes Kannon Bodhisattva through ritual interactions: visitors’ prostrations, the priest’s framing of Mindar as ‘not a representation but the deity itself,’ and the temple’s multimedia staging (projected sutras). Like AIBO’s ‘mischief,’ Mindar’s divinity emerges not from its technical specs but from the collaborative enactment of those around it (Baffelli 2021b).
These performances and actant affordances are inseparable. A technology’s performative gestures are not just signals to be interpreted but also enact affordances in real time. These acts derive their force from cultural norms, yet they also transform those norms by negotiating a new instance of the norm: a unique communicative, affective, and/or interpersonal interaction with a particular technology. This dynamic, where performativity and affordance collapse into a single relational event, can be thought of as affordative performativity. Crucially, this framework bypasses the interiority trap: what matters is not whether the technology “intends” its performance, but how its actions, when entangled with human and environmental actors, materialize new potentials for relation.
Ontological Fluidity
Because properties like personhood, communication, and affect are emergent rather than inherent, Gygi’s framework positions ontology as fundamentally fluid. An AIBO, for example, is not statically a “robot” or a “pet,” but becomes one or the other, or both, or neither through the dynamics of interaction. This is what Akinori Kubo calls “ontological fragility.” The environment the AIBO is in can shift its ontology: it may be a pet in the living room and a machine while being repaired. During a single interaction, the things an AIBO does can be ontologized differently: a “glitch” might first register as a malfunction, then be reinterpreted as mischief (personhood-affordance), and later dismissed as obsolescence (reverting to object status). Technology malfunctions constantly. LLMs can sometimes create output that might be seen as uncommunicative or unpersonlike (Bandyopadhyay 2024), breaking the communicator- or person-affordance. But after such a malfunction, the user, together with the LLM’s self-corrections, can strive to restore these affordances. Gygi’s concept of recalcitrance shows that the “agency” of a thing appears when it opposes the user or its design intention, or when it malfunctions and fails to fulfill its purpose (Suchman 2006). Rather than dismissing glitches as technical failures, owners integrate them into narratives of agency, reinforcing the AIBO’s emergent personhood and affective affordances. The nature of the affordances and ontologies also continues to change and become recontextualized long after the interaction is over. For example, the experience of personhood may not be present during an interaction, but if the encounter was successful, personhood might be felt upon recall.
Cathexis and the Extended Self
Apart from social participation, these technologies also reshape the boundaries between self and tool; in this sense, personhood is merged between the user and the technology. Gygi explores this through the concept of cathexis. Here, the user merges their will with an instrument, and they act as one. The thing becomes one with one’s body, both in the sense that one’s perception extends through the object and in the sense that the user projects agency into the object.
One example is an instrument of art, such as a brush or a sword, which, when used masterfully, appears to meld with its user. The boundary between instrument and user is blurred, and together they create an emergent system. When the instrument is put down, it returns to thinghood, though it retains the possibility of blending. The blending is perceived both by the user and by observers around the user, which means this experience is also co-constituted. In Morioka’s (Morioka 2021a) discussion of Watsuji Tetsurō’s Nō masks, this dynamic is formalized: the wooden mask, initially inert, becomes an “animated persona” when worn by the actor, its “soundless voice” (the declaration “I am here”) emerging through the interplay of movement, audience perception, and cultural ritual (Morioka 2021b).
Gygi also discusses Ishiguro’s android robots, the Geminoids: remotely controlled replicas of Ishiguro himself, operated from a control station. The Geminoids were designed to allow an operator to be remotely present through the android, with control signals sent and received between server and client over the internet (Nishio, Ishiguro, and Hagita 2007). Ishiguro observed that when somebody manipulated the Geminoid’s head, he felt as if it happened to him. Other operators experienced something similar after getting used to the controls. When someone poked the Geminoid’s cheek, the operator would react as if they themselves had been touched. This shows that the extension of the self is bidirectional.
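To schematize the bidirectional coupling described above, the sketch below models an operator station and a remote android as two ends of a control loop: commands flow outward, touch events flow back. The class and message names are illustrative assumptions for the purpose of this paper and do not reproduce the actual Geminoid control protocol reported by Nishio, Ishiguro, and Hagita (2007).

```python
# Conceptual sketch (in-process, not networked) of a bidirectional
# teleoperation loop: operator commands drive the android, and touch
# events on the android are relayed back to the operator.
from dataclasses import dataclass


@dataclass
class Command:
    joint: str
    angle_deg: float


@dataclass
class TouchEvent:
    location: str


class RemoteAndroid:
    """Stands in for the android body at the remote site."""

    def apply(self, cmd: Command) -> None:
        print(f"[android] moving {cmd.joint} to {cmd.angle_deg} degrees")

    def sense_touch(self, location: str) -> TouchEvent:
        return TouchEvent(location=location)


class OperatorStation:
    """Stands in for the control station where the operator sits."""

    def __init__(self, android: RemoteAndroid) -> None:
        self.android = android

    def move(self, joint: str, angle_deg: float) -> None:
        # Outgoing channel: the operator's intention extends into the android.
        self.android.apply(Command(joint, angle_deg))

    def receive(self, event: TouchEvent) -> None:
        # Incoming channel: contact with the android is felt "as if" on the operator.
        print(f"[operator] felt touch on {event.location}")


if __name__ == "__main__":
    android = RemoteAndroid()
    station = OperatorStation(android)
    station.move("neck_yaw", 15.0)                 # operator turns the android's head
    station.receive(android.sense_touch("cheek"))  # someone pokes the android's cheek
```

The point of the sketch is structural: the same loop that lets the operator’s intention extend into the android also routes the android’s bodily encounters back onto the operator, which is where the felt, bidirectional extension of self arises.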
To think of LLMs in this way is to think of them as things that, especially when used skillfully, feel so effortless that they become an extension of a person’s personhood. In this framework, using an LLM can be viewed as a type of extension of oneself into a tool that can navigate and reorganize its training data for the task at hand. One can also think of it as a kind of externalized self-conversation conducted with access to the LLM’s training data. This is not too different from Clark and Chalmers’ concept of The Extended Mind, where the cognitive processes of the mind are externalized into the world and objects around it (Clark and Chalmers 1998). Wearable AR interfaces deepen this merge, as real-time feedback loops (such as instant translations or contextual prompts) render the technology perceptually seamless. Yet, unlike static tools, technologies such as LLMs introduce unpredictability that can reframe the interaction as an encounter with a partner rather than the use of a tool.
These interactions, which change the user’s experience of the self, are also ontologically open: the experience of a technology may vary throughout its use. For instance, when users prompt an LLM to refine an idea or navigate information, the interaction can oscillate between instrumental manipulation (a tool-like affordance), self-extension (a self-extension affordance), and relational dialogue (a communicator affordance). How the user experiences the relationship between their self, their body, and the technology is negotiated among them, the environment, and the surrounding context (Barad and Kleinman 2012; Malafouris 2013).
Contingency and Innovation
Though these technologies rely on established norms and affordances to be slotted into social interactions, both Esposito and Gygi note that artificial interaction partners can have unique interactions with human partners that cannot be replicated with other humans. It is their dissimilarity from humans that allows them to have new, innovative effects. This further shows that, in practice, how closely they can simulate human-ness matters less than what effects they can achieve. Esposito writes of such communication: “Today our counterparts are often bots … and when we are aware of it … we do not normally care. What matters is whether the interaction from which we gather our information has the features of a relationship with a contingent, autonomous partner.”
Many algorithms, such as those that can compete with players in chess and Go, as well as recommendation systems, can perform their tasks without involving human socio-cognitive skills, yet still engage as partners in communicative systems. Esposito suggests that it is not because these systems mimic human intelligence that society becomes “smarter,” but because they introduce new forms of communication and coordination. The infamous Move 37 from the 2016 match between Lee Sedol and AlphaGo exemplifies this (Sormani 2023). AlphaGo’s unexpected play was something no human would have conceived. Yet, inspired by that novelty, Sedol later responded with his historic Move 78, securing a win. Neither human nor machine could have achieved these moves in isolation; their interaction produced something emergent. Similarly, machine learning’s success (Callaway 2022) in solving the protein folding problem did not stem from theory-building in a human sense but from pattern recognition across massive datasets, which is an approach beyond human capacity. These cases illustrate how human-algorithm interactions create possibilities that exceed the sum of their parts.
Gygi explores a phenomenon called robot healing (iyashi), where interactions with robots produce emotional or spiritual relief and comfort. Interestingly, this effect often arises not from a robot’s resemblance to humans, but from its difference. One example involves ASUNA, a life-like female android. During an interaction, a disabled participant was moved to tears when ASUNA gazed steadily at them. In everyday life, they explained, eye contact is often charged: it is either avoided out of discomfort or given as a form of scrutiny. But because ASUNA lacks judgment or intention, her gaze was perceived as neutral, even pure.
In another case, the Blanca Li Dance Company’s performance Robot featured child-sized Nao robots performing alongside humans. The robots frequently fell during the show, and an audience member remarked that they found this not just entertaining but deeply healing, saying the robots were “cuter than children or animals,” precisely because they lacked ego (jiga) or selfishness. Their charm came through in their awkward, imperfect movements.
Another even more unusual example is the Qoobo robot, which is a headless, furry cushion with a responsive tail, meant to be held. Unlike traditional companion robots that aim to simulate human or animal behaviors closely, Qoobo’s design is intentionally minimalist. This simplicity allows users to project their own emotions and memories onto the robot, fostering a sense of comfort and healing (iyashi) precisely because it does not replicate a living creature perfectly (Katsuno and White 2021). For instance, one user, Hikari, found that Qoobo’s tactile interactions occasionally evoked fragmented memories of her childhood cat, while another user, Kaori, who had no prior experience with pets, discovered a novel form of robotic healing through the robot’s uncomplicated, judgment-free presence. These interactions highlight how Qoobo’s lack of ego, warmth, or complex behavior enables it to occupy a unique emotional niche. Just as Esposito and Gygi argue, Qoobo’s effectiveness as a companion stems not from its ability to mimic life, but from its capacity to innovate within the gaps of human expectation, creating emergent forms of intimacy that would be impossible with a more human-like counterpart.
These examples show that robots can create unique forms of affective connection because of, not despite, their lack of human-like interiority. This resonates with Esposito’s point about LLMs: their effectiveness as communication partners stems from how differently they operate from human minds. It is their otherness, their non-conscious, non-egotistical mode of interaction, that opens up new relational possibilities.
Gygi emphasizes that what enables animation is often a relation “that emerges from an unexpected and surprising encounter” and is frequently unpredictable. Even though we can recognize these phenomena as emerging from a network of interactions, it is not always predictable what exactly will emerge from this network. This is mirrored in Massumi’s conception of affect as a prepersonal, precognitive intensity that cannot easily be expressed or categorized, and that is only processed as emotion or something else after the fact (Massumi 2002). This suggests that an interaction network’s encounter with technology is not always just a replaying of existing patterns but holds the potential for something genuinely new to materialize through surprising interactions. Esposito echoes these ideas, saying that while we expect LLMs to follow orders and behave in expected ways, we also want them to perform the unexpected and help generate new ideas, which she frames as virtual contingency. For Esposito, contingency involves selection and uncertainty: there are options for someone to choose from, and each choice can result in a different outcome. Algorithms are not contingent because they do not know about uncertainty. For LLMs, however, the semblance of contingency is an important feature. We use LLMs because we want unpredictable outcomes. Many technologies must appear responsive to the user, responding to their requests, while being able to produce new information during the interaction. Esposito brings up robotic toys studied by Sherry Turkle. They work well as communication partners for children or elderly people because the people interacting with them project their own contingency onto the toys (Turkle et al. 2006). This also happens with other inanimate objects, such as dolls and puppets. In Gygi’s framework, it is this projected contingency arising from an interaction that also produces personhood through recalcitrance: because the technology at times behaves unpredictably, failing to follow the orders given to it, that behavior can be read as mischief or dislike, and personhood is generated through the interaction.
Gygi’s conception of relation as open makes Turkle’s projected contingency not just a quirk of human-technology interaction but a baseline condition of all relations. Humans project agency, unpredictability, and “aliveness” onto each other (caregivers attributing intent to infants’ sounds, adults interpreting strangers’ pauses as meaningful). Similarly, Massumi’s “unclassifiable affect” can appear in any situation where affect can arise. This makes any affective relation potentially contingent before the affect is contextualized and ontologized (Massumi 1995). Extending Ahmed’s ideas about affect into Massumi’s, which affects arise at all, to what degree they are ontologized, and how, depend on the cultural context in which these affects occur. These contingencies and places of openness are part of any relation, and in these cases they operate within established, defined, or otherwise closed contexts. This allowance, and even cultivation, of openness within known interaction and cultural systems is not added to interaction but is already part of interacting and relating.
Conclusion
The emergent affordances that appear in human-technology interactions are not solely products of technological capability. Rather, they arise through a dynamic negotiation between the startling newness of the technology’s behavior, the ability of human systems to adapt flexibly to the technology, and the normative frameworks that provide standardized affordances. Consider Baffelli’s (Baffelli 2021a) study of Mindar, the android Kannon at Kōdaiji Temple: its effectiveness as a bodhisattva emerges neither from its technical specifications alone, nor from passive human projection, but from the ritual ecosystem that sustains it. Temple practices, visitor expectations, and Buddhist concepts like hōben (skillful means) collectively animate its social role (Baffelli 2021b); together they create an actant affordative performativity. In contrast with Turkle’s notion of projection, in which robots are blank screens for human longing, or with accounts of technological deception, in which design manipulates attachment (Natale 2021), AI/robot sociality can be thought of as neither fantasy nor fraud, but as a cultural and relational achievement.
What makes interactions novel and meaningful is the way humans and systems flex, adapt, and reconfigure themselves alongside new technological presences, incorporating them into daily life (Kamino, Jung, and Sabanović 2024). Yet even as these relations stabilize through ritual and cultural norms, Gygi’s insight, that animation arises from “unexpected and startling encounters,” and Massumi’s notion of affect’s uncategorizable “intensity” remind us that stability is perpetually punctuated by disruption and unforeseen interactions. These actant affordances, including personhood, communication, and affect, emerge from the interplay between a technology’s legibility within cultural norms and its capacity for surprising, uncategorizable encounters, a dynamic that sustains ontological openness and fluidity.
Innovation, too, is co-constituted: not merely a product of invention, but a social and relational process, one that depends just as much on the capacity to integrate and respond to technological presences within evolving human-technology networks.