Normative Issues of Affective Computing
1. On Affective Computing in the History of Digitality, or How Modern Computers Became “Emotional”
Two lines of development can be used to assess the extent to which computers have become “emotional” in the history of modern digitality. The first concerns the growing—and increasingly affective—significance of digital devices for both society and individuals. Over time, users began to integrate these technologies not only into professional contexts but also into their private lives, forming emotional attachments to them. The second line marks a shift within computer science itself, exemplified by the emergence of Affective Computing: digital devices were endowed with emotion-recognition capacities and other affect-related functionalities. Taken together, these two developments give rise to a history of attributions, some extending across entire discourses—such as those surrounding robotics and artificial intelligence. Central questions include whether, and under what conditions, users perceive technological artifacts as particular kinds of actors, or even as entities possessing subjective standpoints, sensations such as pleasure and pain, and emotional states (Dreksler et al. 2025; Weizenbaum 1977). This special issue of “Philosophy & Digitality” focuses primarily on the second line of development, Affective Computing, while this introduction also highlights the points at which the two strands of development overlap.
Computers are true shapeshifters. As “nearly universal” machines, modern electronic computers have evolved into many forms and functions, from large supercomputers in research facilities to television sets with 24/7 access to streaming services to pocket-sized mobile phones (Haigh and Ceruzzi 2021, 3). In addition, computers have synthesized many types of older media, such as telephone, television, film, photography, and telegraphy, into a widely distributed technological infrastructure. Modern electronic digital computers are extremely versatile media that break down barriers between leisure and work, between private and public space, and between bodies and their environment, with both good and problematic results.
Today, digital computers are no longer designed exclusively as technological tools for solving specific scientific, administrative, or military problems, as was the case when they were first conceived in the 1930s and 1940s (Galison 1994; Gugerli 2018). Computers are not just problem-solving machines: they are aligned with, or even connected to, the human body, and thereby indirectly, so to speak, to the human “soul,” since they process individual sensory experiences. A standard smartphone alone is packed with more than a dozen microsensors and is at least as much a “measuring station” as a telephone (Gramelsberger 2023, 12–13). Based on data collected from various sources, computers have increasingly come to detect technologically and, as some say, to “recognize” human expressions and behavior as representations of internal affects and emotional states (Picard and Klein 2002).
In the form of networked smartphones, computer technology has undoubtedly developed from expensive calculators for very few people in 1950 into affordable companions for many in 2025. This development was accompanied by a process in which such devices became more and more emotionally relevant to their users (Turkle 2017). No longer seen as just “giant brains” (Berkeley 1949), but as emotionally expressive and reactive devices, computers are now objects that are loved and hated more intensely than other technical objects. Over the course of a few decades, computers have become able not only to stimulate but also to simulate emotions, as well as to detect them. At the same time, they have become emotionally and socially significant in many ways (Weber-Guskar 2021). As miniaturized objects of daily use, they carry personal meaning, for example when we archive messages from lost loved ones on them or when we are annoyed by them because work constantly reaches us through their communication channels. Computers are no longer just the complicated and often seemingly boring high technology of their early days; they are emotionally charged.
However, for almost fifty years, from the mid-20th century to the mid-1990s, computers and emotions were—at least at first glance—separate areas of research. Detecting, processing, or even simulating human emotions (their physiological manifestations or equivalents) are therefore still relatively new areas of computer science. They were first introduced in the 1990s under the name Affective Computing at MIT, in response to the “emotion turn” in neuroscience, and have since achieved several milestones (Damasio 1994; LeDoux 1996; Picard 1997, 10–11), as Rafael Calvo notes in this issue.
According to Rosalind Picard’s early description of the approach, Affective Computing is “computing that relates to, arises from, or deliberately influences emotions” (Picard 1997, 3; cf. Picard 1995, 1). Affective Computing experienced a major breakthrough around 2010, when facial recognition advanced and more wearable sensor technology became available to measure physical changes such as skin resistance, thermal patterns, heart rate, and other emotionally significant parameters (Picard et al. 2016). In parallel with the development of the technological infrastructure, the research field was further established academically with influential journals such as the IEEE Transactions on Affective Computing and the Sage journal Emotion Review (Institute of Electrical and Electronics Engineers 2010; Dalgleish et al. 2009), to name a few. Affective Computing is an interdisciplinary research approach at the intersection of computer science and the humanities, particularly the behavioral sciences and emotion psychology. In 2014, Rafael Calvo and colleagues published The Oxford Handbook of Affective Computing, which highlights the strong connection between engineering on the one hand and behavioral research, especially emotion psychology, on the other. This standard work also identifies and considers the numerous areas of application, from learning to games to health care (D’Mello and Graesser 2015; Yannakakis and Paiva 2015; Bickmore 2015). In the 2014 introduction, the editors write accordingly:
“As we write, Affective Computing (AC) is about to turn 18. Though relatively young but entering the age of maturity. AC is a blossoming multidisciplinary field encompassing computer science, engineering, psychology, education, neuroscience, and many other disciplines. AC research is diverse indeed.” (Calvo et al. 2014, 1)
More than ten years later, AC has not only matured further but has also grown in diversity. The approaches and applications of Affective Computing have become more differentiated since 2014, and so have the ethical and normative issues they raise (Ahmadpour et al. 2025). Not only have application bundles with different areas of significance established themselves, ranging from communication in psychotherapy and medical care (e.g., depression detection) to surveillance of public spaces, intelligent driving, and advertising, to highly controversial interview programs in migration control (Sanchez-Monedero and Dencik 2022); Affective Computing has also merged with other fields, above all machine learning and AI.
In the words of representatives of Affective Computing such as Rana el Kaliouby or Ana Paiva, digital technology should meanwhile become “emotionally intelligent” and “empathic” (Kaliouby 2017; Paiva et al. 2021), or at least “artificially empathic” (Misselhorn 2021)—an expression much contested in this context (Weber-Guskar 2022; Montemayor et al. 2022; Malinowska 2021). The discussion about empathy in media and technology use has by no means been limited to the field of Affective Computing; in recent years it has also included VR technology and artificial intelligence as a whole (McStay 2018; Nakamura 2020; Barbot and Kaufman 2020; Messeri 2024). In Affective Computing, however, the question of the desired or rejected combination, confrontation, or separation of human emotionality and affectivity on the one hand and technical systems on the other comes to a head (Tuschling 2014).
Affective Computing, as an approach to computer development with multiple applications, has also become a discursive showcase for critically analyzing different forms of the relationship between human emotionality and digital technology in all its (normative) consequences, and for predicting massive changes in that relationship. This special issue aims to offer a current cross-section of some of the observing disciplines involved. The focus is less on formulating a single position than on carefully clarifying some premises of the debates conducted so far.
2. Normative Assessments of Affective Computing
The ability to experience emotions, to understand the emotions of others, and to empathize with them is still widely regarded as a uniquely human characteristic. Consequently, the measurement and processing of affect-related data and the pattern recognition of emotionality, which form the core of Affective Computing, represent, in the eyes of many observers, a transgression of one of the last boundaries separating humans and machines (Goldie et al. 2011, 728). Unsurprisingly, Affective Computing has raised normative concerns from early on, as it claims to develop “emotionally intelligent” technologies with multiple applications in many parts of the life world (Bösel 2021; Angerer and Bösel 2015; Picard 2003; Hudlicka et al. 1999). In the following, we distinguish between ethical and conceptually normative concerns, with respect to which the interdisciplinary approaches in this special issue both differ and converge. While ethical concerns focus on moral issues in the narrower sense, conceptually normative concerns relate more strongly to the media-technological conditions of Affective Computing, as we will now briefly explain.
2.1 Ethical Assessments
From an ethical perspective, the automation of emotion recognition is viewed particularly critically, probably not least because there are already so many possible applications and because these are potentially within reach of commercial use, ranging from microtargeted advertising on the internet to the identification of potential terrorists, the improvement of driving safety, and the recognition of frustration in the classroom or of mental illness (Misselhorn 2021, 34–35).
One of the most widespread criticisms concerns data protection and privacy. Because affective computer systems access emotional information, their use carries a high risk of privacy violations of various kinds, especially for vulnerable groups, as demonstrated by crime-prevention surveillance in public places or the (temporary, now banned in the EU) use of lie detectors in the administrative processing of migration (Van Den Meerssche 2022). The invasion of privacy begins with gaining insights into emotional states that a person wishes to keep private (Weber-Guskar and Menges 2025) and becomes even more problematic when the recognition concerns emotional states that the person in question is not aware of or does not even want to think about, as in the case of emotions associated with trauma (C. Firmino De Souza et al. 2023). Such insights make it easier to manipulate people directly in their actions or indirectly by influencing their emotions or opinions.
The use of emotion recognition for self-tracking and emotional regulation is also assessed critically, as these practices can cause users to lose certain abilities, such as emotional interoception, and can lead to a certain kind of self-estrangement (Weber-Guskar 2024, 65–83).
Furthermore, in connection with the modeling of emotions in computers and their simulation by avatars and robots, the question has been raised as to whether this constitutes a form of deception or even betrayal that is fundamentally morally flawed (Coeckelbergh 2012; Sparrow and Sparrow 2006) or whether, on the other hand, devices that perfectly imitate emotionality should simply be included in the moral realm (Coeckelbergh 2010).
Yet some engineers were always aware of such ethical issues, and in the field of Affective Computing possible concerns and dangers were addressed early on. The first monograph on the approach even devotes a separate chapter to them, ranging from data protection and privacy to manipulation, to general questions about what can be considered the proper use of technology and what might be a harmful use of scientific innovations (Picard 1997, 113–137).
While data protection concerns need to be considered more urgently than ever in view of the widespread use of sensors, other questions that Picard raised in the early days of Affective Computing are currently dormant, have been resolved, or have been abandoned. Examples of the last case are questions that one might call specific normative questions of Affective Computing design. Picard spoke about the “responsibility” of designers for computers and considered what emotion-like capabilities computers should have for their own sake. She asked whether robots should be enabled to hide their modeled emotions, or whether “rapid primary emotions,” which take effect in humans in existentially dangerous situations and override “cortical thinking,” should also be imitated in order to protect technical systems from danger (Picard 1997, 132). So far, there have been few attempts to model emotions in robots used in practice by building in a functional equivalent of emotions: one example is a robot designed for a trip to Mars, in which “fear” is supposed to make it behave very cautiously (Wolfangel 2019). Today, however, this feature would be seen as a matter of caution on the part of the robot’s owners and users, who do not want to lose an expensive tool in an avoidable accident. It is not about responsibility toward the robot itself, which would make it a moral object.
Some Affective Computing papers, and much industrial discourse and product advertising, nevertheless fuel further concerns by claiming that computers not only understand emotions (smart eye 2025) but are even empathic (cf. Paiva et al. 2021), that they can be emotional companions (cf. https://nomi.ai/: “An AI companion with memory and a soul”), or that they will become emotional beings (for a critical analysis of this rhetoric, see Tuschling 2020).
In fact, major challenges arise today when dialogue systems such as the currently freely available large language models are emotionalized in their communication and accessible at all times. For example, there is a great deal of debate about the extent to which people can form emotional relationships with social chatbots that recognize emotions (so far mostly through sentiment analysis, though since GPT-4o facial emotion recognition technology has also been publicly available), simulate emotions, and respond emotionally to users’ emotions, and about the extent to which these relationships are beneficial or dangerous (Weber-Guskar 2024; 2022b; Nyholm 2020; Seibt et al. 2020). On the one hand, social chatbots reduce feelings of loneliness, trigger positive emotions in users, and can sometimes even help with mild mental health issues (Skjuve et al. 2022; Ta et al. 2020). On the other hand, they sometimes encourage suicidal tendencies and foster addiction (Laestadius et al. 2024), and recent findings show that they can exacerbate reality distortions and psychoses and trigger revival experiences (Maples et al. 2024; Østergaard 2023).
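To make the technical basis of such emotion recognition in chatbots more concrete: in its simplest form, sentiment analysis assigns valence scores to words and aggregates them over an utterance. The following minimal sketch illustrates this lexicon-based approach; the word list and weights are invented for illustration and do not stem from any deployed system.

```python
# Minimal sketch of lexicon-based sentiment analysis, the simplest form
# of text-based emotion recognition mentioned above. The lexicon and its
# valence weights are illustrative assumptions, not real system data.
POSITIVE = {"happy": 1.0, "glad": 0.8, "love": 1.0, "great": 0.6}
NEGATIVE = {"sad": -1.0, "lonely": -0.8, "hate": -1.0, "awful": -0.6}
LEXICON = {**POSITIVE, **NEGATIVE}

def sentiment_score(utterance: str) -> float:
    """Average the valence weights of known words; 0.0 means neutral."""
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("I feel sad and lonely today"))  # prints a negative score
```

A chatbot built on this principle would then condition its reply on the sign and magnitude of the score; deployed systems replace the fixed lexicon with learned models, but the input-to-valence mapping remains the core operation.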
2.2 Conceptual Normative Assessments
Beyond the ethical evaluation of the use and effects of affective computer systems, there is a second normative dimension to consider: the conceptual dimension, which deals with the specific technical circumstances. In terms of this conceptual dimension, the technical conditions of Affective Computing also include the scientific foundations of emotional behavior. In her seminal work, Ruth Leys showed how the field of emotion research changed in the 20th century and gave rise to pragmatic but quite narrow concepts of basic emotions that are most prominently, but not exclusively, linked with the works of Silvan Tomkins, Carroll Izard, and Paul Ekman (Leys 2017). Together with his colleague Wallace Friesen, Ekman identified a set of seven-plus so-called basic emotions in research funded by the Advanced Research Projects Agency (ARPA) of the Department of Defense (Tuschling 2014), which, as an innate repertoire of emotional behavior, are said to form the basis for the entire spectrum of expressions, affects, moods, and feelings. These are the emotions of happiness, anger, sadness, fear, disgust, surprise and interest, contempt, and shame (Ekman and Friesen 1975; Ekman 1999). Yet studies by Lisa Feldman Barrett and her team showed from within the field of emotion research that there is insufficient empirical evidence for a reliable and specific connection between particular facial expressions and particular emotion types, and that the general assumption of a small set of “universal” expressions of emotion does not correspond to individual emotional experience (Barrett et al. 2019; Barrett 2006). In critically assessing approaches to Affective Computing and Emotion AI, various studies have examined, with varying emphases, the affinities and interactions between behavioral research, particularly in the behaviorist tradition, physiology, and emotion psychology. Works in the history of science, philosophy, and media research have pointed out the problematic epistemological claims (Weber-Guskar 2024; Waelen 2024; Misselhorn 2021) as well as the problematic origins and media biases (Crawford 2021; Tuschling 2020; Angerer 2018; Leys 2017; Weigel 2012) of Affective Computing.
Above, we presented Rosalind Picard’s pragmatic definition of Affective Computing, according to which it refers to forms of computing that relate to emotions, originate from them, or specifically influence them. Here, we will use a more detailed definition, particularly from the perspective of the humanities and cultural studies: Affective Computing refers to types of data transmission and processing that use physiological parameters and measurements and relate them, via pattern recognition, to affects and emotions in order to detect, model, simulate, or stimulate them.
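To make the steps named in this definition tangible, the following minimal sketch runs through the pipeline it describes: physiological measurements are reduced to features, and pattern recognition relates those features to affect categories. All thresholds, labels, and data are illustrative assumptions, not an actual Affective Computing system.

```python
# Minimal sketch of the pipeline in the definition above: raw sensor
# streams -> summary features -> pattern recognition -> affect label.
# Thresholds and values are invented for illustration only.
from statistics import mean, stdev

def extract_features(heart_rate: list[float], skin_conductance: list[float]) -> dict:
    """Reduce raw physiological measurements to simple summary features."""
    return {
        "hr_mean": mean(heart_rate),
        "hr_var": stdev(heart_rate),
        "sc_mean": mean(skin_conductance),
    }

def detect_affect(features: dict) -> str:
    """Toy 'pattern recognition': map the feature space onto coarse affect labels."""
    aroused = features["hr_mean"] > 90 or features["sc_mean"] > 5.0
    return "high arousal" if aroused else "low arousal"

features = extract_features([88, 95, 102, 110], [4.2, 5.1, 6.0, 6.4])
print(detect_affect(features))  # "high arousal"
```

Real systems replace the hand-set thresholds with trained classifiers, but the structure of the operation, abstracting from an individual stream of bodily signals to a generic category, is the same, which is precisely what the conceptual critique discussed below targets.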
Clarifying the normative issues surrounding Affective Computing involves, among other things, assessing whether this technology adequately represents emotions and how exactly it processes emotion-related data. First and foremost, however, it means asking what is referred to as affect and emotion in Affective Computing. Picard wrote that she would use “affect” and “emotion” interchangeably (Picard 1995, 5). But in philosophy, psychology, and media studies, there are numerous distinctions between these terms (Scarantino 2024; Slaby et al. 2019; Angerer et al. 2014). The discourse on empathic technology and on the reading, recognition, or even acknowledgment of emotions therefore already becomes the subject of, and part of, this conceptual work. Otherwise, ubiquitous, ill-considered references to affects, emotions, and affect politics obscure the different approaches and assumptions.
It is striking what a powerful role linguistic framing plays in talk about affect and emotion. The terms affect and emotion serve as semantic lubricants, rhetorical tools, and codes used to evoke certain positive or negative associations.
Thus, almost all variants of Affective Computing convey in their external representations the impression that human-computer interaction is no longer a matter of cold, distant technical appropriations of the living environment. Rather, Affective Computing suggests that it can make human-computer interaction more “humane” by digitally modeling the individual, their individual sensory experiences, and thus their individual access to the world (Hudlicka et al. 1999). This tendency to describe computers as emotional beings is not only an attempt by the industry to prevent technophobic rejection of computer-assisted applications in the private life world; it is also categorically problematic, since Affective Computing models have to use various forms of pattern recognition and therefore abstract from the individual being, even if they were trained with personal data.
In parallel, empirical emotion research has undergone a major media shift over the past twenty-five years, with the result that image, sound, text, and haptic stimuli can now be made available for experimental research in the same way as data for training large artificial neural networks, particularly in the field of computer vision (Tuschling 2022; Crawford and Paglen 2019): standardized image databases are created from internet sources, e.g., collections from Flickr or other open photo-sharing services, or have been compiled through web crawling (Mollahosseini et al. 2017). Furthermore, the testing and rating of stimuli in experimental settings has moved beyond the confines of the local laboratory and is now often carried out in behavioral research via microtask platforms such as Amazon Mechanical Turk, in order to expand the pool of test subjects and reduce research costs (Mollahosseini et al. 2017; Kutsuzawa et al. 2022). The most important shift, however, is the merging of Affective Computing (AC) and artificial intelligence (AI) in many forms and ways.
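How such crowdsourced ratings become training data can be illustrated with a small example: each image receives several annotations, which are then aggregated, for instance by majority vote, into a single label per image. The sketch below is a minimal illustration under that assumption; the file names and annotations are invented.

```python
# Minimal sketch of aggregating crowdsourced emotion annotations (e.g.,
# collected via a microtask platform) into one training label per image.
# Majority voting is one common, simple strategy; the data are invented.
from collections import Counter

annotations = {
    "img_001.jpg": ["happiness", "happiness", "surprise"],
    "img_002.jpg": ["anger", "disgust", "anger"],
}

def majority_label(labels: list[str]) -> str:
    """Return the most frequent label; ties resolve by first occurrence."""
    return Counter(labels).most_common(1)[0][0]

dataset = {img: majority_label(labels) for img, labels in annotations.items()}
print(dataset)  # {'img_001.jpg': 'happiness', 'img_002.jpg': 'anger'}
```

Note what the aggregation step does conceptually: disagreement among annotators, which Barrett’s critique treats as evidence against universal expression-emotion mappings, is flattened into a single authoritative label before training even begins.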
The focus of previous Affective Computing projects has mainly been on emotions in various cases of human-computer interaction. Today, Affective Computing is a complex, highly connected sensor-based technology embedded in many technologies and environments such as smart cars, AI-based robotics systems, surveillance systems, advertising research, social media and many others (MIT Affective Computing Group 2025).
At this moment, though, when Affective Computing is increasingly merged with AI, works are appearing on the possible feelings and experiences of AI systems, and therewith on their rights and responsibilities. Most of these works are highly speculative, but they are written in a time of transition in which few researchers dare to rule out possible future developments, because so much that seemed impossible not long ago is already possible.
3. Affective Computing and the Question of Consciousness
Unlike artificial intelligence in general, Affective Computing has not yet directly aimed at the strongest version of itself, namely the development of machines with conscious affective states and emotional experiences (Picard 1997, 60 f.; Toda 1962; see also Rafael Calvo in this issue). When it has focused on the generation of emotions, this has meant modeling certain functional features of emotions, such as their connection to actions and evaluations (Ortony et al. 2022), but it has not aimed at what is considered the core of genuine (as opposed to artificial) human and animal emotions, namely the felt quality of emotions, which is a conscious experience.
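What “modeling functional features of emotions” can mean in practice is illustrated by the following minimal sketch, loosely in the spirit of appraisal-based approaches such as that of Ortony and colleagues: an event is evaluated along appraisal dimensions, the evaluation is mapped to an emotion label, and the label biases subsequent action. The dimensions, rules, and labels here are drastic simplifications for illustration, not the actual model.

```python
# Minimal sketch of a functional (appraisal-style) emotion model: events
# are evaluated, evaluations yield emotion labels, labels bias behavior.
# No claim of felt experience is involved; rules are illustrative only.
from dataclasses import dataclass

@dataclass
class Appraisal:
    desirability: float  # -1.0 (very undesirable) .. 1.0 (very desirable)
    likelihood: float    #  0.0 (impossible) .. 1.0 (certain / has occurred)

def appraise(event: Appraisal) -> str:
    """Map an appraisal to a functional emotion label."""
    if event.likelihood < 1.0:  # prospective event
        return "hope" if event.desirability > 0 else "fear"
    return "joy" if event.desirability > 0 else "distress"

def act(emotion: str) -> str:
    """Emotions as action tendencies: negative labels trigger caution."""
    return "proceed cautiously" if emotion in ("fear", "distress") else "continue"

e = appraise(Appraisal(desirability=-0.8, likelihood=0.4))
print(e, "->", act(e))  # fear -> proceed cautiously
```

The sketch makes the conceptual point of this section explicit: everything here is a mapping between evaluations and behavior, and nothing in it addresses the felt quality of an emotion.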
At the end of the 19th century, a controversy arose over the scientific and technical representability of individual experience and feeling, a controversy primarily associated with Emil Du Bois-Reymond, the renowned physiologist and natural scientist. The controversy, also known as the Ignoramus et Ignorabimus dispute, dates back to a lengthy public lecture he delivered in 1872. Du Bois-Reymond defined two boundaries of modern science that are still being discussed today (Du Bois-Reymond 1912). One limit to the knowledge of nature is the origin of matter and force; the other is consciousness and feeling. This second limit, that of consciousness and feeling, is gaining new relevance in view of the advances in artificial intelligence and the debate about sentient large language models (Chalmers 2024; 2023). Du Bois-Reymond mentions not only cognition and conscious perception as characteristics of consciousness, but also feeling (as do Antonio Damasio and others today). Against the radical physicalism of his time, which in his eyes wanted to declare consciousness and a sense of self to be epiphenomena of purely cognitive brain activity, he states:
“What conceivable connection exists between certain movements of certain atoms in my brain on the one hand, and on the other hand the facts that are original for me, cannot be further defined, cannot be denied: I feel pain, feel pleasure; I taste sweet, smell the scent of roses, hear the sound of an organ, see red, ...” (Du Bois-Reymond 1912, 458).
It is precisely this limit to the knowledge of nature, in the form of consciousness, feeling, and perception of the self, that modern Affective Computing and Emotion AI seem, at first sight, to be overcoming or at least making surmountable.
Here, however, Du Bois-Reymond’s speech serves less to pose the general question of technical simulatability and of the intentional or systematic limits of science and artificial intelligence, a question that has been linked to his scientific imperative to acknowledge these boundaries with the pledge Ignoramus et Ignorabimus (we do not know and will not know) and that has been discussed controversially ever since (Bayertz 2007; Wahsner 2007; Bieri 1994; Nagel 1974; Du Bois-Reymond 1912, 464). Rather, the exact wording of Du Bois-Reymond’s speech reveals general basic assumptions about feeling, emotion, and sensation that remain virulent in the discourses on Affective Computing and Emotion AI. Du Bois-Reymond declares pain and pleasure, taste, smell, sound, and color to be inescapable, unconditionally given presuppositions of individual perception and experience that support our concepts of individuality and self (Du Bois-Reymond 1912, 458). The physiologist links emotion, feeling, and perception in an almost romanticized way with an immediate, analog experience, as we would say today.
Du Bois-Reymond’s framing of emotion as scientifically incomprehensible, and therefore technically irreproducible, is, however, what makes it possible to differentiate the various normative conceptual assessments of Affective Computing and Emotion AI: Are emotion and affect unquantifiable, non-discretizable, and therefore non-computable? Or do Affective Computing and Emotion AI emulate sufficiently well what emotion and affect are? Or, a perspective pursued in some of the papers presented here, do we need to define more precisely what modern technology does and how it emerged before we can evaluate it?
Affective Computing has often been interpreted in the humanities and cultural studies as a possible overall vision of digital culture. From this macroperspective, criticism often focuses on large-scale normative questions about technical emotionality or empathy as a whole, about manipulation, deception, and surveillance. While these questions remain important, the impression nevertheless arises that the actual significance of Affective Computing for computer development is being missed. While it cannot be ruled out that one totalizing killer application, updating all dystopias and deception scenarios, will still be developed after years of latency, the impression is rather that Affective Computing will not be THE one formation but, in a manner not dissimilar to AI, and above all together with AI, has developed into an essential particle of the texture of digital culture (for a definition of the digital, see Tuschling 2025). It is not the grand scenario of control and general manipulation by Affective Computing that has become reality, but rather the small yet all the more solid integration of users into their digital environments and obligations.
Affective Computing has not only expanded as a mature approach. It has also become woven into platform culture and, not least, into AI, without simply being replaced by it. After all, Affective Computing has been combined with artificial neural networks and, in short, has merged with AI discursively, methodologically, and technically (Zhang et al. 2025). Side strands of Affective Computing research include computer vision and certain culturally significant changes in expressing emotions in verbal, written, pictorial, and other forms (Kutsuzawa et al. 2022). In general, Affective Computing has moved into the center of attention as a not insignificant building block of AI (Crawford 2021; see Campolo in this issue).
The convergence of Affective Computing and AI raises questions about the normative implications of these technologies with new acuity. This special issue responds to this situation by bringing together contributions that examine why and how Affective Computing should be classified, evaluated, and problematized from various disciplinary perspectives.