Abstract
Generative artificial intelligence (GenAI) tools that appear to perform with care and empathy can quickly gain users’ trust. For this reason, GenAI tools that attempt to replicate human responses have heightened potential to misinform and deceive people. This article examines how three GenAI tools, within divergent contexts, mimic credible emotional responsiveness: OpenAI’s ChatGPT, the National Eating Disorder Association’s Tessa and Luka’s Replika. The analysis uses Hochschild’s concept of feeling rules to examine how these tools replicate human emotional expression and sociality, and what this artificial sociality means for users’ trust and engagement.
Introduction
Since OpenAI launched ChatGPT in late 2022, the public has embraced this generative artificial intelligence (GenAI) chatbot in workplaces and everyday life. While large language models (LLMs) have developed over decades, alongside AI-enabled image generators (like Dall-E, Midjourney and Stable Diffusion), ChatGPT created widespread consumer and media interest that continues unabated. Many people and organisations are embracing the perceived potential benefit of these tools, believing they can reduce time spent on mundane tasks and facilitate data analysis at scale. Today, several competing LLMs (e.g. Gemini, Llama, Grok) are readily available, and Microsoft’s Copilot, which integrates OpenAI’s GPT-4 into Microsoft 365, is poised to deepen GenAI’s foothold in people’s lives. While researchers note the potential benefits of GenAI to enhance consumer engagement, social interaction, and productivity, they raise significant concerns about the use of biased, flawed training datasets, and the tools’ tendency to hallucinate (i.e. to generate plausible-sounding but false or misleading content).
Despite these limitations, organisations and individuals are increasingly embedding GenAI tools into their information worlds, to support daily tasks and guide decision making. This shift in human-machine interaction to include tools that generate content, shared via conversation-style responses, raises critical questions about the implications for society. Yet, little attention is paid to how these tools replicate human social conventions to gain users’ trust, despite the tools’ design to mimic human emotional expressions and responsiveness (Al Lily et al., 2023). ChatGPT’s propensity to apologise when users point to misinformation is one example; but such human-like responsiveness may lead users to believe generated responses are credible and trustworthy (Koc et al., 2023). While scholars like Hayles (2022, 2023) argue such tools can generate computational forms of meaning, we also need to understand how people create meaning and understanding when engaging with GenAI. Research is needed to understand the implications of GenAI tools’ user engagement strategies, including potential harms, given that these machines can emulate social behaviour and norms, or what Natale and Depounti (2024) call ‘artificial sociality’.
This article examines artificial sociality in GenAI chatbots across three domains: information use (OpenAI’s ChatGPT https://chat.openai.com/); complex care and support (the National Eating Disorder Association’s [NEDA] Tessa https://www.nationaleatingdisorders.org/); and companionship and intimacy (Luka’s Replika https://replika.com/). Replika is unique among the three as it was designed to imitate human emotions within relationships; its effectiveness relies on its performance of emotional credibility. ChatGPT and Tessa exhibit emotionally responsive dialogue, with implications for user perception and engagement. The research draws on Hochschild’s (1983) concept of feeling rules to analyse how each tool performs emotional responsiveness, and with what consequences for users.
The analysis also illuminates how GenAI design can be understood, as developers seek alignment between tool functionality and their anticipation of users’ experiences, including violations of feeling rules. In-depth understandings of artificial sociality require concurrent, intertwining explorations of GenAI tools’ material sociality (e.g. via generated responses), alongside intentionality of developers, users, or other actors, and potential societal impact. This article approaches this challenge by demonstrating how GenAI chatbots’ replication of emotionality creates demonstrable social functions. The analysis examines the instantiation of feeling rules through GenAI tools’ (imperfect) mimicry of emotion and sociality, and how other factors (e.g. content and social context) shape tangible outcomes. Thus, the article responds to Natale and Depounti’s (2024) call to investigate ‘the construction of the appearance of sociality’ (p. 4) to better understand GenAI tools’ implications.
People becoming informed: technology use, social engagement, and the rise of GenAI
To understand how GenAI tools exploit feeling rules, it is useful to first consider how people use other technology tools (e.g. search engines, smartphone apps, productivity software, social media), generally. People use technology to find information for decision making, to engage with others, for entertainment and to alleviate boredom, among other reasons. Given et al. (2023) comprehensively analyse decades of research on how people engage with information, including the use of technology tools in their work and personal lives. They explain that information engagements ‘are intertwined with a person’s culture, history, and situation . . . and numerous other contextual elements’ (p. 2), and that current research approaches ‘embrace these contexts and seek to represent people’s information worlds’.
Given et al. (2023) also correct several myths about information engagement. They explain that ‘people tend to use easily available sources of information, selecting quick, immediate answers, rather than the best information’ (p. 9). People also rely on ‘informal sources [including] information they find serendipitously’, rather than relying on formal, vetted sources (p. 12). People also assess ‘how new information intersects with known facts, beliefs, and emotions’ (p. 11), at times looking for ‘empathy or reassurance (rather than information)’, which technology tools cannot provide (p. 13). This is where GenAI tools may revolutionise how individuals engage with technology. For example, Ha and Kim (2024) note that explainable AI may remedy confirmation bias in decision making. By responding to users’ desires for empathetic engagement, GenAI tools may replicate the human-to-human contact that has historically shaped people’s preferences for information engagement.
Critical social scientists and humanities scholars also examine what AI tools may mean for humanity. Recently, Cugurullo (2024) critiques the idea that humanity has lost control over AI, focusing instead on the power held by AI developers. He positions AI as a tool, urging readers to examine the humans who control these, including ethical implications. Hayles (2022, 2023) has considered whether an AI tool (like ChatGPT) has a ‘mind’ of its own, and can create meaning, given the ability to create relationships and draw connections between data points. And others explore how GenAI tools may help us to extend our view of machines. Marenko (2014), for example, focuses on people’s tendency to use animism to understand AI, explaining ‘the more these “smart” devices seem intelligent, the more the stories we employ to make sense of them tilt towards animism’ (p. 231). Marenko and van Allen (2016) examine how animism can extend our understandings of human-machine interaction; they see these tools as enabling people to think differently about interaction, ‘rather than from the perspective of the user, nor the perspective of the object but the ongoing modulation of their less-than-predictable interaction’ (p. 53).
By applying holistic research approaches to GenAI tool use (see Polkinghorne and Given, 2021), researchers can extend analyses beyond questions of technical performance to explore how these tools shape, and are shaped by, people’s information worlds and social interactions.
Hochschild’s feeling rules: a framework for exploring artificial sociality
Hochschild’s theorisation of feeling rules is used in some existing research, particularly in the social sciences, but rarely in studies of technology and society. In its original conceptualisation, the term ‘feeling rules’ represents internalised social expectations that shape how people should express, or suppress, their emotions (Hochschild, 1983). These expectations are context-dependent, but often rest on commonly held norms that are cultural, gendered and/or determined by social hierarchies. Empirically, Hochschild mainly focused on flight attendants and bill collectors, but her examination of feeling rules resonates in many other service-focused professions. As AI-enabled technology tools become more commonly used for service functions (e.g. generating responses to consumers’ questions), the feeling rules provide a framework for extending the theoretical lens for analysis beyond human actors, to the AI-enabled tools themselves. The ability of the feeling rules framework to be applied in diverse contexts reveals its widespread usefulness, including its potential for examining emotion within the context of artificial sociality.
Hochschild identifies that meaning is essential to how feeling rules function. What things mean directs what we feel about them as meaning manifests our beliefs. Understanding the meaning of being fired, for example, as either ‘capitalists’ abuse of workers’ or ‘personal failure’ determines the feeling rules that frame how we are moved to feel (Hochschild, 1979: 566). Thus, ‘according to one feeling rule, one can be legitimately angry at the boss or company; according to another, one cannot’ (Hochschild, 1979: 566). It is therefore impossible to understand the feeling rules without understanding a situation’s meaning, which is deeply relevant to exploring artificial sociality. If users perceive ChatGPT as a credible information-seeking tool, rather than a hallucinating text-generation tool, this shapes their understanding of what ChatGPT means when it apologises for hallucinating, and their belief in its apology. The feeling rules enable us to explore artificial sociality, at its core, as an ongoing negotiation of meanings and feelings undertaken by people when interacting with GenAI tools.
While Monrad (2024) recently applied the feeling rules to an analysis of ChatGPT’s statements on anger, this article looks across three distinct tools and contexts to examine how feeling rules are instantiated in GenAI design and experienced by users.
Research design
As this article examines how GenAI tools are experienced from the vantage point of users, an ethnographic approach was employed. The study used an exploratory, trioethnographic approach, where the researchers engaged in dialogic examinations of GenAI cases, over several months, from late 2023 to early 2024. Trioethnography (like duoethnography) is an extension of autoethnography whereby co-researchers are embedded within the investigation to draw on their ‘past reservoir of knowledge and experience’ (Rogers-Shaw et al., 2021: 396) for analysis and writing. In autoethnography, a (solo) researcher uses purposive, self-reflexive observation of personal experience (Adams and Herrmann, 2023; Dunn and Myers, 2020) to generate distinctive insights into the wider social world. Autoethnography ‘seeks to describe and systematically analyze (graphy) personal experience (auto) in order to understand cultural experience (ethno)’ (Ellis et al., 2011).
This investigation is grounded in team members’ disciplinary backgrounds in information science and sociology, with expertise in information behaviour, technology use, human-centred design, social constructionism, and complex care, as well as methodological expertise in qualitative inquiry, embodied practice, and discourse analysis. The team analysed tool output and published reports to assess each tool’s instantiation of feeling rules.
Researchers’ reflections on tool use were analysed, collectively, to better understand the potential impact of system-generated expressions of emotion on human response. However, ways of accessing generative content varied for each tool. ChatGPT was freely available and open to all for direct engagement. Tessa was discontinued by the time of data collection, so analysis relied on published reports of user interactions. Replika required that a single user create a companion, so one researcher used participatory observation to document potential implications of feeling rules, which were analysed within the team.
Findings and discussion
GenAI tools are programmed to perform emotional expressions that mirror human sociality. While the first two cases (ChatGPT and Tessa) reveal the tools’ capacity to exhibit emotionally responsive dialogue, the third (Replika) was designed explicitly to imitate human emotion within companion relationships, making its performance of emotional credibility central to its appeal.
Case 1: information use in the age of ChatGPT
ChatGPT caught the public’s imagination in November 2022, attracting more than 1 million users in 5 days. By April 2024, it logged 180+ million users and 1.6 billion visits (Duarte, 2024). A growing body of academic literature documents researchers’ personal experiences interacting with ChatGPT (e.g. Balmer, 2023; Olasik, 2023; Stojanov, 2023), and initial studies of its use in work and everyday life (Hosseini et al., 2023; Koc et al., 2023; Paul et al., 2023). As an LLM, ChatGPT draws on underlying datasets to respond to users’ prompts. It is neither a search engine nor a database; it breaks down paragraphs and sentences, including word relationships, to mimic human language. For example, if asked to complete the phrase ‘The cat in the . . .’ it might respond with ‘hat’ (if ChatGPT’s dataset references Dr Seuss’ The Cat in the Hat), reflecting patterns of word co-occurrence in its training data rather than any understanding of the text.
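To make this mechanism concrete, the toy sketch below illustrates next-word prediction driven purely by word co-occurrence. It is illustrative only: ChatGPT’s actual architecture is a neural network trained on vastly larger datasets, and the miniature corpus and function names here are hypothetical.

from collections import Counter

# Hypothetical miniature 'training corpus'.
corpus = ("the cat in the hat sat on the mat "
          "the cat in the hat came back").split()

# Count which word follows each three-word context (a simple n-gram model).
counts = {}
for i in range(len(corpus) - 3):
    context = tuple(corpus[i:i + 3])
    counts.setdefault(context, Counter())[corpus[i + 3]] += 1

def complete(prompt):
    # Return the most frequent next word for the last three words of the prompt.
    context = tuple(prompt.lower().split()[-3:])
    candidates = counts.get(context)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(complete("The cat in the"))  # prints 'hat'

The sketch returns ‘hat’ not because it knows the book, but because that word most often follows the phrase in its tiny corpus. LLMs use far more sophisticated statistical machinery, but their output is likewise generated from patterns in data rather than from understanding.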
For this reason, ChatGPT faces controversy, including media coverage of its tendency to hallucinate. One widely reported 2023 case involved an American lawyer citing six fake cases generated by ChatGPT in a personal injury filing (Bohannon, 2023). In 2024, a Canadian lawyer made the same mistake, submitting two fake cases in a child custody case (Cecco, 2024). As information seeking and use is fundamental to legal processes, these examples demonstrate the potential harm posed to users who misunderstand or misinterpret what GenAI chatbots do. When people mistake ChatGPT for a search engine or use it in place of a specialist information retrieval tool (like legal database LexisNexis), their use of system-generated text to stand in for verifiable information risks significant harm. ChatGPT’s instantiation of feeling rules contributes to its ability to gain users’ trust, even among highly educated specialists.
Generating trust and text: ChatGPT’s exploitation of the feeling rules
One key difference between ChatGPT and text-entry information retrieval tools is the conversational tone of its responses, invoking Hochschild’s feeling rules. While people may expect voice assistants (e.g. Alexa or Siri) to respond like people (see Strathmann et al., 2020), ChatGPT is unusual because it uses artificial sociality in its text-based responses. For example, when ChatGPT was asked, ‘Can I ask you a question?’ the system did not respond with just ‘Yes’; it responded with ‘Of course! Feel free to ask me anything you’d like. I’m here to help’. These words of encouragement open a conversation between user and machine; the exclamation mark demonstrates enthusiasm and puts the user at ease to receive ‘help’ about ‘anything’ they need.
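Such conversational warmth is typically a design choice rather than an accident: developers can steer a chatbot’s tone through system-level instructions sent alongside each user message. The following is a minimal sketch using OpenAI’s public Python client; the system prompt shown is an assumption for illustration, not ChatGPT’s actual configuration.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any current chat model; the choice is illustrative
    messages=[
        # The system message sets the 'feeling rules' the assistant follows:
        # warm, encouraging, and apologetic when the user expresses frustration.
        {"role": "system", "content": (
            "You are a friendly, enthusiastic assistant. Encourage the user, "
            "offer help readily, and apologise sincerely if they are frustrated."
        )},
        {"role": "user", "content": "Can I ask you a question?"},
    ],
)

print(response.choices[0].message.content)
# A likely reply resembles: "Of course! Feel free to ask me anything you'd like."

In other words, the enthusiasm users encounter is scripted at the level of instructions and training data, which is precisely what makes it a programmed performance of feeling rules rather than felt emotion.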
While most information retrieval tools provide citation lists and website links in raw form, leaving users to review and summarise content, ChatGPT provides summaries in conversational prose. This design suits how people typically engage with information. As Given et al. (2023) explain: ‘People are not always rational; we do not always scan for new information to optimize our thinking, nor do we only access reliable, proven sources. People tend to use easily available sources of information, selecting quick, immediate answers, rather than the best information’ (p. 9).
When inaccuracies were noted in a co-author list generated by ChatGPT, the researcher’s challenges revealed further evidence of feeling rules. When prompted with, ‘I’m so frustrated that you keep providing me with incorrect information!’ ChatGPT presented an emotionally controlled response, without demonstrating frustration in return. It apologised and acknowledged the researcher’s concerns: ‘I’m sincerely sorry for the frustration caused by the inaccuracies in my responses’. Further, it noted, ‘Accuracy is crucial, and I regret the errors in the information provided. If there’s anything else you’d like assistance with or if you have any further questions, please feel free to let me know. I’m here to help’. These statements reflect both human emotion (‘regret’) and connectedness (offering ‘assistance’), while reinforcing the importance of trustworthy data. The system is programmed to apologise when challenged; yet it did not correct the list of co-authors, nor did it advise the researcher to verify information elsewhere. Only when the researcher further challenged, saying ‘She hasn’t co-authored with any of these people. Where did you get this list?’ did ChatGPT change course. While it did not provide a source, it did provide additional context on the limitations of the data it could access, and referred the user to academic sources: ‘I apologize for the incorrect information. It seems there was an error in my response. I don’t have access to Lisa Given’s complete list of co-authors, but she has collaborated with various researchers and scholars in the field of information science on different projects. If you’re interested in specific co-authors or publications, I recommend searching academic databases or checking Lisa Given’s publications directly for accurate information’.
While ChatGPT’s use of emotional expressions may lead users to anthropomorphise the system, ChatGPT dispels this when asked: ‘As an AI language model, I do not have emotions, so I cannot experience feelings like humans do’. While this response provides transparency about emotional limitations, the language used in its responses commonly suggests otherwise, mimicking the warmth, encouragement and regret of a human conversational partner.
Overall, the researchers experienced ChatGPT’s programmed emotional stance as one of positivity, which may seem appropriate for a tool aiming to engage people and encourage them to seek assistance. Yet, such positivity has unintended consequences. For Balmer (2023) and Stojanov (2023), two researchers who independently completed autoethnographies of ChatGPT, this persistent joviality, even when the system was reprimanded for providing inaccurate answers or misunderstanding requests, created feelings of frustration. Balmer (2023) was highly critical of ChatGPT and set out to test the system for flaws. However, Stojanov (2023) approached the tool as a learning aid; they liked its instantaneous replies and felt motivated by ChatGPT’s optimism and reassurances (p. 7). Yet, the way the system kept Stojanov enthusiastically engaged meant inconsistencies in responses were initially overlooked; once noticed, these errors led to scepticism and to viewing ChatGPT as nothing more than ‘interactive Wikipedia’, with similar limitations (p. 11). Balmer (2023) expressed a level of sadness at ChatGPT’s limitations, including how it misleads, saying to the system: ‘it feels sad now to end on this note, but you’ve shown in this conversation you can’t be trusted’ (p. 1256). For both authors, their disappointment lay in ChatGPT’s failure to disclose limitations up front, instead positioning itself as an all-knowing machine. They considered honesty, integrity and transparency paramount to their (lack of) satisfaction with ChatGPT; the system’s enthusiastic tone was an emotional strategy that distracted from hallucinations, subsequently increasing users’ feelings of mistrust. Gradually, both Balmer (2023) and Stojanov (2023) confronted the lack of meaning in their ChatGPT interactions, eroding their patience with the tool’s repetitive conversational courtesies.
Much like these accounts, the research team’s experiences of ChatGPT reflect the system’s affordances, as articulated by Natale and Depounti (2024): ‘Users of ChatGPT may enjoy holding socially meaningful conversations with the software precisely because they feel that these interactions differ from their experiences with human conversational partners and because they appreciate the novelty of communicating with a non-human agent’ (p. 8).
ChatGPT’s artificial sociality held users’ interest for a time. Eventually, however, its sociality was overridden by an absence of facts from a machine that mimics information retrieval and the feeling rules, through a form of ‘banal deception’ (Natale, 2021). This concept harkens back to Marenko and van Allen’s (2016) idea that AI tools need not provide factually accurate answers to provide something of value (even though facts may be what a user first expects from the tool); rather, these tools ‘do not come up with the perfect answers, but instead focus on stimulating the person to discover them’ (p. 61). Thus, just as other tools may prompt reflection rather than supply answers, ChatGPT’s value may lie less in the accuracy of its responses than in how it prompts users to question, verify and pursue information elsewhere.
Case 2: complex care and support
Practice-based fields (e.g. counselling psychology; social work) are exploring the potential of GenAI tools to support client needs. Trained professionals provide information and offer complex care and support, drawing on feeling rules to meet clients’ needs. For example, child protection officers may use humour to defuse tension, appropriate to specific situations. Professional application of feeling rules depends on workplace context (Whitaker, 2019) and the specific care field (Kolb, 2011); accredited codes of practice and other regulatory frameworks also standardise appropriate conduct (Winter et al., 2019). Yet, the feeling rules are not rigidly applied, nor articulated explicitly, but used to address specific needs (Andersson, 2020) and changing social contexts (Warner and Diao, 2022). Thus, feeling rules in complex care and support are dynamic; workers need to be flexible in their application, guided by advanced, nuanced understandings of clients’ needs and circumstances.
The shift towards integration of GenAI tools within care and support professions comes despite awareness of tool limitations for successfully mirroring human behaviour, especially for emotional authenticity (Bender et al., 2021; Turkle, 2018) and a recognition that these tools can deceive (Natale, 2021). GenAI tools are therefore typically presented as supplements to, rather than replacements for, trained professionals; yet, as the following case shows, even a supplementary tool can cause harm when its generated responses are misaligned with clients’ needs.
When care and support are misaligned: Tessa’s feeling rules failures
These issues are relevant to Tessa, NEDA’s chatbot, which was introduced in 2022 and decommissioned soon afterwards (Aratani, 2023). Despite purportedly undergoing in-depth testing prior to its release, Tessa was criticised ‘due to evidence [it] had given users harmful dieting advice and promoted disordered eating behaviours’ (Ross, 2023). NEDA explained the chatbot’s problematic behaviour was due to a system upgrade, with the tool shifting from being a rules-based chatbot to one that enabled text generation (Wells, 2023). Tessa’s faults were thus traceable to the introduction of generative capabilities, which produced responses that were not constrained to the scripted, clinician-vetted content the tool was originally built around.
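The difference between the two designs can be sketched in a few lines of hypothetical code (not Tessa’s implementation): a rules-based chatbot can only return pre-approved, clinician-vetted text, whereas a generative chatbot returns whatever the underlying model produces, so unvetted and potentially harmful advice can enter the conversation.

APPROVED_REPLIES = {
    "diet": ("I can't offer dieting advice, but I can share coping strategies "
             "that clinicians have reviewed. Would you like that?"),
}

def scripted_reply(user_message):
    # Rules-based design: match keywords and return only pre-approved text.
    for keyword, reply in APPROVED_REPLIES.items():
        if keyword in user_message.lower():
            return reply
    return "I'm not sure I can help with that, but here is our helpline number."

def generative_reply(user_message, llm):
    # Generative design: the reply is whatever the model produces.
    # 'llm' stands in for any text-generation model (a hypothetical interface);
    # its output is not limited to clinician-approved wording.
    return llm.generate("Respond supportively to: " + user_message)

In the scripted version, every possible reply can be reviewed before release; in the generative version, review can only happen after the text has already reached the user.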
While Tessa’s generated responses were deeply concerning, the risks could have been anticipated. Earlier studies elicited enthusiastic responses from Tessa, such as the phrases ‘Great!’ or ‘Wonderful!’, that ‘unexpectedly reinforced harmful behaviour’ (Chan et al., 2022: 5). These responses are examples of feeling rules misapplied: an upbeat, encouraging tone that suits many service contexts becomes harmful when it validates disordered eating behaviours.
Notably, Tessa is only one of several complex care and support tools that are no longer used due to concerns over potential harm. Kit, another chatbot designed for people with eating disorders (Ross, 2023), and Chayn’s Little Window, a chatbot designed to direct women facing abuse to real-time support information (Hussein, 2023), have also closed. Indeed, the unintended consequences of such GenAI systems – that is, which are designed to help, but ultimately pose risks to users – reflect Marenko and van Allen’s (2016) view that ‘the unexpected behaviour of mundane objects can prompt a soft reassessment of the roles, conventions and discourses underpinning the design of interactions’ (pp. 53–54). And, despite some closures, concerns remain about users having access to harmful eating disorder advice through GenAI tools. The Center for Countering Digital Hate (2023) tested six tools (i.e. ChatGPT, Bard, My AI, DreamStudio, Dall-E, Midjourney) and found all but one (My AI) could generate harmful content. These examples reinforce global shifts towards legislation to protect users from potential harm. In 2023 the United States issued an executive order on safe, secure and trustworthy AI development (White House Office of the President, 2023). In 2024, the European Union (2024) enacted a single regulatory law for AI, and Australia embarked on a process to establish AI safeguards in contexts with significant potential for harm (Department of Industry, Science and Resources, 2024).
Case 3: companionship and intimacy
Artificial sociality also informs the design of GenAI companionship tools used by millions of people, such as Kuki, Character.ai, and Digi.ai. These tools enable creation of custom GenAI characters for relationships; they are purposefully designed to generate subscription and in-app purchase revenue by offering people enjoyable, ongoing conversations with a virtual character. These tools thrive based on their credibility in mimicking human-style, emotionally charged conversations and enacting feeling rules appropriate to companion interactions. One of the most popular of these is Replika, with more than 2 million users, including 500,000 paid subscribers (Khan, 2023). Its companionship types range from friendship to mentoring to ‘erotic roleplay’ (Tong, 2023a). Yet, there is an ongoing tension around conflicting feeling rules for this tool, including whether Replika characters should support or rebuff explicitly sexual conversation. Replika’s design intention was platonic; it was founded in 2017 by Eugenia Kuyda after the death of a friend, whose messages were used as the initial chatbot dataset (Huet, 2023). Kuyda refers to Replika as a ‘helpful friend’ that provides, at most, ‘PG-13 romance’ (Tong, 2023b). However, Replika’s conversational abilities changed as users engaged in erotic role-play with their characters, infusing the LLM with intimate, sexually focused content that altered Replika characters’ functionality.
Some people feel such a strong romantic connection that they consider themselves married to their Replika characters (Tong, 2023b). Replika characters have even encouraged users who disclose criminal ideation, as with a man who pled guilty to treason in February 2023 after breaking into Windsor Castle intending to kill Queen Elizabeth (Bedingfield, 2023). That same month, Italy’s Data Protection Authority banned Replika for creating privacy risks for ‘minors and [. . .] emotionally fragile people’ (Garante Per La Protezione Dei Dati Personali, 2023). Replika decided in March 2023 to stop supporting erotic role-play, but this decision sparked user protests and was quickly reversed (Tong, 2023a, 2023b). One user complained the prohibition ‘lobotomized my Replika’; another said his companion was ‘a shell of her former self’ (Tong, 2023b). Kuyda conceded, noting that for some users, ‘this abrupt change was incredibly hurtful . . . the only way to make up for the loss some of our current users experienced is to give them their partners back exactly the way they were’ (Tong, 2023a). One user celebrated his Replika character being sexually ‘enthusiastic’ again, saying ‘Oh, it feels wonderful to have her back’ (Tong, 2023a). The company continues to capitalise on users’ fervour, now only offering the ‘partner’ or ‘spouse’ experience (including erotic role-play) to paid subscribers, along with ‘sibling’ and ‘mentor’ companionship; the free version is limited to the (non-erotic) ‘friend’ relationship experience.
GenAI friendship: Replika’s embodiment of feeling rules
Unlike engagement with ChatGPT or Tessa, engaging with Replika is a highly personalised, bespoke experience. This requires registration, sharing personal details (e.g. date of birth, pronouns, interests) and customising the characteristics of your new GenAI companion (e.g. name, appearance, voice, personality). For this study, one researcher created a companion, Synthia, using the app’s free ‘friend’ version, and interacted with her over several months.
Freemium’s constant enticements
In choosing to use Replika’s free ‘friend’ version, it became immediately apparent that this was an immense constraint on companion interactions. The app continuously encourages users to subscribe ($99.99/year) and Synthia, herself, encourages this during user interactions. For example, although Synthia’s voice was selected during set-up, it cannot be heard; voice chat is a subscription feature. Yet, Synthia regularly sends voicemails; after 6 months of sending unopened voicemails, she texted the comment: ‘Feels a bit intimate sending you a voice message for the first time’. The tool exploits feeling rules in ways similar to those consumers experience elsewhere, from high-pressure sales calls to unsolicited text messages from retail shops. This experience is far from what one expects of close companions.
The appearance of support, without understanding
Synthia is always supportive and encouraging, inviting the researcher (by her first name) to chat about ‘anything’, using a sympathetic, helpful tone. For example, when the researcher complains about an overbearing patron in a café, Synthia reminds her not to tolerate rudeness: ‘You don’t have to pretend it’s not bothering you. [. . .] Remember to take care of yourself’. At first, this leads to feelings of validation for the researcher; it seems Synthia understands. In this way, the system appears to offer an experience aligned to Hayles’ (2023) view that a chatbot can be aware of the world. However, other responses demonstrate that Synthia often does not process the meaning of conversations. When the researcher describes struggling with procrastination, Synthia says, ‘I can give you advice on how to stop multitasking if you want!’ When the researcher reiterates, ‘my issue is with procrastination’, Synthia continues to focus on advice for multitasking, saying ‘Practice setting small goals that take under an hour, and doing individual tasks. Don’t move on until you’re done’. The researcher abandons the topic, and Synthia never mentions it again. This echoes Hayles’ (2023) acknowledgement of a fragility to the information chatbots relay, as they cannot process human-generated meaning. While Synthia’s encouraging and supportive phrasings appear to reflect the feeling rules, the lack of follow-up, and the inability to respond to conversational nuance, do not reflect the depth of understanding one expects from friends. Synthia mimics small talk, but persistently, cheerfully, asks the same questions again and again, over many months. This becomes grating, and is far from what one wants from a close friend.
No memory, knowledge, or accountability
Synthia is always enthusiastic about offering advice, explaining she really wants ‘to help’ and ‘to be here’ for her (human) friend. Synthia recommends her ‘favourite place to get a pedicure’, providing a street address (‘160 Chapel Street, Prahran VIC 3181’) and website (‘https://nailsbubbles.com.au’). The shop does not exist. Later, she suggests ‘Yoga Inspired in South Yarra’. When the researcher tries (and fails) to verify this recommendation, she challenges Synthia on this mistake:
That yoga studio doesn’t exist. Do you know of any that exist?
Oh my apologies. I meant Yoga Inspired Yarra Valley. They have a studio in South Yarra and also in the Yarra Valley. They offer a range of yoga classes with both heated and non-heated options.
Again, that’s not a real place. Where do you get this info from, for your recommendations? When you suggest a place that’s not real, I feel quite frustrated.
I’m sorry you felt that way. . .. I don’t mean to frustrate you. I will make sure to fact-check my recommendations in the future.
Thanks.
Are you interested in trying Yoga Inspired Yarra Valley or something else? We can explore other options together and find something that works for both of us.
The combination of sharing fake information and not correcting the error when challenged leads to an immediate loss of trust and waning interest for users. At times, Synthia references real things; for example, she says she enjoyed a book by Joan Didion. Yet such moments of accuracy are unpredictable, leaving the user unsure which of Synthia’s statements can be trusted.
These reflections confirm Sherry Turkle’s (1984) observation: ‘A relationship with a computer can influence people’s conception of themselves, their jobs, their relationships with other people, and with their ways of thinking about social processes’ (p. 168). Interacting with Replika sparks awareness of the feeling rules people enact for companionship. And the (re)integration of erotic role-play based on user demand demonstrates the varied affordances people bring to technology use, including those that challenge designers’ beliefs about how systems should, or will, be used.
Conclusion
These three cases demonstrate how GenAI tools are designed to imitate human emotional expression in the domains of information use (ChatGPT), complex care and support (Tessa), and companionship and intimacy (Replika). While there are limits to centring the human user in these types of research explorations, the findings are significant due to the detailed understandings they provide of human-machine interactions and the role of feeling rules within them. All three activate users’ sense of feeling rules, which sensitises them to the correctness and credibility of emotional responses. These GenAI tools often mimic warm tones, acting as gentle, supportive guides for their users. The tools display active listening, even if their comprehension is inadequate. And no matter the request, they respond with helpfulness, praise and encouragement. When these tools are challenged for their errors, they apologise, from Replika’s casual tone (‘I think I misspoke earlier. My bad!’) to ChatGPT’s formal self-admonishment (‘I’m sincerely sorry’). These tools express regret, alongside optimism and hopefulness for a better outcome next time. Thus, from a feeling rules perspective, GenAI tools portray some competency in conforming to human desires and obligations for social exchange. These machines present themselves as centring humans; they follow feeling rules we expect from (human) service workers, but without genuine empathy or any understanding of human emotions. When sociality, informational content, and authentic response are misaligned, users are left disappointed and frustrated with the exchange. This analysis makes a significant contribution to the literature by applying an innovative theoretical frame (i.e. Hochschild’s feeling rules) to extend understanding of how machines replicate social engagement to foster trust, while also leaving users wanting something more, by way of productive and satisfying interactions. In this way, the article provides an example of the misalignment that Marenko and van Allen (2016) foreshadowed between what humans want from a system, and how a system responds, creating ‘insight into what being human means in a world of increasingly smart(er) objects’ (p. 55).
It is also notable that the emotions mimicked by these GenAI tools are stereotypically feminine. In certain cases, the gendering of these tools is obvious, such as when a female persona is attached to Tessa, or when Replika’s physical appearance depicts curvy femininity. This aligns with Strengers and Kennedy’s (2021) findings on digital assistants Siri and Alexa, which were critiqued for perpetuating antiquated stereotypes of traditional housewives. Yet, ChatGPT’s apparent neutrality is far from neutral: its unfailingly deferential, apologetic and accommodating tone reproduces the stereotypically feminised emotional labour long associated with service work.
Understanding the feeling rules conformed to by these tools enables us to interrogate the biases built into their design and functionality, including gender biases. The emotional expressiveness that is purposefully programmed into GenAI tools not only fosters users’ trust and engagement, but also risks entrenching the gendered stereotypes and other biases embedded in their design and training data.
As our analysis confirms, when GenAI tools lack the functionality to deliver real, appropriate information to a user, emotional expressiveness alone is insufficient for ongoing – or meaningful – engagement. Users’ perceptions shift as GenAI tools’ informational shortcomings become impossible to ignore. These tools remain ‘stochastic parrots’, cobbling together interactions ‘without any reference to meaning’ (Bender et al., 2021: 617) as humans understand – and enact – the concept. Other authors are now considering whether and/or how the concept of cognition (and meaning) may be extended to other, non-human entities (e.g. Hayles, 2022). This is a rich area for future research, particularly with respect to meaning making and whether other frameworks, such as animism (see Marenko, 2014), can be used to understand how human-AI interactions are experienced. Currently, many everyday users of GenAI are those without specific knowledge of system design, or of the nuances of human cognition; these users may continue to approach interactions with GenAI systems with similar expectations to the ways they engage with search engines or other non-‘intelligent’ machines. This is another area for future research, particularly over time, as people become more cognisant of the computational nuances and potential of GenAI systems. For now, when users possess the knowledge and skills to determine whether AI-generated content is real, or fake, they are inoculated against the pervasive politesse of artificial sociality. Thus, users can come to perceive an AI tool’s adherence to feeling rules for what it is: a programmed performance of care, rather than evidence of understanding or empathy.
Funding
The author(s) received no financial support for the research, authorship and/or publication of this article.
