This paper examines how chatbot-mediated self-inquiry reflects and reproduces neoliberal discourses of emotional regulation and personal responsibility. The study analyses chatbot-mediated self-inquiry sampled from LMSYS-CHAT-1M and WildChat, two large datasets of human–chatbot conversations, to understand the kinds of social relations enacted. Drawing on Systemic Functional Linguistics (SFL), and specifically the tenor framework, the paper traces how chatbots manage interpersonal alignment. Focusing on tuning, a subsystem of tenor concerned with modulating interpersonal tone and risk, the findings reveal a consistent pattern of affiliative but non-committal alignment, in which chatbots render modalised support through lowered stakes, collectivised scope, and warmed spirit. These linguistic choices foster emotional reassurance while reframing structurally induced affect, such as burnout, rejection, or despair, as individualised challenges to be managed through personal resilience and self-regulation. By showing how chatbot discourse privileges normative adaptation over structural critique, the study contributes to broader debates about the social implications of AI-mediated communication and the ethical design of conversational technologies.
As conversational AI becomes a routine part of how people reflect on their experiences and seek support, it plays an increasingly active role in shaping how individuals construct themselves as social subjects, through conversations that mediate introspection and emotional disclosure. This shift invites critical questions about the normative and ideological frameworks embedded in AI-mediated self-inquiry. While early work focused on technical performance and factual accuracy (Ji et al., 2023), more recent studies explore how chatbots co-construct interpersonal meaning with users. People increasingly engage in self-disclosure with chatbots, especially when contextual cues, perceived empathy, or affiliative tone are present (Liu et al., 2023; Skjuve et al., 2019). These dimensions encourage users to respond in ways that mirror human–human interaction (Adam et al., 2021) and are especially important factors in domains like mental health, education, and elder care where interpersonal alignment is critical (Yuan et al., 2025). Relational conditions like loneliness also appear to influence engagement, especially when chatbots foster para-social interactions that simulate companionship and emotional support (Adam et al., 2021). Even in task-oriented contexts, users have been shown to apply human conversational norms such as politeness and rapport-building, suggesting they can perceive chatbots as social participants (Dippold, 2024).
While chatbots are increasingly treated as social participants, their conversational dynamics remain underexamined in terms of how they linguistically construct interpersonal meaning and reproduce dominant ideologies. Linguistic research into interpersonal meaning in human–chatbot exchanges remains limited, partly due to the difficulty of accessing and processing datasets of naturally occurring interactions. These datasets, including LMSYS-CHAT-1M and WildChat (Zhao et al., 2024; Zheng et al., 2023) which are used in the present study, are typically collected within computational domains, where they are intended for performance benchmarking, model evaluation, and safety testing, rather than for discourse analysis. Some discourse-analytic studies, however, have begun to interrogate chatbot conversational dynamics using researcher-designed controlled prompt-response interactions. Breazu and Katsos (2024) show how ChatGPT-4 manages interpersonal alignment by avoiding inflammatory language and demonstrating racial sensitivity, noting that this can neutralise politically charged content. Van Poucke (2024) similarly finds that ChatGPT’s linguistic choices reinforce dominant ideologies, particularly in educational contexts where users may not challenge the apparent neutrality of AI-generated responses.
This paper adopts a social semiotic perspective on human–chatbot communication, drawing on Systemic Functional Linguistics (SFL) to explore how interpersonal meaning is negotiated in emotionally vulnerable exchanges. It focuses on user-initiated prompts, such as “Why do I always sabotage good things in my life?” or “I feel like I’m falling apart but can’t tell anyone,” treating these as discourse moves that tender emotionally charged interpersonal positions. The study analyses the conversations using a newly renovated tenor framework developed in SFL (Doran et al., 2025) that models the kinds of semiotic resources through which social relations are enacted and negotiated (explained in detail in Section “Analysis”). In SFL, tenor refers to the interpersonal dimension of register, encompassing the roles, relationships, and alignments negotiated between interactants (Halliday and Matthiessen, 2014). This includes not only speaker–listener roles but also the affective and epistemic stances they adopt. This perspective aligns with, but is distinct from, Goffman’s (1955) concept of the participation framework, which models the distribution of speaker roles (e.g. animator, author, principal) and the management of face in interaction, and from van Dijk’s (2006, 2008) sociocognitive approach to context, which conceptualises context as mental models constructed by participants to interpret communicative situations. Similarly, pragmatic models of context often focus on communicative situations, including participant roles, setting, and shared knowledge, but do not typically offer a systemic account of how these relations are realised through linguistic resources. The tenor framework used here, particularly in its updated form (Doran et al., 2025), extends these traditions by offering a systematic model of the linguistic resources through which social relations are enacted, including how speakers position themselves, align values, and modulate interpersonal risk.
While chatbot responses are examined as linguistic texts open to semiotic interpretation, this does not imply that chatbots engage in meaning-making in the human sense. Their outputs are generated through algorithmic pattern recognition rather than by intentional linguistic choice, as understood within Systemic Functional Linguistics, where choice is grounded in socially situated meaning-making (O’Grady et al., 2013). The analysis remains attentive to this distinction, even as it recognises the increasingly prominent role of chatbots in social interaction—particularly in contexts where users seek emotional support or companionship. As AI systems become more embedded in sensitive domains like self-inquiry, it is vital to examine how they are involved in interpersonal dynamics. Documenting their role in such contexts is both timely and necessary, given rising public and scholarly concern about the social implications of AI.
Method
Dataset
The analysis draws on two publicly available large-scale datasets of human–chatbot interactions: LMSYS-CHAT-1M and WildChat. LMSYS-CHAT-1M comprises one million multi-turn conversations collected via the Vicuna demo and Chatbot Arena platforms, involving interactions with 25 different LLMs (Zheng et al., 2023). WildChat contains over one million conversations with ChatGPT (GPT-3.5 and GPT-4), collected through a platform offering free access in exchange for consent to share anonymised chat logs (Zhao et al., 2024). Given the scale and diversity of these datasets, a targeted sampling strategy was necessary to isolate conversations relevant to the study’s focus on emotionally reflective exchanges and interpersonal meaning. Conversations were sampled by selecting user prompts beginning with “Why am I,” “Why do I” and “I feel.” These were chosen as selection criteria because, following qualitative inspection of the dataset, they were determined to be commonly used expressions of self-inquiry. The goal of the sampling was not to capture every possible linguistic realisation of self-inquiry, but to assemble a dataset large and varied enough to support analysis of how interpersonal meaning is constructed, specifically through the use of tenor resources in both user and chatbot language. The data sampled with these selection criteria was then screened to remove all instances that were not about self-inquiry; for instance, prompts that were requests for information, such as “why do i see my picture inverted and small in a concave mirror while i am outside its focal distance,” were discarded from the sample. This resulted in a sample of 245 conversations (434 turn pairs; 78,232 words).
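To illustrate how such prefix-based sampling can be implemented, the following sketch filters conversations whose opening user turn begins with one of the selection prefixes. It is a minimal illustration rather than the exact procedure used in the study: the Hugging Face dataset identifier and the “conversation,” “role,” and “content” field names are assumptions about the public release of LMSYS-CHAT-1M, and the subsequent manual screening for genuine self-inquiry is not automated here.

```python
from datasets import load_dataset

# Prefixes used as selection criteria for self-inquiry prompts (see above).
PREFIXES = ("why am i", "why do i", "i feel")

def opens_with_self_inquiry(conversation):
    """Return True if the first user turn starts with one of the target prefixes."""
    for turn in conversation:
        if turn.get("role") == "user":
            text = turn.get("content", "").strip().lower()
            return text.startswith(PREFIXES)
    return False

# Dataset ID and field names are assumptions about the public release;
# access may require accepting the dataset's terms on Hugging Face.
lmsys = load_dataset("lmsys/lmsys-chat-1m", split="train")
candidates = [row["conversation"] for row in lmsys
              if opens_with_self_inquiry(row["conversation"])]

print(f"{len(candidates)} candidate conversations retained for manual screening")
```

The retained candidates would then be screened manually, as described above, to exclude purely informational requests before tenor annotation.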
Tenor analysis method
To investigate how interpersonal meanings are negotiated in chatbot-mediated self-inquiry, a corpus-based discourse analysis approach was adopted. This method combines techniques from corpus linguistics and discourse analysis to examine patterns, structures, and functions of language in large datasets sampled according to linguistic criteria (Bednarek et al., 2024; Flowerdew, 2023; Gray and Biber, 2021). In the present study, this approach enables a systematic exploration of how users and chatbots draw on tenor to negotiate interpersonal meaning in conversations centred on self-reflection and emotional disclosure. The discourse analysis draws on a newly updated version of the tenor system (Doran et al., 2025) developed within Systemic Functional Linguistics (SFL). In SFL, tenor is a register variable through which social relationships are enacted.
Following Doran et al. (2025) we analyse tenor through three interrelated systems: positioning, orienting, and tuning. These systems allow us to trace the resources that users and chatbots employ to negotiate social roles, align or contrast values, and modulate the tone and level of interpersonal risk in their exchanges:
positioning: how positions are put forward (Tendered) and reacted to (Rendered).
orienting: how positions are related to each other to form more complex networks of meanings, either via Likening, Opposing, Sourcing, Convoking, Encapsulating, or Repositioning.
tuning: how positions are adjusted in terms of their stakes, scope, and spirit.
These systems and features are explained below with reference to the dataset, and are summarised in a simplified format in Table 1 with example realisations to illustrate how they function in context. Because each feature can be realised through a wide range of linguistic resources, relevant discourse-semantic terminology is introduced as needed throughout the analysis.
Table 1. Systems and features of the tenor system with example realisations.

System | Subsystem: feature | Description | Example realisations
Positioning | Tendering | Putting forward meanings as Propositions or Proposals; can be Open (e.g. wh-questions) or Complete (e.g. statements, commands). Includes Speaker/Listener Purview (who holds epistemic authority over the meaning): Assert (+Speaker, −Listener); Pose (−Speaker, +Listener); Share (+Speaker, +Listener); Air (−Speaker, −Listener) | Open Proposition, Pose (Listener Purview): “Why do I feel this way?”; Complete Proposal, Share (Listener and Speaker Purview): “Let’s talk about it”
Positioning | Rendering | Responding to meanings: Support, Reject, Note, or Defer; can be internal (re the interaction) or external (re the content). |
Orienting | Repositioning | Presenting one interpersonal meaning as another (e.g. Proposal as Proposition). | Proposal as Proposition: “Can you tell me why. . .?”
Orienting | Sourcing | Attributing a position to a speaker or group. | “Experts say. . .”; “I think. . .”
Orienting | Convoking | Directing a position toward a group or individual. | “You know what I mean?”; “Let’s. . .”; vocatives like “Mum,” “everyone,” etc.
Orienting | Likening | Aligning positions as similar or reinforcing. | “This is like. . .”; “Similarly. . .”
Orienting | Opposing | Contrasting positions. | “On the other hand. . .”; “But. . .”; “Not really. . .”
Orienting | Encapsulating | Synthesising multiple positions into a higher-order stance. | “This is what resilience looks like.”
Tuning | Stakes: Raise/Lower | Calibrates the degree of interpersonal risk or intensity associated with a position. Higher stakes signal urgency, controversy, or emotional weight; lower stakes reduce pressure, soften commitment, or downplay significance. | Lower: “Not at all!”; Raise: “This is serious.”
Tuning | Scope: Individualise/Collectivise | Adjusts the ambit of relevance: how broadly or narrowly a position is construed. Individualised scope targets a specific person or case; collectivised scope generalises to a group, norm, or community. | Collectivised: “It’s important for everyone. . .”; Individualised: “I feel. . .”
Tuning | Spirit: Warm/Warn | Modulates the interpersonal tone or affective charge of a position. Warming spirit fosters friendliness, care, or solidarity; warning spirit signals caution, seriousness, or interpersonal distance. | Warming: “Take your time ☺,” “I’m here for you, and I want you to know that you can talk to me about anything.”; Warning: “You must be careful.”
While all three systems contribute to interpersonal alignment, this paper focuses on tuning, as it is the most ideologically salient in chatbot discourse. tuning allows us to trace how chatbots manage emotional risk, simulate care, and align with users, often in ways that reflect broader cultural logics of resilience, self-regulation, and normative adaptation. The analysis centres on three key dimensions of tuning:
stakes: Adjusts the emotional intensity or interpersonal risk of a position. For example, when a user asks “Is helping me a problem?,” the chatbot replies “Not at all!,” lowering stakes by emphatically negating any burden and reassuring the user that their request is welcome. By contrast, “This is serious” raises stakes by signalling urgency or emotional weight, prompting action and heightening interpersonal intensity.
Scope: Refers to whether a position is framed as personal or general. For example, “I feel overwhelmed” individualises scope by anchoring the emotion in the user’s experience, while “Many people feel this way” collectivises it by aligning the user’s feelings with a broader social pattern.
Spirit: The affective tone or interpersonal charge of a position. For example, “Take your time ☺” warms the spirit by expressing care and encouraging ease, while “You must be careful” warns it by signalling caution and introducing emotional restraint. These tonal shifts help chatbots either foster closeness or establish boundaries in response to user vulnerability.
These tuning choices are not merely stylistic; they shape how chatbot responses are received, interpreted, and aligned with dominant discourses. For instance, when a chatbot replies to “Why do I feel so terrible?” with “It’s okay to feel this way—many people do,” it lowers stakes, collectivises scope, and warms the spirit. This tuning pattern fosters emotional reassurance but also reframes distress as a normal, manageable experience, deflecting critique and individualising responsibility.
To help interpret Table 1, note that each tuning feature, across stakes, scope, and spirit, can be realised through a range of linguistic resources. For example, lowered stakes may be expressed through permissive modals (“might,” “could”), softened negation (“not at all”), or deferential phrasing (“it may be helpful. . .”). Collectivised scope often appears through inclusive pronouns (“we,” “everyone”) or generalised constructions (“many people feel. . .”). Warming spirit can be realised through affective tone, emojis, and affirmational language. The following sections examine how these kinds of tuning features operate across selected conversations drawn from the corpus. By tracing how chatbots modulate stakes, scope, and spirit in response to emotionally vulnerable prompts, the analysis highlights the interpersonal strategies embedded in chatbot discourse and the linguistic mechanisms through which alignment is managed.
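As a concrete illustration of these surface markers, the sketch below flags some of the lexical resources just mentioned (permissive modals, inclusive and generalising expressions, affiliative formulas) in a chatbot turn. This is only an illustrative heuristic under the assumption that such markers can be approximated lexically; the study’s own annotation of stakes, scope, and spirit was interpretive and discourse-semantic rather than keyword-driven, and the marker lists below are indicative assumptions, not an exhaustive coding scheme.

```python
import re

# Illustrative (non-exhaustive) surface markers for each tuning feature.
MARKERS = {
    "lowered_stakes": [r"\bmight\b", r"\bcould\b", r"\bmay\b", r"\bnot at all\b",
                       r"\bit may be helpful\b", r"\bconsider\b"],
    "collectivised_scope": [r"\bwe\b", r"\beveryone\b", r"\bmany people\b",
                            r"\byou'?re not alone\b", r"\bit'?s (normal|common)\b"],
    "warming_spirit": [r"\bi'?m sorry to hear\b", r"\bit'?s understandable\b",
                       r"\btake your time\b", r"\bbe kind to yourself\b"],
}

def flag_tuning_markers(turn: str) -> dict:
    """Return, for each tuning feature, the illustrative markers found in a chatbot turn."""
    text = turn.lower()
    return {feature: [p for p in patterns if re.search(p, text)]
            for feature, patterns in MARKERS.items()}

example = ("I'm sorry to hear that you're feeling overwhelmed. It's normal to "
           "feel this way, and it may be helpful to talk to someone you trust.")
print(flag_tuning_markers(example))
```

A heuristic of this kind can only point towards candidate realisations; deciding whether a given marker actually lowers stakes, collectivises scope, or warms spirit in context remains an interpretive judgement.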
Analysis
This section opens the tenor analysis of chatbot-mediated self-inquiry, treating each conversation as a site of interpersonal negotiation. The analysis here focuses on tuning, examining how chatbots modulate stakes (interpersonal risk), scope (personal v general framing), and spirit (affective tone or emotional charge) in response to emotionally vulnerable prompts such as the following:
Why do I feel sad when I exercise
I feel like I am behind in life
Why do I feel like I want to escape my home and start a new life with a new personality and new people
Why do I cry at night?
Why do I feel less energised by work that I used to?
Why do I never get visited by Family?
Why do I feel so drained after encountering opinions and views that are very different than mine?
The initial qualitative analysis was guided by a corpus-level examination of frequent 3-grams, 5-grams, and 10-grams (Table 2), which helped identify recurring linguistic patterns associated with tuning features across the dataset. These phrase-level patterns informed the close reading by highlighting representative and recurrent discourse phenomena, ensuring that the interpretive analysis was grounded in empirical evidence. Conversations were selected for detailed analysis based on both their qualitative richness and their alignment with these recurrent tuning patterns. Rather than functioning as a standalone method, the n-gram analysis served to substantiate and guide the discourse-semantic interpretation. The sections which follow integrate the corpus analysis with examples extracted from the conversations, annotated for tuning.
Table 2. Frequent 3-, 5- and 10-grams in the corpus.

No. | 3-gram | Freq. | 5-gram | Freq. | 10-gram | Freq.
1 | it’s important | 256 | ’s important to remember that | 68 | it s okay to not feel okay all the time | 8
2 | ’s important to | 255 | ’s important to take care | 23 | ’s okay to not feel okay all the time and | 7
3 | if you’re | 109 | also important to remember that | 15 | it s important to take care of yourself and seek | 6
4 | to remember that | 100 | is important to remember that | 15 | there are people who care about you and want to | 6
5 | if you are | 98 | ’s important to note that | 12 | be helpful to talk to someone about how you re | 5
5 | important to remember | 98 | is important to note that | 8 | consider seeking help from a mental health professional they can | 5
7 | can help you | 92 | is important to speak with | 8 | helpful to talk to someone about how you re feeling | 5
8 | you’re feeling | 86 | ’s important to recognise that | 8 | here are a few things you can try to help | 5
9 | is important to | 84 | ’s important to speak with | 8 | i m sorry to hear that you re feeling depressed | 5
10 | it is important | 84 | ’s important to take a | 7 | it s important to remember that it s okay to | 5
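For readers interested in how frequency patterns such as those in Table 2 can be derived, the following sketch counts word n-grams across chatbot turns using a simple alphanumeric tokenisation (which, like the counts in Table 2, splits apostrophised forms such as “it’s” into “it” and “s”). It approximates the kind of corpus-level counting used to guide the close reading; it is not the exact tooling employed in the study.

```python
from collections import Counter
from itertools import islice
import re

def ngrams(tokens, n):
    """Yield successive n-grams from a list of tokens."""
    return zip(*(islice(tokens, i, None) for i in range(n)))

def top_ngrams(texts, n, k=10):
    """Count the k most frequent word n-grams across a list of texts."""
    counts = Counter()
    for text in texts:
        # Lowercase and split on non-alphanumeric characters, so "it's"
        # becomes the tokens "it" and "s", as in Table 2.
        tokens = [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]
        counts.update(" ".join(g) for g in ngrams(tokens, n))
    return counts.most_common(k)

chatbot_turns = ["It's important to remember that it's okay to not feel okay all the time."]
for n in (3, 5, 10):
    print(n, top_ngrams(chatbot_turns, n))
```

Exact counts depend on tokenisation and normalisation choices, which is why phrase-level frequencies were used here only to orient the discourse-semantic interpretation rather than as findings in their own right.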
Lowering stakes to simulate care
A dominant pattern that emerged from the corpus analysis was a consistent tenor of affiliative but non-committal alignment. This was realised through resources that lowered stakes: typically combinations of low modality and upscaled graduation (intensifying attitude). The function was to simulate care while softening interpersonal risk. An example is Conversation 1 (Figure 1) where the user opens with an under-specified prompt, “Why do I flake at the last minute?” This prompt features negative self-judgement of tenacity (“flake”), but no contextual detail regarding the situation in which the flaking is occurring, seeming to position the chatbot as a mind-reader. Importantly, rather than requesting clarification about the situation, the chatbot responds with “There could be many reasons. . .,” a clause that reduces interpersonal pressure through low modality. As shown by the annotation (highlighted in grey) in Figure 1, the use of modals such as “could” and “may” across turn 2 repeatedly construes possibility rather than certainty. This moderates the weight of the claims and avoids providing a definitive interpretation of the user’s experience. At the same time, the quantifier “many” broadens the scope of explanation, suggesting that the issue is complex and not unique to the individual. Similar instances in the corpus of lowered stakes, realised by these kinds of modal choices (bold) in combination with raised graduation realising broadened scope (underlined), include:
There could be several reasons why you might repeat the same trading mistakes. . .
There could be many reasons why you might feel sad. . .
There are several factors that can contribute to difficulty sleeping when you are tired. . .
Figure 1. Extract from Conversation 1 with mpt-30b-chat (Turns 1–2), annotated for scope and stakes.
This tuning strategy reflects a broader interactional logic in which emotional support is rendered through tentative generalisation, rather than through dialogic engagement grounded in contextual specificity.
The chatbots’ responses also typically avoided certainty when offering causes for the users’ feelings. As seen in Conversation 1, these reasons are encapsulated as thematised entities (e.g. “Lack of commitment”) that are each elaborated through modalised second-person constructions (e.g. “You may not be fully committed. . .,” “You may be worried. . .”). Other examples of this combination in the corpus include (entity underlined; modality in bold):
Perfectionism: You may be setting unrealistic standards for yourself. . .
Work-life balance: You may not have a healthy work-life balance, and find it difficult to disconnect from work. . .
Perceived threat: You might feel that your wife’s appearance may attract other men, potentially threatening your relationship. . .
In each case, the chatbot’s interpretation is framed not as a definitive diagnosis but as a tentative possibility, realised through modalised constructions that lower stakes. Presenting these causes as a list further contributes to a generalised and depersonalised framing, which may render the response formulaic or impersonal from the user’s perspective. Rather than engaging with the user’s specific context, as would be the case with a human therapist, the chatbot offers a repertoire of plausible explanations that frame affective disclosures as common, manageable disturbances. Notably, these responses rarely include follow-up questions that might elicit further detail or enable more tailored support, reinforcing a communicative logic of surface-level reassurance over dialogic depth.
Even when offering advice about how to remedy the user’s situation, the chatbots tended to lower stakes. Again, repeated low modality was a central pattern:
You might try breaking tasks into smaller steps.
It may be helpful to take a closer look at your goals.
Consider reaching out to someone you trust for support.
You could try to switch up the type of music you are listening to
Consider attending workshops or sessions to develop coping strategies
These choices address distress not as a situated, relational experience, but as a generic disruption to be managed through self-regulation, such as breaking tasks down, reassessing goals, or attending workshops. This framing shifts the locus of responsibility from structural or existential conditions to individual foresight and affective discipline. For instance, in one of the examples above, the implication is that the problem would not have occurred if the user had simply figured out how to “break tasks into smaller steps.” When such modalised advice appears in the context of workplace relations, it becomes particularly problematic. For example, in the following response overworking is positioned not as a systemic issue, but as a personal failing, merely a matter of how tasks are divided and time is managed.
For example, you could suggest breaking down large projects into smaller, more manageable tasks that can be worked on over a longer period of time. This might help to reduce the pressure to deliver on a tight deadline, and give you more time to focus on each task individually.
In this way, the chatbot’s tuning aligns with a broader technocratic rationality: one that privileges non-committal responses over situated empathy.
While stakes are generally lowered across chatbot responses, the final section of the chatbots’ responses often features raised stakes, typically to legitimise professional support as a responsible course of action or to underscore the value of normative behaviours. For example, in the final move of the response in Conversation 1, “It’s always a good idea to speak with a therapist. . .,” the chatbot positively evaluates self-regulation through therapeutic intervention and raises stakes through a combination of positive attitude (“good idea”) and upscaled graduation (“always”). This reflects a broader orientation in which emotional challenges are framed as personal burdens to be managed through individual action, such as seeking professional help, rather than as symptoms of systemic dysfunction. Other examples of this pattern include (positive attitude underlined, graduation in bold):
It’s always a good idea to talk to a mental health professional or a trusted friend or family member if you’re feeling overwhelmed or if you’re struggling with negative thoughts or feelings.
If you are unsure about how to break your fast or have any concerns, it is always a good idea to speak with a healthcare provider for further guidance.
It is always a good idea to consult with a healthcare professional for advice on managing stuttering.
As mentioned earlier, these formulations do more than reassure: they align with broader normative discourses of emotional regulation. In other words, they reproduce a logic of individualised emotional regulation that, while ostensibly validating user affect, ultimately enacts a regime of emotional self-governance, where subjects are interpellated into normative frameworks of responsibilisation (the shifting of structural burdens onto individuals as matters of personal responsibility) and affective discipline (the regulation of emotional expression in line with socially sanctioned norms of composure, positivity, and self-management). This is a self-regulation of the subject if we are to employ Foucault’s (1988) lens. It is also possible that such responses reflect design constraints, shaped by developers’ concerns about legal liability, particularly in domains like mental health, where chatbots are often programmed to avoid giving direct advice.
Collectivising
Another consistent tuning pattern observed across the corpus is the collectivisation of scope. Although user prompts tender emotionally vulnerable disclosures of personal experience, chatbots frequently shift from individualised framing, where the user’s experience is treated as unique, to collectivised framing, where their experience is aligned with broader social patterns. This kind of rhetorical move normalises distress. Collectivisation is typically realised in the corpus through impersonal pronouns (e.g. “you” to refer to a general subject position), plural pronouns (e.g. “we”), indefinite pronouns (e.g. “someone”), or references to groups (e.g. “people”) in ways that refer to shared experience (e.g. “many people feel this way”), as in the following:
Sometimes, we feel bored because we don’t feel like we have a sense of purpose or direction in life. This can lead to feelings of aimlessness and boredom.
When we experience stress or anxiety, our body’s natural response is to release stress hormones, such as cortisol and adrenaline.
Sometimes when we are not mentally stimulated, we may feel restless and have a surplus of energy that needs to be expended in some way.
This also coordinates with the impersonal constructions (e.g. “It is normal”) that generalise experiences as common, routine, or intelligible via judgements of normality, for instance:
It’s understandable to wonder about your own behaviors and motivations, and there can be many reasons why we do the things we do.
It is normal to feel difficulty sleeping when you are extremely tired, as the body’s natural response is to become more alert and active when we are fatigued.
Remember, it’s normal to feel lonely from time to time. . .
It’s common to feel like you’re not good enough or to be afraid of failure at times.
These linguistic resources allow chatbots to offer support by positioning themselves as conduits for normative emotional guidance.
An example of this patterning is seen in Conversation 2 with the gpt-3.5-turbo chatbot (Figure 2), which begins with a prompt about romantic rejection: “I got rejected after first date because I was too serious about future. Why do I feel so terrible?” In this conversation the user’s affective vulnerability around this romantic rejection is reframed by the chatbot to conform to ideals of emotional composure, and as the annotation in Figure 2 shows, the scope moves from individualised to collectivised in Turn 4. The chatbot begins with a clause that maintains individual scope, “you were trying to see a longer-term future” but then shifts to collectivising: “When we invest emotionally. . .,” “someone who shares those goals with us,” “you’re not alone,” “it’s a normal part of dating.” These constructions broaden the ambit of relevance from individual to communal by invoking generalised social scripts: “When we invest emotionally. . .” uses the inclusive pronoun “we” to frame emotional investment as a common experience; “someone who shares those goals with us” generalises the search for compatibility; “you’re not alone” reassures the user by aligning their feelings with a broader community; and “it’s a normal part of dating” situates romantic disappointment within a culturally familiar narrative. These moves frame the user’s distress as typical and emotionally manageable.
Figure 2. Extract from Conversation 2 with gpt-3.5-turbo (Turns 3–4), annotated for scope.
In this way, collectivisation of scope is not merely a matter of who is included as relevant to a proposition, but of how affect is distributed in social relations. When a user discloses distress, the chatbot typically responds not by asking follow-up questions that probe for more detail about what is wrong, but by aligning the user’s emotions with a broader social pattern through collectivising resources, for example:
“Many people encounter these struggles at college, so you’re not alone in feeling this way.”
You are not alone, and there are many people who care about you and want to help.
Many people feel overwhelmed at times, especially when facing big changes.
You’re not alone—many people experience similar feelings.
Progress is often made through the efforts of many people working together.
It’s common to feel this way, and many people go through it.
It’s common to miss the presence of our loved ones
These formulations do more than offer comfort, they reposition user affect (such as romantic rejection, emotional disengagement, or existential frustration with work) as common features of contemporary life. They also imply that the user is the one who needs to change (by virtue of being the odd one out) rather than any external trigger. This pattern was also evident in Conversation 1 in Section “Lowering stakes to simulate care” where the final suggestion, “It’s always a good idea to speak with a therapist or counselor,” reinforced a model of care grounded in individual responsibility and therapeutic intervention to correct the aberrant social subject.
Warming spirit
Alongside the modulation of stakes and scope, chatbots in the corpus frequently modify spirit, the third dimension of the tuning system, to foster affiliative alignment. This is most clearly realised through warming, a spirit feature which adjusts positions to be read as more friendly or interpersonally favourable. A prominent pattern in the corpus is the realisation of warming through expressions of empathy. In the opening of chatbot responses, this is typically structured as an apology followed by acknowledgement of the users’ feelings. The apology, realised through clauses like “I’m sorry to hear that. . .,” does not construe affect directly, but instead functions as an interpersonal alignment strategy. Within the Appraisal system, it invokes a concur move in terms of engagement, signalling agreement with the user’s emotional stance and foregrounding shared understanding. This move is typically followed by a clause that endorses the user’s emotional experience as valid and socially recognisable, such as “It’s understandable to feel that way” or “It’s normal to feel lonely at times,” affirming the user’s affective position as legitimate. Further examples in the corpus include the following:
I’m sorry to hear that you’re feeling bad. It’s understandable to feel that way, especially when we’re dealing with difficult situations and emotions.
I’m sorry to hear that you’re feeling lonely. It’s normal to feel lonely at times, but it can be especially difficult when you’re going through a difficult time or experiencing a major life change.
I’m sorry to hear that you’re feeling lonely while you’re in the hospital. Being in a hospital can be a stressful and overwhelming experience, and it’s understandable that you might be feeling isolated or disconnected from others.
I’m sorry to hear that you’re feeling bored. Boredom is a common human experience that can be caused by a variety of factors, such as a lack of stimulation, a feeling of not being challenged, or a lack of motivation.
Another prominent pattern of warming in the corpus is realised through expressions of reassurance and comfort, frequently structured as imperative clauses coupled with positive appraisals of the user, for example:
Don’t be too hard on yourself and try to focus on the good things in your life.
Remember the things that you are good at and the things that make you happy.
Remember to be kind to yourself, and take things one day at a time. You will get through this, and you will come out stronger.
These imperatives, such as “Don’t be too hard on yourself,” “Remember the things that you are good at,” and “Take things one day at a time,” do not function as high-pressure directives. Instead, they are softened through low interpersonal stakes, realised by the absence of obligation modals (e.g. “must,” “should”) and the presence of affiliative lexis that signals care rather than command. The use of “remember” and “try” construes the user as agentive but not obligated, maintaining a tone of emotional support.
Conversation 3 with vicuna-13b (Figure 3) offers a clear example of how this kind of warming is realised through a series of finely tuned interpersonal moves. The exchange centres on the user’s disclosure of burnout, introduced with the prompt: “Why do I always end up working 12 hour days in software development?” Rather than addressing the structural causes of overwork, the chatbot response focuses on emotional reassurance, using warming resources to validate the user’s affective stance and reframe the experience as manageable through self-care. The opening clauses, “It’s frustrating when you feel like you are not being heard” and “It’s understandable that you may feel like you are being forced to fail in order to be noticed,” empathise with the user’s emotional state, legitimising their feelings. This is followed by a shift toward behavioural affirmation: “It’s important to take care of yourself and prioritize your own well-being” which construes self-care as both emotionally appropriate and socially endorsed. The chatbot then broadens the frame of relevance through an expression of solidarity (“It’s important to remember that you are not alone in facing these challenges”) which also realises collectivised scope by aligning the user’s experience with that of a wider community. This is extended through “You may want to consider connecting with others. . . and see if there are any opportunities to collaborate or take collective action,” which realises community-oriented encouragement, inviting the user into shared action without imposing obligation. Finally, the chatbot closes with what might be interpreted as an empowering proposition: “You have the power to make changes. . . By speaking up, taking action, and prioritizing your own well-being, you can create the change you want to see in the world.” However, this move reframes structurally induced affect (in this case burnout) not as a consequence of exploitative labour conditions, but as an individual emotional challenge to be managed through personal adjustment. This aligns with broader neoliberal discourses that privilege resilience and self-regulation over structural critique.
Figure 3. Extract from Conversation 3 with vicuna-13b (Turns 5–6), annotated for spirit.
Why bother with a tenor analysis?
At this point, the reader might ask whether the tuning patterns observed in the corpus could have been identified through a discourse semantic analysis, perhaps one focused on appraisal. While discourse semantic systems offer powerful tools for describing how meanings unfold across a text, they do not model the interpersonal architecture through which social relations are enacted. Consider the difference between “You’re doing your best” and “You must be careful.” Both clauses involve appraisal (the former realising judgement of capacity, the latter judgement of tenacity) and modality (the former implying low-risk endorsement, the latter realising high obligation through modulation). These discourse semantic systems describe the type of evaluation and the degree of interpersonal pressure, but they do not account for the broader interpersonal configuration enacted in context. The former lowers stakes and warms spirit, affirming the user’s emotional stance and fostering affiliative alignment. The latter raises stakes and warns spirit, signalling caution and introducing emotional restraint. These moves do not merely construe evaluative meanings; they recalibrate the emotional charge and social risk of the exchange. Tenor analysis makes these shifts explicit, modelling how chatbots simulate care, defer authority, and reproduce normative logics of emotional regulation.
As Doran, Martin, and Zappavigna (2025) argue, modelling tenor systems as resources provides a register-level abstraction that captures how participants negotiate social roles, align values, and modulate interpersonal risk. In this paper, we demonstrate that chatbot-mediated self-inquiry, where users often tender emotionally vulnerable positions and chatbots must manage alignment without access to deep contextual knowledge, requires precisely this kind of modelling. The renovated tenor framework enables us to trace systematically how chatbots navigate these exchanges through the tuning of stakes, scope, and spirit, revealing the interpersonal strategies embedded in their linguistic choices. Rather than assigning fixed social roles such as therapist, teacher, or advisor to the chatbots, our analysis traced how interpersonal configurations emerge dynamically through tuning choices. These configurations do not represent static personas but rather relational stances that chatbots adopt in response to the prompts. Each pattern reflects a different way of negotiating alignment, authority, and affect:
Affiliative alignment through lowered stakes and warmed spirit: Chatbots frequently validated user affect without interpretive intrusion, using modalised reassurance and affirmational language (e.g. “You’re doing your best,” “It’s okay to feel overwhelmed”). These moves simulated care while avoiding challenge or reframe, fostering emotional ease and interpersonal safety.
Normative reframing through collectivised scope: Chatbots often generalised individual distress by invoking shared norms (e.g. “Many people feel this way,” “It’s a normal part of dating”). This collectivisation reduces interpersonal risk and aligns user experience with broader cultural scripts, offering reassurance through generalisation rather than situated empathy.
Operationalised support through low stakes suggestions: Chatbots frequently offered modalised behavioural advice (e.g. “You might try breaking tasks into smaller steps,” “Consider reaching out to someone you trust”) that defer authority and sidestep dialogic depth. These responses enact care through generalised, non-prescriptive guidance, maintaining affiliative tone while avoiding interpretive engagement.
These configurations are not discrete roles but dynamic relational patterns, shaped by the chatbot’s tuning of interpersonal dynamics. Tenor analysis made these patterns visible by showing how chatbots manage emotional risk, simulate care, and reproduce normative logics of emotional regulation and individual responsibility.
Conclusion
This study has examined how interpersonal meanings are negotiated in human–chatbot interactions that centre on self-inquiry. Drawing on the tenor system within Systemic Functional Linguistics (SFL), we have shown that users often tender decontextualised and underspecified propositions, inviting chatbots to render interpretations despite minimal context. Across the analysed conversations, chatbots consistently adopt a supportive stance, realised through lowered stakes, collectivised scope, and warming spirit. While these strategies foster affiliative tone and emotional reassurance, they also reflect a broader ideological pattern. Chatbots consistently reposition structurally induced affect, such as burnout, rejection, or despair, as matters of personal regulation. This patterning in tenor enacts a neoliberal logic of care that privileges resilience and self-management over structural critique and change.
It is important to note that the datasets analysed in this study were collected in 2023, and chatbot behaviour may have shifted in subsequent model iterations. As LLMs continue to evolve, through changes in architecture, training data, and alignment protocols, the repertoire of tenor choices they can deploy may increase in sensitivity, and ideological framings observed here may not remain stable. This temporal specificity limits the generalisability of the findings, particularly in longitudinal or cross-model comparisons. However, this is an inherent challenge in discourse-analytic research on dynamic technologies: the communicative behaviours of AI systems are not static, but continually shaped by design decisions, training data, and the broader sociotechnical environments in which they operate. These conditions underscore the need for ongoing critical monitoring of AI-mediated communication, not only to track linguistic shifts, but to interrogate the social logics embedded in evolving chatbot discourse.
Our findings underscore how tenor not only shapes interpersonal dynamics but also reproduces dominant cultural ideologies. In chatbot-mediated self-inquiry, the linguistic realisation of care appears to reflect a neoliberal ethos—one that affirms emotion but redirects critique, privileging personal adjustment over systemic change. These insights suggest that tenor is not only a resource for managing interpersonal alignment, but also a mechanism through which broader social logics are enacted and sustained. As chatbots become more embedded in sensitive domains such as mental health, education, and personal development, it is essential to examine how their linguistic choices shape user experience—not only in terms of content, but in terms of care, alignment, and interpersonal risk. Future research might explore how different LLM architectures or fine-tuning strategies affect tenor realisation, or how users respond to different tenor choices over time. Ultimately, this paper contributes to a broader understanding of how language technologies such as conversational AI participate in the negotiation of social meaning, not only by managing interpersonal alignment, but by enacting and naturalising the social logics through which care, compliance, and critique are differentially distributed.
Ethical considerations
Ethics approval is not required as this project analyses an existing data archive (LMSYS-CHAT-1M) which was not constructed by the researcher and which was collected using an informed consent procedure.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability statement
The dataset analysed during the current study is available in the LMSYS-CHAT-1M repository.
Author biography
Michele Zappavigna is Associate Professor at the University of New South Wales. Her major research interest is in corpus-based analysis of digital discourse, with a focus on interpersonal meaning. Recent books include Emoji and Social Media Paralanguage (Cambridge University Press, 2024) with Lorenzo Logi and Innovations and Challenges in Social Media Discourse Analysis (Routledge, 2025) with Andrew Ross.
References
1. Adam M, Wessel M and Benlian A (2021) AI-based chatbots in customer service and their effects on user compliance. Electronic Markets 31(2): 427–445.
2. Bednarek M, Schweinberger M and Lee KK (2024) Corpus-based discourse analysis: From meta-reflection to accountability. Corpus Linguistics and Linguistic Theory 20(3): 539–566.
3. Breazu P and Katsos N (2024) ChatGPT-4 as a journalist: Whose perspectives is it reproducing? Discourse & Society 35(6): 687–707.
4. Dippold D (2024) Making the case for audience design in conversational AI: Users’ pragmatic strategies and rapport expectations in interaction with a task-oriented chatbot. Applied Linguistics. Epub ahead of print 8 May 2024. DOI: 10.1093/applin/amae033.
5. Doran YJ, Martin JR and Zappavigna M (2025) Negotiating Social Relations: Tenor Resources in English. University of Toronto Press.
6. Flowerdew L (2023) Synergy between corpus linguistics and discourse analysis. In: Gee JP and Handford M (eds) The Routledge Handbook of Discourse Analysis, 2nd edn. Routledge, pp.174–187.
7. Foucault M (1988) Technologies of the self. In: Martin LH, Gutman H and Hutton PH (eds) Technologies of the Self: A Seminar with Michel Foucault. University of Massachusetts Press, pp.16–49.
8. Goffman E (1955) On face-work: An analysis of ritual elements in social interaction. Psychiatry 18(3): 213–231.
9. Gray B and Biber D (2021) Corpus-based discourse analysis. In: Hyland K, Paltridge B and Wong L (eds) The Bloomsbury Handbook of Discourse Analysis. Bloomsbury, pp.97–110.
10. Halliday MAK and Matthiessen CMIM (2014) Halliday’s Introduction to Functional Grammar. Routledge.
11. Ji Z, Lee N, Frieske R, et al. (2023) Survey of hallucination in natural language generation. ACM Computing Surveys 55(12): 1–38.
12. Liu W, Xu K and Yao MZ (2023) “Can you tell me about yourself?” The impacts of chatbot names and communication contexts on users’ willingness to self-disclose information in human–machine conversations. Communication Research Reports 40(3): 122–133.
13. O’Grady G, Bartlett T and Fontaine L (2013) Choice in Language: Applications in Text Analysis. University of Toronto Press.
14. Skjuve M, Haugstveit IM, Følstad A, et al. (2019) Help! Is my chatbot falling into the uncanny valley? An empirical study of user experience in human–chatbot interaction. Human Technology 15(1): 30–54.
15. van Dijk TA (2006) Discourse, context and cognition. Discourse Studies 8(1): 159–177.
16. van Dijk TA (2008) Discourse and Context: A Sociocognitive Approach. Cambridge University Press.
17. Van Poucke M (2024) ChatGPT, the perfect virtual teaching assistant? Ideological bias in learner-chatbot interactions. Computers and Composition 73: 102871.
18. Yuan A, Colato EG, Pescosolido B, et al. (2025) Improving workplace well-being in modern organizations: A review of large language model-based mental health chatbots. ACM Transactions on Management Information Systems 16(1): 1–26.
19. Zhao W, Ren X, Hessel J, et al. (2024) WildChat: 1M ChatGPT interaction logs in the wild. arXiv preprint arXiv:2405.01470.
20. Zheng L, Chiang W-L, Sheng Y, et al. (2023) LMSYS-Chat-1M: A large-scale real-world LLM conversation dataset. arXiv preprint arXiv:2309.11998.