Abstract
Background
Artificial intelligence (AI) is said to be “transforming mental health”. AI-based technologies and techniques are now considered to have uses in almost every domain of mental health care, including decision-making, assessment and healthcare management. What remains underexplored is whether, and how, such technologies relate to the concept of mental health recovery.
Method
Taking conversational agents as our point of departure, we explore the ways official online materials explain and make sense of chatbots, their imagined functionality and value for (potential) users. We focus on three chatbots for mental health: Woebot, Wysa and Tess.
Findings
“Recovery” is largely missing as an overt focus across materials. However, analysis does reveal themes that speak to the struggles over practice, expertise and evidence that the concept of recovery articulates. We discuss these under the headings “troubled clinical responsibility”, “extended virtue of (technological) self-care” and “altered ontologies and psychopathologies of time”.
Conclusions
Ultimately, we argue that alongside more traditional forms of recovery, chatbots may be shaped by, and shaping, an increasingly individualised form of a “personal recovery imperative”.
Introduction
“Recovery” has been heralded as the new “paradigm in mental health policy and practice”. 1 In “No health without mental health: A cross-government mental health outcomes strategy for people of all ages”, 2 the coalition government set six mental health objectives, one of which was that more people with mental health problems will recover. This recovery-guided approach is continued in “Closing the gap: Priorities for essential change in mental health”, 3 where a commitment is made to commission services with an emphasis on recovery. More recently, the “Five year forward view for mental health” 4 stresses that commissioners should prioritise early intervention, choice/personalisation and recovery.
Yet, despite its increased visibility and importance within the mental health arena, recovery remains a vague, “polyvalent concept” 5 that defies definitional consensus. Within the clinical literature, recovery is often situated as the amelioration of symptoms so that a person can resume activities within what is considered a normal range. A second form of recovery has its origins in the Independent Living and Civil Rights Movements of the 1960s and 1970s. This sense of recovery “does not require remission of symptoms or other deficits, nor does it constitute a return to normal functioning. Rather, it views mental illness as only one aspect of an otherwise whole person”.6,7 Whilst the multiple meanings of recovery are often situated as dualisms or either/ors, Pilgrim and McCranie 8 suggest that there are four “different shades” of the meanings of recovery: (i) recovery as a personal journey; (ii) recovery as a critique of services that emphasises choice, empowerment and reform; (iii) recovery as therapeutic optimism; and (iv) recovery and the social model of disability. Whilst there is a conceptual separation in their emergence from different social groups, there is a practical co-presence in their everyday application that can create “working misunderstandings”. 8
Recovery is also (bio)political. Firstly, dominant discourses of recovery are said to individualise what are social problems. 9 The word has been subject to a neoliberal intrusion, taken over by the language, techniques and outcomes of marketisation. 10 Within this neoliberal, individualising arena, recovery has become a “struggle for recognition”, and “it is only when the collective, structural experiences of inequality and injustice are explicitly linked to processes of emotional distress that recovery will be possible”. 9
Secondly, “definitional debates about recovery reflect wider ideological debates about the nature of mental health”. 9 Policy documents talk about recovery within the context of the “global burden” of mental illness,7,11 whilst often sidestepping controversy surrounding diagnosis in psychiatry.
In short, recovery remains a vague and political concept that sits at the core of current mental health policy despite significant critical voices. In this context, it is timely to explore whether, and in what forms, recovery is situated within conversational agents (chatbots) – an emerging technology viewed as having the potential to transform mental health treatment. The Topol Review on the digital future of mental healthcare and its workforce concluded that there is already a strong evidence base for these kinds of interventions and that by 2021 chatbot systems could be offering advanced automated or semi-automated diagnostic and therapeutic tools. 14 However, whilst considerable research has been carried out on the efficacy, feasibility and ease of use of mental health chatbots, 15 very little attention has been paid to how chatbots relate to ideas of mental health recovery. We do not know, for example, the extent to which chatbots are designed to work with – and encourage – particular versions of recovery. The paper therefore asks: to what extent are chatbots shaped by, and shaping, the concept of mental health recovery?
We begin with a critical review of the existing literature on chatbots and mental health. Exploring how chatbots shape, and are shaped by, the concept of mental health recovery requires us to look both at overt references to recovery and at possible sites of socio-political struggle through which recovery comes to be conceptualised. 1 To operationalise this, we derive analytic directives by bringing contradictions, contentions and debates within the literature on chatbots into dialogue with the existing literature on the meanings of mental health recovery. We then outline the methodological and analytical focus. We interrogate a range of online sources to explore how chatbots are being positioned and the claims being made of them.16–18 Analysis starts with the foreshadowed ideas from the critical literature on recovery and asks: how do chatbots relate to recovery? How are chatbots positioned in terms of expertise? How do chatbots relate to time? How do chatbots relate to everyday experience and clinical subjectivities? We conclude with a discussion of how recovery may be being troubled by chatbots and the implications for the future of mental health recovery.
Chatbots and mental health recovery: A view from the literature
Chatbots – “systems that are able to converse and interact with human users using spoken, written, and visual languages” 19 – are increasingly being used in the field of mental health. A scoping review conducted by Abd-alrazaq et al. (2020) identified 53 studies that involved 41 different chatbots. In 17 studies, chatbots were used for therapeutic purposes; of these, 10 were based on cognitive behavioural therapy and just under half were focused on people with depression and anxiety. 19
Recovery has an ambiguous presence within this literature. Whilst there is minimal overt mention of recovery, there is a clear focus on efficacy and symptom reduction, with Fulmer et al., 22 for example, examining scores on the Patient Health Questionnaire (PHQ-9), Generalized Anxiety Disorder Scale (GAD-7), and Positive and Negative Affect Schedule (PANAS) at baseline and 2 to 4 weeks later. Other studies investigate differences in effect index across groups and time, 23 or report on the development of a chatbot to deliver support for caregiving professionals, again measured by the PHQ-9 and GAD-7. 24 Recovery, then, appears to be tacitly situated as a (clinical) outcome and the amelioration of symptoms.
With that said, there are areas of contradiction, contention and debate within the literature that point towards emerging social, political and technological tensions around the conceptualization of recovery. Several of these are made apparent in the following clinical abstract:
Struggles over expertise manifest in the literature where conversational agents are situated as offering low-cost, on-demand, highly adaptable self-help which will help address a growing global challenge. However, at one and the same time, it is made clear that conversational agents should supplement rather than replace traditional therapeutic options. In many respects this is a variant on an old theme. As telepsychiatry developed it was suggested that this could increase access. 17 However, clinicians were also presented as being resistant to its advancement because of the deleterious effect on the doctor-patient relationship deemed central to therapy. For example, and as Pickersgill notes, 17 in his 1952 address as President of the American Psychiatric Association, Leo Bartemeier expressed the need to be cautious about the rise of technological approaches to clinical practice and alert to the possible impact on clinical and therapeutic interpersonal relationships. The dual emphasis on reach and tradition creates a complex “biomedical virtue” – which “refers to the (profession-defined) praxis of goodness within the laboratory and the clinic” 17 – where telepsychiatry and direct clinical care need to be carefully situated as extending clinical care rather than replacing it. Pickersgill 17 uses the example of Chill Panda, an app that captures biometric data – such as heart rate and blood flow – and suggests playful tasks to suit the user’s current state of mind. Users are encouraged to learn how to manage their stress and feel better. However, this is offered with provisos and “clinical expertise thus remains salient, even as (potential) patients are encouraged to contribute to their care”. 17
Within the literature on chatbots, these tensions between tradition and reach intersect with ambiguities surrounding artificial intelligence. Vaidyam et al. 15 define chatbots as “digital tools existing either as hardware (such as an Amazon Echo running the Alexa digital assistant software) or software (such as Google Assistant running on Android devices or Siri running on Apple devices) that use machine learning and artificial intelligence methods to mimic humanlike behaviours and provide a task-oriented framework with evolving dialogue able to participate in conversation”. Seen in this way, AI becomes an essential part of chatbots, and conversational agents are situated as part of the general AI revolution said to be “transforming mental health”. 28 The Alan Turing Institute is also attempting to drive forward research into AI-based precision mental health. There is also a move towards a language of “prediction”. A 2018 paper, 30 for example, suggested that AI could be used to identify how well people at risk of psychosis will function in the future. The news forum “Scimex” reported this under the banner “AI could be used to predict mental health recovery”. Yet, the majority of the chatbots identified in a review (92.5%) depended on decision trees or predefined rules to generate their responses; only 7.5% used machine learning approaches. 19 The nature of artificial intelligence within the chatbots remains ambiguous and it is therefore unclear whether struggles over expertise involve users gaining more control and autonomy, or the chatbot itself becoming the dominant agent: further individualising “treatment” and ultimately alienating both service users and clinicians from it.
There are also temporal ambiguities surrounding the ways in which AI is discussed, including a mixing of tenses between what AI could do and what AI actually does presently. This temporal ambiguity complicates the role being claimed for AI in recovery whilst also complicating what recovery is. Temporality has long been central to discussion of recovery. For example, literature framed by the biomedical/disease model suggests that disturbances in the experience of time are a commonly reported feature of mental disorders. Also working within a biomedical/disease model, Eugène Minkowski noted that for those diagnosed with schizophrenia “experienced time is altered in its flow, being experienced as frozen, immobilized, without ‘élan vital’”. Experience of time is also affected by spatialisation – that is, time is felt as divided into juxtaposed elements – and temporal fragmentation: “With the fracturing of time flow, we observe an itemization of now-movements in consciousness so that each now-movement in a person’s stream of consciousness will be experienced as detached from the previous one and from the following, hence as extraneous to one’s stream of consciousness and sense of selfhood.” 31 Time continued to be a methodological tool and pathological focus of clinical psychiatry and psychology throughout the first half of the 20th century – and Minkowski played a key role in shaping subsequent understandings of abnormal time in depression and schizophrenia. 32
Amongst other things, this literature also notes a “subjective decrease in the experienced velocity of the flow of time in depressive disorders” 29 and that “depressed patients” showed “general changes in their experience of the passage of time. The most common was that of time having slowed down.” 29 Perhaps in contradiction, Vogel et al. 29 also found some indication of an increase in the experienced velocity of the passage of time. As noted further below, it therefore becomes important to consider how chatbots frame and portray the experience of time. Words like “relapse”, “recurrence”, “deterioration” and “periods of improvement” remain common within biomedical discussions of depression. As Fava and Visani 33 suggest, for example: “clinicians working with depressed patients are often confronted with the unsatisfactory degree of remission that current therapeutic strategies yield, and with the vexing problems of relapse and recurrence”. Yet, as illustrated in the abstract above, the literature on chatbots makes reference to “short and long term” positive outcomes. In doing so, it appears to move us away from expressing and experiencing recovery through a language of relapse, recurrence and deterioration. Chronicity remains in uncertain ways and the language of “outcomes” sidesteps, rather than resolves, tensions that inhabit notions of recovery.
Research questions
In essence, whilst the academic literature is clear in situating chatbots as an emerging hope for treatment, therapy and training, there is both (i) little overt mention of recovery and, where it does occur, it appears rendered as “outcome”; and (ii) potential for significant impacts on understandings of recovery, particularly in connection with expertise, technology and temporality. Within this paper we build on this by looking in more depth at the sociotechnical assemblage of the chatbot. Technologies are sociotechnical objects that have been co-produced in a particular historical and cultural context.34,35 Values and politics are incorporated into the design of technologies and they are likely coded with ideologies that reflect and refract contemporary knowledge claims to mental health and recovery. At one and the same time, “technologies have both
Reflecting the discussion above, the paper asks: how is recovery situated by chatbots? What models are being used and what models are excluded? How are chatbot users situated in terms of “need”? How are expertise and the capacities of the technology framed and discussed? How is temporality situated?
Methods
This paper focuses on the ways official online materials explain and make sense of chatbots, their imagined functionality and value for (potential) users. 16 Our interest is in how chatbots are positioned and the claims being made for them, rather than the experience of using the chatbot in itself. As apps are interpreted by surrounding materials that give them meaning, 36 empirical data reported on in this paper is comprised of publicly available online materials. Materials are sourced from official company pages and include blog posts, promotional videos, marketing materials and user guides.
As noted, chatbots have been developed for a range of tasks including therapy, training, screening and improving social skills. It has also been proposed that voice signal extraction can be used to detect mental illness. 37 Within the present study, we adopted a purposive, homogeneous sampling strategy 38 and focused on those offering forms of therapy. Sampling proceeded as follows. First, a decision was made to explore those apps which are situated as offering assistance for depression/anxiety, because reviews suggest that this is the most common focus of chatbots. 19 Second, three chatbots for mental health were selected for analysis: Woebot, Wysa and Tess. These are three of the most popular chatbots, appearing, for example, at the top of Clarke’s 39 list of “AI Bots and Apps for Depression”. Woebot launched in the summer of 2017 and is “designed to offer convenient care to those struggling with depression by mimicking human conversation, offering self-help related guidance and companionship to its users”. Woebot uses chat-based functions to mimic online human interactions. It is built around principles from Cognitive Behavioural Therapy (CBT) and uses natural language processing to “get to know” users over time and “more accurately detect and meet your emotional needs at a given time, offering personalized resources, self-help guidance, information and support related to your concerns”. It won the Google Play award for “Standout Well-being App” in 2019. Wysa is reported to have around 1,500,000 users. Like Woebot, Wysa and Tess incorporate principles of CBT. However, Wysa is also built around dialectical behavioural therapy, meditation practices and motivational interviewing. Tess moves away from app-based approaches and uses text-based messaging to track user goals and provide guidance and interventions. 39
These chatbots are developed outside of mainstream mental health services and can be expected to be driven, to some extent, by commercial interests and the desires of potential customers, including mass healthcare purchasers such as, in the UK, the National Health Service. Yet, at one and the same time, whilst it might be assumed that chatbots will draw from a biomedical framework because of this, developers also stress the important role that users play in their evolution. For example, the Wysa website states that “Over 60 psychologists and 15,000 users have provided specific inputs to shape how Wysa helps them. 65 users have volunteered to help us translate Wysa into their language.” Materials analysed are thus formulated within a complex space that includes developers, psychologists, “users”, consumer concerns and healthcare commissioning rules. Materials were systematically collected. For each chatbot, we began with the official company webpage. These pages were downloaded, along with anything that was directly linked. For example, the Woebot website lists two scientific references on its “how it works” page. These were also downloaded and analysed. The Tess website also includes a series of references as well as embedded videos from both clinicians and users. As Berg 16 notes in his analysis of self-tracking devices, “Since all of these materials are part of the companies’ marketing strategies and social media presence, they are regarded as not only official presentations of the products, but also as naturally occurring empirical materials.” Online Appendix 1 lists the sources analysed and accompanying URLs. All sources were analysed between August 2019 and February 2020, and links are accurate as of then.
Analysis proceeded in three steps. First, materials were examined for explicit reference to “recovery”. Where the word recovery was present, we reflected on the context, the framing and the discursive resources being deployed. Second, a wider reading was conducted, foreshadowed by the ideas and questions from the critical literature review, with materials read sensitised to possible struggles over expertise, technology and temporality that implicitly invoked recovery. Finally, we drew these two sets of observations into the set of cross-cutting themes presented below. This discussion is not an exhaustive set of findings but focuses on sets of tensions we consider particularly pertinent to the development of future notions of recovery.
Findings
Across Woebot, Wysa and Tess there is largely an absence of overt reference to “recovery”. Exceptions include the statement on the Tess website that “92% of people moved towards recovery” (Tess_2) and Woebot’s claim that “struggle is actually a necessary part of recovery” (Woebot_18). There is no further elaboration on the latter statement and it is instructive that it appears on a page titled “outcomes”. Across the three chatbots, outcomes dominate as an idea. The “science” page of the Woebot website, for example, has very little information beyond a graph showing “reduction in depressive symptoms” following use of Woebot (Woebot_2). Readers can follow a link to a paper by Fitzpatrick et al., 40 which reports that “Intent-to-treat univariate analysis of covariance revealed a significant group difference on depression such that those in the Woebot group significantly reduced their symptoms of depression over the study period as measured by the PHQ-9 (F=6.47; …”.
There is a prima facie case, then, for suggesting that the conversational agents are invoking an idea of recovery that involves a smooth and linear reduction in symptoms over time, and that locates the agency for bringing about that recovery in the app. Debates surrounding diagnosis are made redundant by claims that the chatbots are “agnostic when it comes to diagnosis” (Woebot_18), and very little critical attention is paid to who is deciding which outcomes are core, or to the standard questionnaires and measurement devices employed in the clinic to capture these outcomes. Screening tools arguably
Further analysis does suggest complex and competing ideas which coexist with this narrow view of recovery. In making these apparent, we firstly discuss three themes that map on to the struggles over practice, expertise and evidence 1 foreshadowed by the literature review: “troubled clinical responsibility”, “extended virtue of (technological) self-care” and “altered ontologies and psychopathologies of time”. Within the concluding discussion we explicitly connect these back to recovery.
Troubled clinical responsibility
A video on the Tess site focuses on Lloyd Werk MD, MPH; a person deliberately identified through their professional status and medical qualifications (Tess_9). Dr Werk outlines how the clinical encounter usually involves face-to-face meetings to engage patients and identify goals. Following this, a coach may meet with them to monitor progress. The problem with this model, according to Dr Werk, is the wait between meetings and an extensive wait list. Further to this, it misses much of the therapeutic work that happens at home. Dr Werk emphasises the traditional nature of the therapy by noting how Tess enables the patient to stay in touch and hold many of the same conversations that a coach and therapist would have with the client. Dr Werk notes how he gets the analytics for his patients and can reflect on the frequency and type of conversation at the next face-to-face consultation. Somewhat similarly, Woebot emphasises reach and enhancement of access when its creators stress that they are “on a mission to make high-quality tools for mental health radically accessible to everyone” (Woebot_3). Woebot also claims it “will never replace therapy or therapists, and it’s not trying to” but instead they “will exist side-by-side” (Woebot_15).
Reflecting Pickersgill’s 17 discussions of biomedical virtue and clinical responsibility, these materials situate chatbots as extending clinical care but not replacing it. Traditional forms of clinical responsibility are upheld, even where the relationship between therapy and technologies operates in complex ways. For example, Wysa documentation makes clear links to AI and outlines how the chatbot was designed by a wide group of people – which includes therapists, coaches, users and “AI folk”. In doing so, Wysa also appears to be situated as individual, personal, and different from traditional forms of therapy. It is about your own
Technology is also discussed in a way that reaffirms the salience of the clinician and clinical encounter. The official Woebot Facebook page hashtags “AI” in numerous posts (Woebot_22). Technical magazines describe Woebot as “An AI-based therapy chatbot to help with depression and anxiety” 42 and Andrew Ng announced his appointment to the Woebot board of directors under the title “Woebot: AI for mental health”. 43 Woebot also announced on the 26th September that it had been named a finalist in the category of “Best application of AI”. Yet, on the official pages little note is made of the AI behind Woebot (although see Woebot_19). Perhaps reflecting the historical concern that expert-level cognition is distinctly human,44,45 there is also the claim that “chatbots are not about AI, they’re about a more human interface” (Woebot_12). Altering the way in which the technology is framed ensures that the main “agents” of expertise included in the Woebot materials are clinicians, and the user remains absent as an agent of expertise.
This is not to say that traditional forms of clinical responsibility remain entirely untroubled. For example, across the materials analysed, Woebot is situated as being different from traditional therapy. In an article linked to the company twitter feed (Woebot_22), Woebot is described as “fundamentally different from any other form of therapy”: “The Woebot experience doesn't map onto what we know to be a human-to-computer relationship, and it doesn't map onto what we know to be a human-to-human relationship either … It seems to be something in the middle”. 46
At one and the same time, discursive resources are deployed to position Woebot as the same as traditional therapy. Official pages state that “the popular opinion about therapy is that it holds a kind of special magic that can only be delivered by individuals who are highly trained in this mysterious art form” (Woebot_15). Instead, “the truth is that modern approaches to mental health revolve around practical information gathering and problem solving” (Woebot_15) – something which, although not explicitly stated, computers are renowned for. Woebot’s traditional forms of expertise are continually championed, with other pages stressing that the creators are “psychologists who built programs in the clinic” and “worked at Stanford for over 10 years” (Woebot_2). They ask, “Don’t you need a therapist?”; in response, they claim that there are “20+ years of rigorous research to show that DIY CBT works” and that CBT delivered via the internet (and even video games) can be as effective as therapist-delivered CBT for both anxiety and depression (Woebot_2). At the same time as being framed as both “different” from, and the “same as”, traditional therapy, official pages further appear to position Woebot as “better” than traditional therapy. For example, the landing page of the company website links to an article in Wired magazine: “Woebot’s creators believe it has the potential to actually
Perhaps ironically, the most significant threat to traditional virtues of clinical responsibility is a continued, uncritical, emphasis on Cognitive Behavioural Therapy (CBT). CBT – a dominant mode of treatment used in contemporary mental health practice 48 – is almost hegemonic in its status: “our Enlightenment heritage calls for a rationalist ordering of the therapies, in accordance with narrow and pre-constructed values that correspond with those of society's most powerful institutions. It is in this context that we should understand CBT's overwhelming emergence as the therapy of choice. The risk of its institutional success is the establishment and legitimisation of a therapeutic hegemony, and the gradual diminishment of a once rich landscape of therapeutic possibilities”. 49 In stressing the idea that CBT delivered via the internet (and even video games) can be as effective as therapist-delivered CBT, Woebot plays down the innovative nature of chatbots and instead asserts continuity with the established authority of CBT. In doing so, it problematises the “expertise” of any human actor whilst implicitly borrowing CBT’s embedded view of recovery.
Extended virtue of (technological) self-care
The troubling of clinical responsibility – or not – operates alongside an extended virtue of self-care. As we argue in this section, this becomes most apparent when we examine the ways in which the clinical and everyday are situated. Woebot, Wysa and Tess normalise the need for conversational agents as everyday objects. For Wysa, “sometimes we all get stuck inside our heads” (Wysa_1). For Woebot, “Everybody could use someone like me” (Woebot_18); whereas Tess states “When you say something in a certain way, a good friend will know how you actually feel, it’s the same thing with Tess” (Tess_16/Tess_14) and “When I feel stuck, Tess helps me look at the situation in a different way” (Tess_14).
Attempts at everyday normalisation can be seen elsewhere in the materials analysed. For example, Tess is described as an “entirely new form of care” which is affordable and scalable (Tess_10). The site states that most people prefer chatting with Tess over “traditional therapy” (Tess_13). Under the “for individuals” page, the Tess website contains a short video testimonial from a female user (Tess_14). The user narrates how she was looking for an app that would help her manage stress and build self-awareness. Tess is compared to a Tamagotchi toy from her childhood. Popular in the late 1990s, the Tamagotchi is a handheld digital pet: players are required to care for the pet and outcomes depend on their actions. Toys invite children to rehearse certain kinds of orientations to the world, and the Tamagotchi invites children into “an ongoing movement between two spaces, the ‘actual’ and the ‘virtual’, a computer-generated space that technologically enlarges the actual living space of the children.” 50 At one and the same time, both the technology and the movement between spaces required by Tess are normalised and naturalised as a caring companion – similar to a friend or a childhood toy. This normalisation runs contrary to the grand promissory claims for AI and chatbots seen in the literature review above. It also suggests a different orientation to recovery from the one invoked by reference to CBT. Ideas of recovery are almost side-lined as the focus remains on the present, and chatbots become the care that we all need to deal with our everyday emotions.
Altered ontologies and psychopathologies of time
As noted above, clinical discourses are included across the materials. Woebot, Wysa and Tess, for example, highlight research on outcomes such as PHQ-9 scores and depressive symptoms. However, these clinical discourses are largely left in silos, separate from the everyday normalisation also noted above. This leaves the ontology of mental illness troubled. Labels such as anxiety and depression become ambiguous, simultaneously situated – cf. Szasz’s myth of mental illness 51 – as general, pervasive problems of living and as “real” clinical concerns. This ontological troubling is accompanied by ambiguity surrounding medicalisation. For example, when the user on the Tess website references bereavement, they call upon something that for many years was used as an exclusion in the diagnosis of major depression. 52 When DSM-5 removed this exclusion, critics were concerned that this would “medicalize” ordinary grief and encourage over-prescription of antidepressants. 52 At the same time as troubling the ontology of mental illness, then, chatbots have the potential to extend its medicalisation where they situate mental illness as a “real” clinical concern, but one that is part of an epidemic and general modern malaise.
Temporality plays a key role in these antagonisms. The materials analysed contain little overt mention of relapse, recurrence, deterioration or improvement. There is also no clear statement as to when one might consider stopping using the chatbot. Within the user video embedded within the Tess website, the main protagonist discusses how she does not visit the chatbot as much now, but it is good to know that it is still there when/if she does. This narrative of continued use further illustrates the extension of the virtue of self-care. Where temporality does appear, it is in terms of availability: others might not be awake when you need them.
Temporality creates its own tensions across the three chatbots. On the one hand, there is an emphasis on outcomes, with Tess, for example, making clear that chatting with Tess leads to reduced symptoms of depression (–28%) and anxiety (–18%). Yet, ideas of temporality are most prominent away from these discussions of outcomes and are focused more on the timing of delivery rather than anticipating the upshot of the intervention. As McWade 1 notes, a key aspect of the socio-political struggle which surrounds recovery is temporal. In the section below we turn all this back to the overall research question and ask: to what extent are chatbots shaped by, and shaping, the concept of mental health recovery?
Concluding thoughts – What then of recovery?
Following Williams et al., 18 firm conclusions may be unwise, given the embryonic nature of these developments. However, a number of tentative conclusions may be drawn. For one, multiple versions of recovery appear to be approximated within the materials. On the one hand, there appears to be a discursive move towards a clinically defined recovery: “Recovery from serious mental illnesses involves the amelioration of symptoms and the person’s returning to a healthy state following onset of the illness. This definition is based on explicit criteria of levels of signs, symptoms, and deficits associated with the illness and identifies a point at which remission may be said to have occurred. This definition thus has many advantages from clinical and research perspectives, as it is clear, reliable, and relatively easy to define, measure, and link to dysfunctions or wellbeing in other areas of life. People who enjoy this sense of full recovery could be considered to have recovered from psychosis in the same way that other people may recover from an infection, a broken leg, or, in the case of recovery over a longer period of time, asthma”. 26
Notwithstanding this, a third form of recovery is also made apparent across the contradictions and areas of debate within the materials. Both the recovery
“Personal recovery” is increasingly individualised, and the experiences of the user may become further alienated from their own recovery. When we talk of recovery from versus recovery in, scientific versus consumer models of recovery, or clinical versus personal versus social recovery, 56 we are often talking about whether it is clinical or personal expertise which is brought to the fore. The conversational agents explored here potentially trouble this dichotomy by situating biomedical virtue within Cognitive Behavioural Therapy itself rather than in the individual user or therapist. It may be more appropriate, then, to suggest that chatbots are being shaped by, and shaping, (im)personal recovery imperatives.
Further research is needed here. The limited scope of the current study means there remains a need to delve deeper into the positionings embedded within use of the apps. “Walkthroughs” offer possibilities for further exploration of digital applications’ “sociocultural representations as much as its technological features or data outputs, which also have social and cultural influences.” 57 The “walkthrough” method requires researchers to go through individual chatbots paying attention to such things as (i) the app’s vision; (ii) its operating model; (iii) its governance; (iv) mediator characteristics, such as how the app guides users through activities via menus and buttons; (v) registration and entry; and (vi) app suspension and closure. 57 Such approaches can provide data on intended purpose, embedded cultural meanings, and implied ideal users. 57 Applying this method to a range of chatbots can shed light on how different design contexts impact on the ways that recovery is embedded within these technologies. Capturing stakeholder meanings and motivations is also important moving forward.
As well as further exploring how chatbots are shaped by, and shaping, recovery and asking what models are excluded and what are the processes of truth-making, future research needs to consider how social scientists might work with key stakeholders to challenge enduring hegemonic forms. It is also important that research engages with the question of whether we
Supplemental Material
Supplemental material (sj-pdf-1-dhj-10.1177_2055207620966170) for “Conversational agents and the making of mental health recovery” by Robert Meadows, Christine Hine and Eleanor Suddaby in Digital Health.
Footnotes
Contributorship
RM and CH conceived the study. ES and RM conducted analysis of materials. All authors conducted literature reviews. RM wrote the first draft of the manuscript. All authors reviewed and edited the manuscript and approved the final version of the manuscript.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by an internal grant from the University of Surrey. Meadows and Hine are also currently supported by a BA/Leverhulme Small Research Grant (SRG1920\100730).
Guarantor
RM.
Peer Review
Ewen Speed, University of Essex and Ian Tucker, University of East London have reviewed this manuscript.
References
