Abstract
Digital phenotyping for mental health is an emerging practice that uses digital data, derived from mobile applications, wearable technologies and digital sensors, to measure, track and predict the mental health of an individual. It is a growing, but as yet underexamined, field. As we will show, the rapid growth of digital phenotyping for mental health raises crucial questions about the values that underpin and are reinforced by this technology, as well as about for whom it may become valuable. In this commentary, we explore these questions by focusing on the construction of value across two interrelated domains: user experience and epistemologies on the one hand, and issues of data and ownership on the other. In doing so, we demonstrate the need for a deeper ethical and epistemological engagement with the value assumptions that underpin the promise of digital phenotyping for mental health.
This article is a part of special theme on Digital Phenotyping. To see a full list of all articles in this special theme, please click here: https://journals.sagepub.com/page/bds/collections/digitalphenotyping
Introduction
Digital phenotyping for mental health represents a growing and heterogeneous field, loosely connected around the ambition of harnessing the data generated by digital technologies to both predict and monitor mental ill health (see e.g. Torous et al., 2016). Digital phenotyping often draws on smartphone-based data which makes it easier to collect self-reports from individuals so that “[…] the act of measurement no longer needs to be confined to research laboratories but instead can be carried out in naturalistic settings in situ […]” (Torous et al., 2016, not paginated). Further, it is often suggested that using data from sensors in smartphones also allows researchers to access “objective measures of behavior” (Torous et al., 2017: 1). The hope of many working in the field is to use smartphones to both deliver more frequent self-assessments and to combine this with data that provide information about the individual and their engagements with their lifeworld. This commentary explores for whom digital phenotyping is constructed as valuable; we wish to interrogate what that construction of value hides or overlooks, enacts or reinforces.
Digital phenotyping for mental health is a rapidly developing field, which is only slowly coming under epistemological, methodological and ethical scrutiny by social scientists and philosophers (e.g. Birk and Samuel, 2020; Cosgrove et al., 2020). In beginning to map out the values embedded in such a broad and emerging field, this brief commentary will be similarly broad. But it is also important to note that there is considerable diversity within digital phenotyping and, indeed, within the wider field of digital psychiatry (on digital psychiatry, see Pickersgill, 2019); both contain a range of diverging goals and promises. Coming from the disciplines of ethics and social science and with direct experience in digital phenotyping projects (see e.g. Lucivero & Hallowell, this issue), in this commentary we critically analyse digital phenotyping's promises. We do this by articulating the often-hidden assumptions embedded in the promises of digital phenotyping, and by making explicit their implications.
Digital phenotyping as a promissory device
A recent report on “The digital futures of mental healthcare and its workforce” (Foley and Woollard, 2019) stated that “Technology seems likely to change our understanding of mental health […] a new understanding of mental health/illness aided by digital phenotyping, genotyping and neuro-imaging will challenge current diagnoses and enable a more personalised approach to treatment.” (Foley and Woollard, 2019: 20). Indeed, the report proposed that in a timescale of “3–10+ years”, machine learning will solve a host of problems within the field of mental health, from providing support for clinicians’ decision-making about diagnosis to “phenotyping individuals”, “risk/event prediction” and “isolating digital biomarkers” (Foley and Woollard, 2019: 18). This statement demonstrates the high hopes that digital technologies, by identifying digital biomarkers via digital phenotyping, will transform how those in the throes of mental ill-health are identified and cared for. This foregrounding of the “potential this technology offers to healthcare” (Spinazze et al., 2019: 2) frames the value of digital phenotyping as resting on an unproblematic and linear trajectory from research to clinical practice, constructed around the goal of bringing benefit to those with mental ill-health. This sets up digital phenotyping as a promissory device carrying the ambition of transforming the landscape of mental health; it promises future value for clinicians and, by extension, for service users.
Promises create excitement about the potentials of digital technologies, and act performatively to accrue further funds and resources, establish networks, and drive innovation (Brown, 2003). Promises create a sociotechnical imaginary of “social life and social order” which describe and prescribe attainable futures (Jasanoff and Kim, 2009: 120). These imaginaries are almost always imbued with implicit understandings of what is good or desirable in the social world (Jasanoff and Kim, 2009). For example, in a review of digital phenotyping studies in college students in the United States, Melcher, Hays and Torous argue that the “ultimate goal of digital phenotyping with college students is not only to understand why students experience these mental health problems, but also offer more personalised and responsive care” (Melcher et al., 2020: 165).
An imagined future in which digital phenotyping is used for the treatment of mental health problems arguably frames this technology as a moral good that will unequivocally bring benefits to those with mental ill-health. Such an imaginary shapes the trajectory and legitimacy of digital phenotyping as valued whilst at the same time serving to obscure the social values on which it rests. As Pickersgill has argued, the promises and “biomedical virtues” that become enacted around these technologies are a crucial part of how the members of new fields, such as digital phenotyping for mental health, “act, in effect, to talk these [fields] into existence” (Pickersgill, 2019: 25–26). This necessitates critical reflection on the regimes of value embedded in digital phenotyping, as well as their relationship with dynamics of power and knowledge.
The value of user-perspectives in digital mental health
A value embedded in digital phenotyping which has potentially far-reaching implications for social life is its quantification of mental distress through a privileging of objective data (Cosgrove et al., 2020). The promise of digital phenotyping may appear to be person-centred: it offers the potential of care that is tailored to the specificities of a person's mood or distress through minute-by-minute interactions of self and machine. Yet, in potentially drawing objective measurement into conflict with service users’ subjective reports of experience, digital phenotyping's implicit promise to centre the person comes to look flimsier. Its return to objectivity revives wider critiques of power in and of psychiatry. It lays bare the paradigms of knowledge production on which those dynamics have traditionally rested and sounds a warning bell about how digital phenotyping risks moving forwards without necessarily moving beyond.
Against the background of the fraught history of institutionalised power in psychiatry, survivor-led social movements have called for the “experiential knowledge” (Beresford, 2003) of service users to be valued alongside other sources of knowledge in mental healthcare. If power is that which “may potentially open up or close off opportunities for individuals or social groups” (Tew, 2002: 165), the foregrounding of objectivity over experience risks resurrecting and re-entrenching past power differentials under the guise of improved care. This raises conceptual and empirical questions around knowledge production and ownership: Can digital phenotyping fit into the wider landscape of mental healthcare, and its growing recognition of the value of subjective meaning to person-centred care and recovery? What part would service users play in this new, or perhaps renewed, quantitative imaginary; in the conflict between experiential and machinic knowledge, which would be listened to or acted on, for example? And, what implications might this have for service users’ autonomy, such as when faced with enforced treatment?
This potential deprivileging of service user experience and autonomy embedded in digital phenotyping shows this technology to be part of a wider landscape of what Heidi Rimke has termed “psychocentrism” (Rimke, 2016). Psychocentrism emphasizes individualised understandings of mental distress, framing such distress as both innate to a person and inherently pathological. The micro-monitoring of daily life that digital phenotyping would make possible risks further entrenching such an imagining of mental distress, and emphasizing the micro-management of symptoms. This could disconnect those symptoms from the broader portrait of a person's life, thereby overlooking the social suffering (Kleinman et al., 1997) and structural inequalities that may underlie mental ill-health. This has implications at both an individual and societal level. Given that the history of psychiatry has been profoundly enmeshed with the drawing of societal boundaries between ‘normal’ and ‘pathological’ (Canguilhem, 1978), there is a need to critique how each of these constructs is both implicit in, and re-enacted by, digital phenotyping.
Thus, whilst the data afforded by digital phenotyping might be “highly insightful and easily retrievable” (Spinazze et al., 2019: 1), it is crucial to consider the value constructs they are based on. This seemingly new technology risks constructing and reinforcing social difference along old templates.
The value of data
Furthermore, together with the assumption of objectivity, the value of digital phenotyping also resides in its collection of critical new data for mental health research. Digital data are an increasing source of power and wealth (Prainsack, 2020), and the involvement of Big Tech corporations in the health ecosystem is a manifestation of the monetary value of digital health data (Sharon, 2016). Tech companies provide the infrastructure for digital phenotyping: sensors for data collection, software for data processing, and platforms for data storage, management and analysis. At the same time, it has been shown that the public feels uncomfortable about the involvement of tech corporations in healthcare (Powles and Hodson, 2017) and about private companies’ access to health data (Wellcome Trust, 2016). These worries concern third parties’ (unauthorised) access to sensitive data and the malevolent uses to which these data can be put, resulting in discrimination against individuals (for example, by employers or insurance companies) and stigmatisation of specific groups. Another concern is that the economic value of data is distributed unfairly: while service users and citizens take up the labour involved in data collection and become exposed by allowing access to their personal information, big tech actors make higher gains than they return in terms of products or services. Both concerns share the understanding of data as assets, bearing an economic value that needs to be acknowledged, regulated and fairly distributed across actors rather than capitalised on and controlled only by some.
While some activists and scholars advocate that patients and citizens should own their data, and propose models for commercial transactions (Kish and Topol, 2015), others criticize such a position and argue that a private property model does not apply to data (Purtova, 2017). Acknowledging that people have a relationship with their data does not imply an ownership relationship: for example, data belong to people in the sense that their data refer to their identity and who they are (Ballantyne, 2020). People may value data because they have put labour into the process of data collection and interpretation. This raises a question regarding how to ensure that people's relationship to their data is acknowledged, respected and not exploited by private actors.
This question becomes even more compelling in the context of digital phenotyping for mental health, where populations are particularly vulnerable and power imbalances are already so embedded in the process of knowledge production. In the previous section, we questioned the value that digital phenotyping's promises attribute to the objectivity of digital biomarkers and highlighted that in doing so they fail to acknowledge the value of users’ perspectives in the production of knowledge. In a similar vein, here we warn that, as these promises attribute value to data collection, they need to be careful to acknowledge users’ labour and their relationships to the data. Moreover, in a context wherein data are assets with a monetary value, it must be asked how the active role of users of digital phenotyping should be acknowledged and compensated fairly.
At a time when the value of data has entered our political economy, transparency is pivotal to addressing the unbalanced power relationship between big tech companies and citizens. The role of private companies, the ways in which they access data, the level of protection afforded to participants’ personal data, and their value models are all aspects that need to be made available to participants. Providing this information in a transparent way is a first step towards avoiding research participants/future patients being disenfranchised (Ballantyne, 2020) and/or exploited. Further steps concern the governance structures that must be in place to ensure a fair balance between the commercial benefits of private companies and individual costs (see Prainsack, 2017).
Valuing digital phenotyping?
We have sought to draw out some of the tensions around value that pertain to digital phenotyping in terms of its current use and also in relation to the ‘promise’ of its potential for mental healthcare and research. In doing so, we have elucidated how the ‘value’ of this technology is neither given nor intrinsic, but socially constructed – produced in and through social practices and embedded in social imaginaries (Datta Burton et al., 2021) that are historically rooted.
Digital phenotyping for mental health is constructed as valuable in part because of its promised ability to screen and intervene and to provide “digital biomarkers” where the search for actual biomarkers has failed (Cosgrove et al., 2020). But it is exactly the common focus on digital phenotyping as valuable for clinical research and mental healthcare that simultaneously obscures other questions of value. This focus renders it substantially harder to question the value of quantification that is foregrounded, and, indeed, to discuss the epistemologies implicit therein. This focus also makes it harder to consider the epistemological and economic value of data, to address questions of the fair distribution of benefits and burdens, and to ensure that they are not disproportionately distributed so as to reproduce existing power asymmetries. Taking value as given arguably limits an imagining of value to the singular; it leads us to see value as only embedded in what the technology can produce and overlooks the values on which it rests and which it re-enacts. Such a focus thereby obscures the multi-dimensional aspect of value, including, for example, individual, social, clinical, epistemological, ethical and economic values.
We recognize that our commentary has drawn on well-known critiques of psychiatry and that this risks obscuring the possibilities inherent in new, digital developments. Perhaps digital phenotyping can bring forth a truly new, digitalized, automated and beneficial epoch for mental health. If this is the ambition, it may be beneficial to follow Datta Burton and colleagues (2021), who have argued that a better way to construct questions of value would be to ‘flip them around’: rather than asking what value can come from digital phenotyping, a more relevant question asks what is valuable in terms of delivering better mental health provisions for service users, how to get there, and what role technology can play. This necessitates considering questions such as: whose perspectives are being included? Who is this technology for? Might it be different? This would open up the space to develop new uses and values of diverse forms of digital phenotyping.
Footnotes
Author's note
Gabrielle Samuel, Wellcome Centre for Human Genetics, University of Oxford, UK.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Rasmus Birk received funding from the Independent Research Fund Denmark, Grant Number 8023-00013B.
