Abstract
Presented as providing cost-, time- and labour-effective tools for the (self)management of health, health apps are often celebrated as beneficial to all. However, their negative effects – commodification of user data and infringement on privacy – are rarely addressed. This article focuses on one particularly troubling aspect: the difficulty of opting out of data sharing and aggregation during app use or after unsubscribing/uninstalling the app. Working in the context of the new European General Data Protection Regulation and its implementation in the UK health services, our analysis reveals the discrepancy between the information presented to users, and the apps’ actual handling of user data. We also point to the fundamental tension in the digitisation of health, between the neoliberal model where both health and data concerns are viewed as an individual’s responsibility, and the digital-capitalist model, which puts forward, and capitalises on, collective (‘Big’) data. Pulled between the ‘biopolitics of the self’ and the ‘biopolitics of the population’ (concepts coined by Btihaj Ajana), opting out of health datafication therefore cannot be resolved as a matter of individual right alone. The article offers two contributions. Methodologically, we present a toolkit for a multi-level assessment of apps from the perspective of opting out, which can be adapted and used in future research. Conceptually, the article brings together critical digital health scholarship with the perspective of data justice, offering a new approach to health apps, which focuses on the possibility of opting out.
Introduction
‘There’s an App for that!’ A catchy Internet phrase of the mid-2000s, initiated and subsequently trademarked by Apple, captures what has become an everyday reality for most digital economies. With the widespread use of smartphones, we are increasingly reliant on apps that are either pre-loaded or can be voluntarily/casually and mandatorily/professionally downloaded, to manage all areas of our consumer, work, political and personal lives. The sheer volume, range, speed and breadth of different types of apps available in the market today demonstrate how this all-encompassing ‘appisation’ is now an inevitable part of contemporary digital life.1–3 It is impossible to talk about digital health today without referring to appisation (or ‘mHealth’/‘mobile Health’, as the process is more commonly described in medical circles). According to Statista, ‘The health and medical industry have been named as one of the top three fields to accelerate the growth of mobile devices’.4 At present, countless apps are being developed and offered to individual smartphone users to manage healthy lifestyles or to support specific medical and health conditions; the apps are also extensively integrated into public and private health service provisions.5,6 However, the invisible yet detrimental by-products of health appisation – such as infringements on privacy, data monetisation and long-term digital profiling – are rarely understood by the medical practitioners and health service providers advocating for the apps. Nor are these always clear to the users, lured by the speed and convenience of ‘on demand healthcare’, usually advertised as an ‘affordable and accessible service at one’s fingertips’ (to use the words of Babylon Health, one of the leading health apps).7 Health appisation is thus presented as an unquestionably positive process, devoid of dangers and beneficial to all – a celebratory framing that both conceals and supports the apps’ data economy of ‘surveillance capitalism’8–11 that relies on users’ willing but often unknowing participation and continuous personal ‘contribution’ of their data.
While health appisation may indeed have revolutionised some aspects of healthcare, its broader and often perturbing socio-political effects are yet to be explored in the academic domain of digital health. Deborah Lupton5 recently argued that most studies on health apps to date come from medical or public health literature, which focuses primarily on an instrumental approach to apps’ effectiveness or medical validity. What is yet to be interrogated are ‘[t]he wider social, cultural and political roles played by health and medical apps as part of contemporary healthcare and public health practice’5 (p. 607) – a task she sets out for ‘critical digital health studies’. In this article, we want to push Lupton’s idea of the critical digital health framework further. We want to challenge some assumptions surrounding the status quo of health app studies by exploring one specific and particularly disturbing aspect of health appisation: not only the surrender of one’s health data, but more worryingly, the fact that once the data is integrated into the apps, there is very little room for opting out.
Therefore, the central concern of our article is the question of opt-out as a legal, social and technical possibility, and as a citizen and user right. Our approach aligns with the emerging body of scholarship on ‘data justice’,12–15 which, as Linnet Taylor formulates, should include ‘the freedom not to use particular technologies, and in particular not to become part of commercial databases as a by-product of development interventions’14 (p. 9). Taylor’s framework of data justice was developed when looking at international development, humanitarian intervention, refugee border crossing and population surveillance – in other words, in contexts of precarity and systemic vulnerability, where power and governance increasingly rely on digital data, and where all-encompassing digital surveillance often manifests as a necessary ‘care’. The leap from here to digital health might seem far-fetched; however, we argue that it is precisely this double bind of care versus surveillance that makes the perspective of data justice crucial for critical digital health studies. Because of the intimate relationship between personal health and personal data, which the process of appisation transforms, questions of opting out are of particular urgency to the field of digital health, extending far beyond the domain of the theoretical. In the context of data economies and the continuously increasing appisation of healthcare, these questions are also acute for the general public, medical research bodies and care providers, including private enterprises and national health services. In our discussion, we focus less on the pressure that medical systems put on some patients to use technology as part of their care (for example, apps developed for specific medical conditions). In other words, we are not addressing individual autonomy over choices in medical treatment and management.
Rather, our focus is more on the use of health apps generally, understood through the lens of one’s rights to one’s data, beyond the context of medical treatment.
Combining our expertise in digital sociology, cultural studies, digital health, computation and data visualisation, this article aims to make two key contributions to simultaneously advance critical debates in the field of ‘critical digital health studies’. Our first contribution is conceptual: bringing together critical digital health scholarship with the perspective of data justice, we approach health apps through the lens of the right to opt out.
Our second contribution is a toolkit for a multi-level evaluation of the apps themselves. The evaluation process we propose – outlined in more detail in the next section – resembles the ‘app walkthrough’ method developed by Light et al., where a critical study of the app involves ‘systematically and forensically step[ping] through the various stages of app registration and entry, everyday use and discontinuation of use’21 (p. 881). Yet the uniqueness of our toolkit is that it is grounded in the paradigm of digital disengagement, and as such, differs fundamentally from most methodological approaches to the study of health apps. A substantial portion of current health app studies reflects the abovementioned tendency to normalise engagement, mostly investing resources into how the apps work or how they can be improved – whether through app development or changes in user experience.22,23 By challenging the norm of digital engagement and by exposing the risks and the traps – rather than the promise and the potential – of health apps, we are laying a path for future research that can productively challenge the ways in which digital technologies are often embraced too quickly and without sustained critical consideration. Our toolkit is thus intended for wider use and will, we hope, benefit academic researchers, health app developers and testers, as well as health practitioners and, of course, the users themselves.
Context and methods
In this article, we take the UK as a case study to develop and test our toolkit, while also demonstrating the applicability of our framework more broadly. In considering the current socio-legal landscape of digital health and opting out in the UK, we reveal how localised processes regarding opt-out are simultaneously a reflection of nation-specific legal and economic conditions and of the global data economy in which health apps and their users are embedded.
Between the global data economy and the national/regional data rights
As in many other digital economies, the health sector in the UK is increasingly expecting patients to manage their own health and wellbeing through apps. As such, health apps are widely offered by private companies and the public sector, including the National Health Service (NHS) and especially NHS Digital, ‘the national information and technology partner to the health and care system’. 24 The incorporation of apps into NHS Digital is the latest development in the longer tradition of phone and online patient support, designed to reduce the workload of general practitioners (GPs) and hospitals, and in this sense, it should be understood within a broader context of continuous budget cuts leading to diminishing resources for face-to-face and on-site patient support. At the same time, such developments are also part of the general move towards ‘self-responsibilisation’ for one’s health25,26 within the wider Euro–American neoliberal context.
At the time of our study in 2018, two major legal developments took place, pivotal for understanding the current opt-out landscape in the UK. Firstly, the European General Data Protection Regulation (GDPR) came into force in May 2018 and was fully adopted in the UK (despite the uncertainty surrounding the process of Brexit). Unlike the earlier UK Data Protection Law, GDPR moves towards a legal framework that defaults to opt-out rather than opt-in. The GDPR’s overall aim is to increase people’s control over their own personal data and ‘to protect all EU citizens from privacy and data breaches in an increasingly data-driven world’.27 For companies and organisations this means obtaining consent for using and retaining customers’ personal data; for the ‘data subject’, it grants more rights to be informed about, and to control, how their personal data is used. Secondly, and relatedly, in May 2018, NHS Digital launched its National Data Opt-out service in line with GDPR guidelines, aimed to provide ‘a facility for individuals to opt out from the use of their data for research or planning purposes.’28
While opt-out rights are at the heart of GDPR, health apps used in the UK are also part of the global capitalist data economy where such rights are diminishing. How then can we understand a European data protection law and (the limits of) its power in the context of global digital platforms and app companies, not to mention their (equally global) data aggregation? Similarly, what are the impacts and the limits of the NHS policies regarding opt-out, when they actively collaborate with private, commercial, third-party app providers who may abide by different corporate rules? Lastly and most pressingly, how is this complex and at times contradictory information communicated to the current or potential app users, or healthcare providers? These questions, arising from the digital–legal context of appisation in the UK, consequently informed the design and execution of our research.
NHS Digital: safety, trustworthiness, confusion
When we began working on our project, NHS Digital was gradually developing GDPR compliance by updating information and editing its pages, among them, the NHS Apps Library. Launched pre-GDPR in April 2017 to offer ‘
Within such a changeable site where information on app safety is both opaque and confusing, what are the possibilities of opting out (for example, if a user realises an app is not as ‘safe’ as it might appear)? The NHS Apps Library does not provide information or guidance about the possibilities of leaving the ‘unsafe’ (or the ‘safe’) apps once they are installed and activated. The only place where opt-out is mentioned is outside of the Library, on NHS Digital’s new National Data Opt-out service, which allows people to ‘opt out of their confidential patient information being used for research and planning’. The scheme replaces the previous ‘type 2 opt-out’, which required NHS Digital to refrain from sharing a patient’s confidential information for purposes beyond their direct care. (A ‘type 1 opt-out’ referred to requests, placed directly with the GP and one’s local surgery, not to share one’s information beyond direct care.)
Indexed under ‘Systems and Services’ and placed in a long, alphabetised list alongside other medical and administrative services within the NHS, information on opting out is very difficult to find and complicated to execute. Users must confront hurdles of clicking through multiple pages, downloading and emailing forms, or searching for alternatives. Such processes would require digital literacy, time, patience and perseverance – a striking contrast to how opting in is usually communicated in today’s digital environments, where ‘download’, ‘subscribe’ and ‘follow’ buttons are large, immediate and consistently visible.
More importantly, there are no links between NHS Digital’s National Data Opt-out service and the Apps Library: it is unclear whether the NHS Digital opt-out refers only to the information collected by GP surgeries and hospitals, or extends to the health apps.
As we can see, there is a worrying gap between the online NHS narrative of health app ‘safety’ and the reality of data safety. But in order to truly understand the sheer complexity, tensions and discrepancies between legal frameworks – such as GDPR or patients’ rights – and the reality of data sharing in health appisation beyond the discursive levels of data and medical ‘safety’, we need to analyse the actual apps, and their policies and practices more closely. Herein lies the value of our toolkit, which we offer as part of an assessment framework to be used alongside a more contextual analysis of the legal and practical conditions that surround health apps.
Methods
Our app assessment toolkit offers a way to evaluate, separately and in relation to each other, the following elements: (1) the apps’ Terms and Conditions as presented on their platforms/websites and within the apps; (2) the apps’ ‘permissions’ to access other data via one’s phone and the tracking of data beyond the app itself; and (3) the way an app handles an opt-out, from account deletion to the withdrawal of the data already collected.
The three apps we selected – Babylon Health (BH), Echo and the Pregnancy Tracker & Baby App (PTBA) – were then subject to the following steps over the course of 10 weeks:
1. Installation on a Samsung Galaxy running Android and an iPhone running iOS.

2. Creation and active use of ‘dummy’ user profiles to replicate real-life conditions as much as possible.

3. Installation and adaptation of the open source Exodus analytical tools for Android applications (an open source auditing platform with multiple applications, allowing detection of app/device behaviours that may be infringing user privacy through ads, tracking, analytics and listening tools):36 the Exodus CLI client for local analysis, and Etip – the Exodus tracker investigation platform.38 For legal reasons, such tracking is not currently possible for iPhone iOS applications.37 (At the time of writing, Apple’s proprietary iOS platform remains a locked digital rights management system where, under the United States DMCA 1201 Law and European Article 6 of the EUCD, bypassing it constitutes a legal offence leading to imprisonment and a fine; see Doctorow.37)

4. Analysis of network traffic on each app by using the Exodus proxy tool to record which external trackers or third-party servers each app communicated with when used (to determine what types of user data were being collected, and which external third-party servers the data was sent to).39

5. Concurrent use of the open source Apktool40 to investigate the source code and operating processes of each Android app, to further identify any known trackers (e.g. hidden Facebook and Google Analytics or advertising bots) that might exist in the source code of each app.

6. Documenting each tracker or type of data requested, and comparison of the findings with Exodus data reports, paying particular attention to the third parties with which the data was shared, such as Google and Facebook.

7. Deactivation of apps on both phones, noting the potential ‘opt-out’ options, the steps needed to terminate the account, and the relation between deactivation of the app and withdrawal of the data already collected.
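The tracker-identification step above (step 5) can be sketched in code. The following is a minimal illustration, not our actual tooling: it scans the smali files that `apktool d app.apk` produces for known tracker class paths. The signature list here is a small, hand-picked subset written for demonstration; Exodus maintains the authoritative signature database.

```python
from pathlib import Path

# Illustrative subset of tracker code signatures (class-path prefixes),
# modelled on the kind of list Exodus maintains. These entries are
# assumptions for demonstration, not the project's actual signature set.
TRACKER_SIGNATURES = {
    "Facebook Login": "com/facebook/login",
    "Facebook Analytics": "com/facebook/appevents",
    "Google CrashLytics": "com/crashlytics",
    "Google Analytics": "com/google/android/gms/analytics",
}

def find_trackers(decoded_dir: str) -> set:
    """Scan smali files produced by `apktool d` for known tracker class paths."""
    found = set()
    for smali in Path(decoded_dir).rglob("*.smali"):
        text = smali.read_text(errors="ignore")
        for name, signature in TRACKER_SIGNATURES.items():
            if signature in text:
                found.add(name)
    return found
```

In practice, the output of such a scan would then be compared against the Exodus data reports, as described in step 6.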
Far from using the three apps as a ‘representative’ sample of a wide range of health apps, we approached them as a testing ground to build our analytical toolkit and related evaluation criteria, both discursive and technical. What we present in the following section is simultaneously an assessment of empirical evidence, and a demonstration of the benefits and uses of our toolkit for future research.
The toolkit
Assessing the apps’ Terms and Conditions
Since health apps operate in a field of conflicting legal and social frameworks with regards to data rights and safety, it is important to incorporate these frameworks when assessing the apps’ legal communication. For the first element of our toolkit, we integrated three distinct, complementary sets of guidelines:
1. The Caldicott Principles used in the NHS for handling identifiable patient information,41 which strictly regulate the process of collecting and storing such information, yet do not refer to how the process is communicated to the patients themselves. The Principles require: justification of the purpose(s) of collecting information; not using patient-identifiable information unless it is necessary; using the minimum necessary patient-identifiable information; limiting access to patient-identifiable information on a strict need-to-know basis; ensuring that everyone with access to patient-identifiable information is aware of their responsibilities; understanding and complying with the law; and understanding that the duty to share information can be as important as the duty to protect patient confidentiality.

2. GDPR, which regulates digital data, places a much stronger emphasis on the clarity and transparency of communication to the users, and introduces the choice of opting out.

3. Our own framework of digital disengagement, which places opt-out as a starting point of any assessment.
Combining the three, we propose that the following criteria should be used for assessing health apps’ Terms and Conditions:
What information is said to be collected? How is data stored? What is the app’s Privacy Policy? Can patients request to see their data? Is there an opt-out policy? What is included in it?
Applying our criteria to the three apps, we found that the information for each varied greatly in length, clarity and detail. For example, Echo and BH provided minimal explanation about which data was collected, how it was stored, and whether and with whom it might be shared: Echo only stated that they kept the information ‘confidential’, and BH stated that the information would be shared with GPs and private insurance providers. Neither offered an option to choose which parts of their data the user may decide to share (or not). PTBA, on the other hand, detailed how different types of data (identifiable versus anonymised) were stored and used, and offered an option for opting out of sharing the data with ‘affiliates’ and partners. The apps also differed substantially when it came to ‘Privacy Policy’: Echo was the only one offering information on both GDPR and Caldicott in its policy. BH and PTBA, on the other hand, stated that their data was shared with third parties who have their own policies. The responsibility here was placed on the user, who was expected to make decisions about information sharing at their own discretion. BH, in particular, urged users to read those parties’ documentation carefully: ‘These policies are the policies of our third-party service providers. As such we do not accept any responsibility or liability for these policies.’7
While the first three criteria refer to collecting, storing and sharing information, the remaining criteria concern users’ access to, and control over, their data – above all, the possibility of opting out.
Mapping apps’ data sharing practices
While analysing the health apps’ Terms and Conditions reveals the sheer complexity and the conflicting nature of the legal and social frameworks within which health apps exist, it does not necessarily tell us what the apps actually do with user data. To address this, the second element of our toolkit examines the apps’ ‘permissions’ and trackers.
The meaning and purpose of ‘permission’ is described by Android developer documentation as a way to protect personal data when a user accesses an app on their smartphone. While there are many different levels and sub-levels of permission requests within Android applications, for the purposes of this project, we focused on the two most common protection levels that affect the sharing of user data with third-party apps: ‘normal’ and ‘dangerous’. ‘Normal’ permissions cover areas where an app needs to access data or resources outside of itself, ‘but where there is very little risk to the user's privacy or the operation of other apps. For example, permission to set the time zone is a normal permission’. 43 ‘Dangerous’ permissions require sensitive or private user information (such as phone and email contacts, or text messages), or access to phone features such as camera or automatic calling performed by the app without the user touching a dialler. To use a ‘dangerous’ permission, the app must request agreement from the user when it first runs: for example, by indicating that they accept the Terms and Conditions or Privacy Policy. ‘Dangerous’ permissions can also be ‘malicious’ – a term commonly used in the world of cybersecurity – meaning ‘exploitable’. These are permissions that are required for the app to work, and can remain dormant without ever being taken advantage of. However, if exploited by a cybercriminal, ‘dangerous' permissions make the user and their data most vulnerable.44,45
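The distinction between protection levels can be operationalised when auditing an app: the permissions an app declares appear as `<uses-permission>` elements in its AndroidManifest.xml, which Apktool decodes to plain XML. A minimal sketch of such a check follows; the `DANGEROUS` set below is an illustrative, hand-maintained subset, not the platform’s full classification, which lives in the Android permissions documentation.

```python
import xml.etree.ElementTree as ET

# Android attributes are namespaced; elements such as <uses-permission> are not.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Illustrative subset of the 'dangerous' protection level (an assumption
# for demonstration; consult the Android documentation for the full list).
DANGEROUS = {
    "android.permission.ACCESS_COARSE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.READ_SMS",
    "android.permission.CAMERA",
    "android.permission.CALL_PHONE",
}

def classify_permissions(manifest_path: str) -> dict:
    """Split the <uses-permission> entries of a decoded AndroidManifest.xml
    into 'dangerous' and 'other' buckets."""
    root = ET.parse(manifest_path).getroot()
    declared = [el.get(ANDROID_NS + "name") for el in root.iter("uses-permission")]
    return {
        "dangerous": sorted(p for p in declared if p in DANGEROUS),
        "other": sorted(p for p in declared if p not in DANGEROUS),
    }
```

A classification of this kind makes it straightforward to flag, for each audited app, which of its requests fall into the ‘dangerous’ category before examining how (or whether) those requests are justified.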
Assessing our selected apps in relation to the number and type of ‘permissions’ they request demonstrates that even when Terms and Conditions present robust privacy policies, the apps themselves extract excessive additional information, beyond the actual purpose of the app, for reasons which are neither clear nor justified. Many of these permissions – for example, access to users’ geolocation, phone calls or text messages – access and share personal data in unclear and obscure ways that are ethically dubious, can compromise users’ privacy and human/citizen rights, and potentially breach GDPR regulations, as summarised in Table 1 below.
Table 1. How ‘dangerous’ permissions can be exploited for ‘malicious’ use by cybercriminals.
To make matters worse, apps also contain a substantial number of trackers, mining data about the way users use the app and sharing it with third-party analytics. What is particularly important is to examine trackers and permissions in relation to each other, through, for example, the ‘track the tracker’ technique that we implemented with the help of Exodus and Apktool. In order to communicate the joint effects of tracking permissions and data sharing, and to facilitate a more comprehensive and nuanced analysis, we developed an interactive data visualisation tool.46 This tool enables us to demonstrate the apps’ operation in the smartphones that host them, and reveals the ways in which a single health app is embedded in a multi-platform network of data mining and sharing. Figures 1–4 below present screenshots of data output as tested on our three apps.

Figure 1. Babylon Health app: user trackers and app permission requests.

Figure 2. Echo app: user trackers and app permission requests.

Figure 3. Pregnancy Tracker & Baby App: user trackers and app permission requests.

Figure 4. Babylon Health, Echo and Pregnancy Tracker & Baby App: collective user tracking and permission requests.
Mapping and analysing permissions and tracking together demonstrates the importance of addressing not just the quantity but also the specific nature of data tracking. For example, PTBA seems to share the largest amount of data with the most trackers, while Echo has the fewest permissions and, as such, may seem less invasive. However, its permission request for ‘Access [to] Coarse Location’ feeds one of the most ‘leaky’ combinations of app trackers in terms of the ways in which user data is documented, traded and shared across social networks, and how and when it is displayed. Like PTBA, Echo uses Facebook trackers, most specifically, Facebook Login (sharing login data from the app with Facebook), Facebook Analytics (tracking anonymised, though context and category/theme-based, data), Facebook Places (sharing location), and Facebook Share (the ability to share health data with other contacts on Facebook). Among all the third-party trackers we documented, the seemingly unassuming tracker entitled ‘GoogleCrashlytics’, used by Echo and BH, may be the most problematic one. In addition to reporting to developers whenever an app crashes, or how many times it is used, GoogleCrashlytics allows developers to advertise across various hidden advertising networks, some of which are potentially untraceable beyond Facebook and Google adverts, but with whom sensitive data might be shared.
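The joint view of permissions and trackers can be approximated computationally by flattening per-app findings into a simple edge list, which most graph and visualisation libraries can ingest, and by surfacing the trackers shared across apps (the hubs of the ‘collective’ network). The sketch below uses hypothetical data structures of our own devising, not the actual implementation of our visualisation tool.

```python
def to_edge_list(findings: dict) -> list:
    """Flatten per-app permission and tracker findings into (app, kind, target) edges."""
    edges = []
    for app, info in findings.items():
        for perm in info.get("permissions", []):
            edges.append((app, "permission", perm))
        for tracker in info.get("trackers", []):
            edges.append((app, "tracker", tracker))
    return edges

def shared_trackers(findings: dict) -> set:
    """Trackers appearing in more than one app - hubs of the collective network."""
    seen, shared = set(), set()
    for info in findings.values():
        for tracker in set(info.get("trackers", [])):
            if tracker in seen:
                shared.add(tracker)
            seen.add(tracker)
    return shared
```

Applied to findings like those reported above, such a representation makes trackers used by several apps at once (for example, GoogleCrashlytics in both Echo and BH) immediately visible as shared nodes.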
Passing app data to third parties, cross-referencing data with information from other apps on the phone and combining it with behaviour mining via social media analytics has extensive potential for indirect, yet substantial, intrusion into users’ privacy and confidentiality. One of the ways in which it happens is when sensitive health-related adverts are offered to users based on the app’s sharing of the information regarding GP appointments, medical topics discussed via the app, or prescriptions ordered. These adverts might be particularly unwelcome when they ‘follow’ the user outside the app, breaching what Nissenbaum47 describes as the ‘contextual integrity’ of personalised and private data as it is shifted beyond its original (in our case, health-related) sphere of user practice and intention to become a means of digital surveillance. Such ads can pop up on social media, on Google, in games or in general browsing, especially if the user’s geolocation presents an opportunity. For example, a user could be ordering a prescription for fertility medication while out shopping, and depending on their location, an advertising network might show them where a pregnancy test kit might be purchased, without consideration of where and when a person might be viewing such adverts (at home, work, in a public space, next to a family member), and what the implications of such out-of-app following might be. These potentially unwelcome exposures can affect anyone using a health app, but might be particularly problematic in instances of HIV-related, mental health, or fertility information, which is regarded by many as extremely personal, and as such, is often ferociously guarded. Sharing any health information with Facebook contacts poses similar, if not graver, dangers.
When developing and testing our toolkit on the three apps, we found a range of inappropriate data behaviours, from vague, limited and unclear information about the nature and purpose of data sharing – that is, would it be shared with GPs? The NHS? Medical research? Private non-medical companies? Which and for what purpose? – to potential direct violations of both the Caldicott and GDPR principles. As we have demonstrated, such behaviours appeared in various combinations in the three apps. Our aim, however, was not to determine which of the three did ‘better’ or ‘worse’, nor even to see whether the apps aligned with user expectations, as understanding the latter would require a separate study. Rather, our discussion points out that data violation is not a binary matter of compliance/non-compliance, but a continuum. We propose that our toolkit might be particularly useful to inform further studies of this continuum, on which a larger number of studied apps could be placed.
Documenting the steps needed to delete an app and withdraw data
One might argue that data collected via apps’ permissions is given by users voluntarily; furthermore, it may seem that users have a degree of control by withholding certain permissions. However, this does not fully allow an opt-out of data aggregation. Although it is possible to restrict some permissions via the smartphone settings, when going back to the app, the user receives warnings that core features are no longer accessible, and restoration of permissions is required for the app to function properly. The other option is to discontinue/uninstall the app entirely. However, as both our evaluation of Terms and Conditions and the tracking of the apps’ sharing and storage capabilities suggest, removing the app does not necessarily stop the data aggregation and mining, or does so only partially. The final component of our toolkit is therefore dedicated to documenting the step-by-step process of opting out, according to the following criteria:
How easy is it to do so within the app? How long does the process take? What types of data can one opt out of, and what types can one not?
Of the three apps, Echo offered the clearest and most streamlined process, where the deletion of an account was accompanied by clearly labelled screens from within the app (see Figures 5 and 6). Echo was also the most transparent about the time frame of the process and the totality of deletion. After asking the user to confirm that they did indeed want to delete their data, Echo followed through by informing the user that it aimed to remove all of the user’s data, at their request, within 30 days – in line with the ‘reasonable’ period for deleting user data advised by GDPR.

Figure 5. Echo app: deleting the account.

Figure 6. Echo app: deleting the account.
The other two apps (BH and PTBA) were far less straightforward. PTBA, for example, did not provide an easy way for the user to opt out of sharing their data, despite having detailed (albeit contradictory) information on opt-out options in their Terms and Conditions. When attempting to delete the account, the user was directed to the ‘FAQ’ link, accessible via the ‘More’ menu; however, the only opt-out option available was to remove the baby registered for development monitoring, as part of the app’s use. Instead of having the option to delete their account or opt out of sharing their data, users had to resort to: ‘10. My question about the app isn’t answered here. Where can I get more help?’, which guided them to contact the UK support services to discuss alternatives (Figure 7).35 Similarly, despite having a clearly presented Privacy Policy, BH offered no direct way of opting out of data sharing, or deleting the account entirely from within the app. Again, one needed to go through a similar ‘Help’ option in the ‘Settings’ menu only to be redirected to the app’s website (Figure 8) for information on cancelling different subscription levels, and the account itself. Worryingly, as of 12 July 2018, despite the implementation of the GDPR, BH still incorporated the less thorough Data Protection Act 1998 in its handling of cancellation requests.

Figure 7. PTBA app: no direct opt-out from within the app.

Figure 8. Babylon Health app: deleting the account.
In neither case was the time frame for the process clear, nor was there a guarantee of which data would be removed.
Our assessment of BH’s and PTBA’s opting out procedures suggests that these are made cumbersome-by-design, whether on a practical interface level or in terms of legal specificities. The labour and responsibility of finding a way out is placed on the user: in the case of BH, the user was required to navigate numerous ‘Help’ screens, and individually cancel each aspect of their membership/use/subscription to the different services within the app. In the case of PTBA there was no direct way of removing the data from within the app. Instead, the user had to read through lengthy pages of the Terms and Conditions and Privacy Policy that had been set out to cover both the app and the website communities used by users. Subsequently, the user had to fill in a form where the only choice for opting out of sharing one’s data was to choose the drop-down selection, ‘Data protection questions’, and then try to explain in a free text form that they would like to have their data removed. The user was also made responsible for looking into any third-party apps the PTBA software may have shared the data with, and extricating it accordingly. All these tasks require a high degree of technical knowledge and legal literacy, which most users do not possess. In other words, neither having rigorous Terms and Conditions nor providing clear interface options is sufficient, on its own, to facilitate a straightforward and readily available opt-out process. While email support or a ‘delete your account’ button within an app may suggest that leaving is simple, neither guarantees that the data already collected – and shared with third parties – is actually withdrawn.
Conclusion
Our analysis of health appisation in the UK reflects the currently heightened national concerns over personal data management within the context of the GDPR and the wider changes in European data policies. One could argue that the move to become ‘GDPR-compliant’ is a step towards data justice and increased digital rights for users/patients, where opting out is becoming institutionalised. Yet a closer look at how such compliance is being implemented – not just through everyday practices, but also at the more granular and algorithmic level of health apps – reveals a fluid and complex web of networked ‘data traps’ that act as resource nodes of biodata. These ever-changing and often contradictory settings in the contemporary UK–European socio-economic, legal and political landscape rely on their data subjects to be digitally engaged, or (un)willing participants in the process of the digitisation and appisation of health. Practices and technologies of opting out struggle to find a place within such a context that is simultaneously part of the global capitalist data economy where opt-out rights are diminishing and personal data is shared easily,
Emerging directly from conflicting civil, clinical and corporate frameworks surrounding personal data usage is a further tension that manifests very specifically within the area of digital health. The process of appisation, which is based on having an
Crucially, the broad and extensive sharing of our personal (bio)data for analytics and profit means that opting out of the use of health apps is first and foremost about large-scale dataisation and the economy of surveillance capitalism. 57
While data management practices are individual, and targeted advertising is also individually tailored, the individual is meaningless in the eyes of algorithmic determinism and prediction. A single user’s data has no representative value: monetary and statistical capital lies with the aggregated
Within such a context, ‘opting out’ is pulled between oppositional forces: it is a matter of individual rights (and responsibilities), while also, paradoxically, situated within a system that supports, and capitalises on,
First and foremost, we require a further, more extensive and more nuanced understanding, mapping and monitoring of the dataised operation of health apps. This is where we see the main methodological and practical impact and value of our proposed toolkit, offered here for further development and use by others, in both social research and healthcare provision. While we tested and developed our toolkit in the UK, its key principles can be rescaled and adjusted for use elsewhere. The power of our toolkit is twofold: it allows the identification of loopholes and ‘vulnerabilities by design’ within the apps, as well as fleshing out discrepancies between data policies and data practices. Only when we can ‘see’ the differences between the individualised discourse of health apps as they present themselves, and the inner workings of data extraction, can we begin to understand the complexities of digital health’s data traps. The open-source platforms we used are widely available and are not in themselves new. But it is in using them in conjunction with the discursive analyses of the apps’ websites and their Terms and Conditions
Secondly, we must also rethink the idea of ‘data rights’ in digital health, by shifting towards the perspective of data
Footnotes
Acknowledgements
The authors would like to thank the Manchester Metropolitan University student intern Edward Johnson for his dedicated assistance with collecting and reviewing data about the National Data Opt-out service by NHS Digital. We are grateful to EJ Gonzalez-Polledo for their insightful comments on an earlier draft of the manuscript. We also thank the two reviewers for their helpful suggestions and guidance in preparing the final version of the manuscript.
Contributorship
AK and EM conceived the conceptual framework of the study. SM developed the protocol for assessing the apps’ data and designed the data visualisation tool. All authors reviewed and analysed different areas of findings according to expertise. AK and EM prepared the initial draft of the manuscript; all authors subsequently edited and approved the final version of the manuscript.
Conflict of interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethical approval
The Arts and Humanities Research Ethics and Governance Committee, Manchester Metropolitan University approved this study on 18 April 2018 (internal reference number: 0603).
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the British Academy/Leverhulme Small Grant (Grant no. SG170293).
Guarantor
AK
Peer review
This manuscript has been reviewed by Dr Linnet Taylor, Tilburg University, and Dr Pepita Hesselberth, Leiden University Centre for the Arts in Society.
