Digital health technology in the UK
The digital health industry is a profitable and rapidly expanding sector valued at approximately £19 billion.1 In 2017, it was estimated that over 320,000 mobile health apps were in regular use,2 having been downloaded approximately 1.7 billion times.3 The sheer rate of innovation and the scale of distribution make this market a challenge to regulate. Some estimates suggest that 200 apps are added to app stores every day,4 and of the total number of mobile health applications available, 50% have been released in the last three years alone.5 These numbers do not include the proliferation of traditional and web-based software offerings, which greatly expand the breadth and scope of the problem.
The integration of so-called digital technology – the joint product of computer and data/information science – into UK healthcare at a national level is widely perceived as the way forward, both for its promise of significant improvements to patient care and for its potential to reduce healthcare costs.1,2 eHealth, as it is sometimes called, encompasses a number of similar but importantly distinct technologies, such as clinical decision support systems, m-health (mobile health software), biometric sensing, telemedicine and electronic health/hospital records. According to a 2017 report published by the IQVIA Institute for Human Data Science, the use of digital health applications in just five patient populations (i.e. diabetes prevention, diabetes, asthma, cardiac rehabilitation and pulmonary rehabilitation) was projected to save the US healthcare system $7 billion per year.4 Associated benefits notwithstanding, these technologies carry notable risks when introduced into a clinical workspace.
Specifically, these risks can be divided into (i) the risk of physical harm to patients or software users and (ii) liability risks faced by organisations or stakeholders introducing the novel technology. Doctors who are employed and instructed by organisations or senior stakeholders implementing new technology are to a great extent shielded from liability risks, though they can often feel trapped in a complex web of distributed accountability. Doctors can also inadvertently expose themselves and their patients to both these risks by downloading and using software that has not been pre-approved for clinical use, which can threaten data protection, clinical accuracy, reliability and safety. Contributing to this complex web are the role of regulatory agencies and the rapid rate of innovation in the healthcare space. The task of software selection would be made simpler by strict regulatory guidelines, but agencies must balance the risks of setting a low bar for entry against the danger of stifling innovation through harsher gatekeeping. The rapidity and fluidity of digital technology also make it difficult to agree upon a one-size-fits-all approach for the industry as a whole, while a case-by-case approach lacks scalability, reproducibility and efficiency. In the following discussion, we briefly summarise current UK guidelines on the matter and then offer four criteria that can be used to delineate the appropriate regulatory categories.
Important definitions
The Care Quality Commission, the independent regulator of health and social care in England, has defined digital healthcare providers as: ‘Healthcare services that provide a regulated activity by an online means. This involves transmitting information by text, sound, images or other digital forms for the prevention, diagnosis or treatment of disease and to follow up patients’ treatment.’6 Providers with relevant products must demonstrate that the software in question meets the standards outlined in the Health and Social Care Act 2008. However, during the 2016/2017 inaugural round of regulation applied to online digital providers, the Care Quality Commission found that only a minority (4 of 28) of providers were fully compliant with the approved standards, and 15 providers were censured for failing to meet the most basic standards.7
Any digital platform that does not involve the transmission of data (e.g. offline applications), but still informs the prevention, diagnosis, treatment or follow-up of disease, while clearly in need of regulation, would not necessarily require regulation by the Care Quality Commission as a ‘digital healthcare provider’. In such cases, and for digital healthcare providers outside England (which do not require Care Quality Commission registration), regulation is instead covered by the Medicines and Healthcare products Regulatory Agency. The Medicines and Healthcare products Regulatory Agency is the chief authority within the UK that regulates medicines, medical devices and blood components for transfusion. Doctors must be aware that software can still be classed as a medical device by the Medicines and Healthcare products Regulatory Agency if certain conditions are met. The Medicines and Healthcare products Regulatory Agency defines a medical device as:
‘an instrument, apparatus, appliance, material or other article, whether used alone or in combination, together with any software necessary for its proper application, which:
(a) is intended by the manufacturer to be used for human beings for the purpose of:
(i) diagnosis, prevention, monitoring, treatment or alleviation of disease;
(ii) diagnosis, monitoring, treatment, alleviation of or compensation for an injury or handicap;
(iii) investigation, replacement or modification of the anatomy or of a physiological process; or
(iv) control of conception; and
(b) does not achieve its principal intended action in or on the human body by pharmacological, immunological or metabolic means, even if it is assisted in its function by such means;
and includes devices intended to administer a medicinal product or which incorporate as an integral part a substance which, if used separately, would be a medicinal product and which is liable to act upon the body with action ancillary to that of the device.’8
While the combination of the Care Quality Commission and Medicines and Healthcare products Regulatory Agency regulatory standards broadly covers online and offline digital technologies, it is less clear that the above criteria are sufficient for separating so-called lifestyle software from ‘healthcare’ software that should be classed as medical devices and placed under the jurisdiction of the Medicines and Healthcare products Regulatory Agency.
Lifestyle and healthcare: distinctions and disambiguations
The appropriate level of regulation applied to medical software is determined by its status as a medical device, which currently relies on the Medicines and Healthcare products Regulatory Agency definition detailed above. However, while section (b) provides a clear definition relating to direct physiological interventions, section (a.i) does not. Its overarching description of medical devices as concerning the ‘diagnosis, prevention, monitoring, treatment or alleviation of disease’ lacks specificity, making it difficult for regulators and ideal for opportunists. For example, if a hypothetical piece of software records the weight and blood sugar entered by a user, then transmits and stores the information on the user’s online account, it meets the current definition of a medical device – but does it require regulation? This exposes an important distinction in the regulation of software as a medical device, specifically between the categories of ‘lifestyle’ and ‘healthcare’.
From our perspective, software used to collect or process a patient's medical data comes in at least two varieties, which can be broadly referred to as ‘lifestyle’ and ‘healthcare’. The former may be thought of as ‘empowering consumers to make more and better decisions every day about their own health, monitor and manage chronic health conditions, or connect with medical professionals’,3 while the latter may be thought of as ‘enabling better and more efficient clinical practice and decision making through decision support software and technologies to assist in making diagnoses and developing treatment options; managing, storing, and sharing health records; and managing schedules and workflow’.3
Though current guidance indicates that physicians should avoid using non-CE-marked software to limit exposure to risks, it does not resolve the issue of which software should be regulated. Given the volume of health-related software on offer, it is logistically impossible for regulatory agencies to evaluate every available product. Distinctions nevertheless need to be made, and relevant criteria for establishing them are vital. Moreover, how can such criteria account for the disruptive nature of the technologies themselves? Many digital technologies evolve or are updated so quickly that they outpace regulatory frameworks and can put enormous pressure on conventional regulatory agencies.
In the United States, this distinction is acknowledged by the 21st Century Cures Act, which states that ‘lifestyle’-related software and digital technologies fall outside regulatory concern.3,12 As such, the Food and Drug Administration has developed a pilot programme whereby such applications, and those considered ‘low risk’, may proceed to market without regulatory review.1 This approach has been driven by the need to set clear standards for developers to meet while keeping those standards flexible enough to accommodate the iterative nature of software development.3 Although the Food and Drug Administration has taken the first tentative steps towards a framework that can account for this distinction, regulators in the UK can improve upon this model by further reinforcing the distinction and disambiguating the term ‘medical device’. The following section presents four criteria that we deem jointly necessary, but individually insufficient, for determining the status of a piece of software as a ‘medical device’.
Lifestyle and healthcare: a four-dimensional approach
Below, we propose four factors that should be considered in concert for the successful regulation of digital health technology; none is sufficient on its own, but all are necessary points of evaluation.
1. Intended user
The role of the intended user in determining whether software constitutes a ‘medical device’ points to the role of accountability in the regulation of digital technologies. This criterion has the advantage of demarcating the contextual ‘who’ and ‘where’ intended by the developer. In the case of lifestyle software, the intended user is a lay individual who adopts personal responsibility for the use of the software, whereas in clinical situations, either administrators or doctors adopt responsibility for use of the software on behalf of the patient. This trust and deferred accountability should be addressed when determining whether or not a piece of software should be regulated.
This alone, however, is insufficient because it is not always clear who the intended user is supposed to be; for that, we must rely on what developers claim about their software.
2. Product claim(s)
What a developer or software producer says about their product can be telling, not only because it establishes the intended user, but also because it can demonstrate the software's intended use. Use is important because, like the user, it can point to either personal or clinical contexts; software designed for the lay public often deserves far less scrutiny than its clinical counterparts because the claims made by the former carry fewer medical consequences. Whereas lifestyle software is often designed to help members of the public improve their habits (which may be based on sound or unsound scientific reasoning), the assumption is that this software does not directly recommend medical intervention. On its own, however, this factor admits of blurred boundaries, since what constitutes a medical intervention is open to some degree of interpretation. For example, something as ‘simple’ as nutritional advice can be of harmless, educational value to healthy individuals, but in gastroenterology, dietetics or for a patient with an eating disorder, it is used specifically as a controlled intervention.
3. Effect on health outcome(s)
This introduces another critical element to the conundrum: what does the device or software do with the data it collects? Does it merely provide a reference to existing knowledge (e.g. medical guidelines)? Does it simply record and display personal information for the user (e.g. a blood sugar monitoring app)? Or does it process information to aid in the diagnosis or management of a disease (e.g. diagnostic or prognostic calculators)? Here, we might distinguish between software that merely organises data for viewing and software that processes data before viewing or transmission. If health data are being processed, the displayed results may well be acted upon wrongly by inexperienced users or clinicians; in either case, neither party may be aware of precisely how the data have been manipulated, which can eventually lead to physical harm to the user and patient. This distinction can also be used to draw a line between websites delivering information and those that purport to deliver diagnoses; in the latter, information is processed, so the methodology should be made easily accessible for regulatory scrutiny. In addition, we suggest that any software that processes clinical data and directly influences the management of disease should be evaluated, ideally through clinical trials using datasets independent of those used in development, to assess its potential for harm and benefit.
That being said, data presentation on its own is not value-neutral and can pose a risk even if data are not processed. Raw information presented to different users, particularly lay users, can have very different outcomes. For example, advising that ‘rectal bleeding is a sign of colorectal cancer’, though true, can cause undue stress and anxiety to a patient with a known history of constipation and haemorrhoids.
4. Data source and destination
As per current regulatory frameworks, an important consideration is the source and use of the data. This is a defining feature of digital technologies, but it is less clear how data should be weighed when considering the question of regulated healthcare technologies. Leaving aside issues of patient privacy – which clearly do and should affect the regulatory status of an application – should applications be classed based on the source of the data or the output of the data? One standard may be the source and target of the data in question: if data are gathered from a regulated device for use in a regulated device, the intermediate itself should be regulated. This may be thought of as preserving a chain of ‘regulatory provenance’. Relying on the notion of ‘garbage in, garbage out’, we might be suspicious of software that is regulated and appropriately marked but is fed data by a system that is not regulated and whose accuracy is questionable. Suffice it to say that, in the case of a sequence of interlinked medical devices, a failure in one can cascade towards poor patient care and outcomes.
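To make the joint application of the four dimensions concrete, the evaluation can be sketched as a simple checklist in code. This is an illustrative sketch only, not a regulatory tool: the names (`SoftwareProfile`, `dimensions_indicating_review`) and the reduction of each dimension to a yes/no answer are our own simplifying assumptions, not drawn from any regulatory text.

```python
from dataclasses import dataclass


@dataclass
class SoftwareProfile:
    """Hypothetical profile capturing the four dimensions discussed above."""
    intended_for_clinical_users: bool   # 1. Intended user
    claims_medical_intervention: bool   # 2. Product claim(s)
    processes_clinical_data: bool       # 3. Effect on health outcome(s)
    linked_to_regulated_device: bool    # 4. Data source and destination


def dimensions_indicating_review(profile: SoftwareProfile) -> list:
    """Return the dimensions pointing towards a clinical context.

    Every dimension must be evaluated (each is a necessary point of
    evaluation); any positive answer is a signal for closer scrutiny,
    but no single dimension settles the question on its own.
    """
    signals = {
        "intended user": profile.intended_for_clinical_users,
        "product claims": profile.claims_medical_intervention,
        "effect on health outcomes": profile.processes_clinical_data,
        "data source and destination": profile.linked_to_regulated_device,
    }
    return [name for name, positive in signals.items() if positive]


# A diary that only records and displays user-entered values raises no flags,
diary = SoftwareProfile(False, False, False, False)
# whereas a prognostic calculator fed by a regulated monitor raises all four.
calculator = SoftwareProfile(True, True, True, True)
```

In keeping with the discussion above, the sketch deliberately reports *which* dimensions are positive rather than returning a single verdict: the final regulatory judgement rests on considering all four in concert.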
Planning for the future
It is clear that the traditional paradigm of medicine is shifting and that the UK, as a country, needs to be better prepared for this change. The Topol Review, due to be completed by December 2018, aims to identify a strategy to ‘prepare the healthcare workforce to deliver the digital future’.13 It is clear that government and British universities will play a central role in nurturing and training our workforce to critically evaluate new digital technologies. The current unmet needs may also necessitate the training of a new breed of ‘digital healthcare clinician-scientists’ for deployment within the NHS; these individuals would have the skills to guide digital healthcare policies, understand and develop software, address software bugs and security threats, and safely conduct large-scale clinical trials on digital healthcare technology.
Conclusion
Clearer distinctions and more specific guidelines are needed to regulate the rapidly growing digital healthcare industry effectively. We believe the four factors proposed herein will assist in the development of better regulatory frameworks in the future. In the meantime, to avoid unnecessary risks to themselves and their patients, doctors should exercise caution when using personally acquired software, making sure the nature of the digital technology does not conflict with current guidance provided by the Care Quality Commission or the Medicines and Healthcare products Regulatory Agency. If in doubt, doctors should consult the Care Quality Commission and Medicines and Healthcare products Regulatory Agency guidance referenced herein, which includes further contact details.
