Abstract
Artificially intelligent systems (AISs) are in development to aid clinicians and patients of the United Kingdom’s National Health Service. We assess the statutory requirements for product liability claims against producers of defective AISs in clinical use and set out the criteria for bringing a successful claim against a producer in the courts of England and Wales. We argue that the mismatch between product liability and safety regulation leaves patients, and consumers more generally, without an adequate remedy for the consequences of AIS defects. We also discuss the intertwinement of the Consumer Protection Act 1987 and the Medical Devices Regulations 2002. Recent developments, such as the United Kingdom’s withdrawal from the European Union and the new Medicines and Medical Devices Act 2021, are discussed. In addition, we offer novel discussion of the tort of ‘breach of statutory duty’ as provided for by the Medicines and Medical Devices Act 2021.
Introduction
Clinicians have historically led the assessment, planning, development, delivery, and evaluation of healthcare; yet the practice of entirely human-powered deliberations in the clinical environment is being joined by the development of new technologies. Artificially intelligent systems (AISs) which are able to take information, process it, and dispense an output 1 are being developed with the intention of influencing the decision-making of both patients and clinicians at the point of care; the introduction of AISs at the bedside would constitute a step-change in clinical decision-making activities. 2 While not yet in widespread use in the United Kingdom’s National Health Service (NHS), there are examples of AISs being developed for use, including an AIS which assesses patients directly using ‘artificial intelligence triage’ to check if symptoms require a visit to a hospital’s accident and emergency department, 3 and an AIS designed to aid clinicians in diagnosing and referring patients with retinal disease. 4
While it is hoped that AISs will enhance clinical decision-making and improve patient care, there is the risk that harm may result from the use of AISs. As an example, Ross and Swetlitz’s investigation into IBM’s Watson for Oncology found that it had recommended a drug which was unsuitable for a particular patient group. 5 When such an AIS is used by specialist clinicians, they possess the knowledge to recognise and reject the erroneous AIS recommendation. When such an AIS is used by clinicians who are generalists rather than specialists (as Ross and Swetlitz found in one Mongolian hospital), 6 the user may lack the specialist knowledge to spot that a recommendation is erroneous before using it on a patient; such an event risks causing patient harm.
Patients have a reasonable expectation that the tools used to deliver their healthcare are safe and appropriate for use in a given context. Because of a risk of errors from AIS recommendations, there is value in being cautious in AIS development, deployment, and use. Currently, any AIS for use in clinical decision-making might be found to be highly accurate but nevertheless potentially fallible. As Bryson argues, ‘no one should trust Artificial Intelligence’ and human beings should be held to account when an AIS causes damage. 7 Therefore, while the aim of any intervention in healthcare is to serve the goal of helping patients, there is value in preparing for the possibility of patient harm occurring as a result of AIS use in clinical practice.
Aims
In this article, we build upon existing research which considers the means by which patients might seek compensation for harms eventuating from negligent clinical treatment on the basis of faulty advice from an AIS. 8 This remedy is available in the authors’ jurisdiction of England and Wales (the laws of Scotland and Northern Ireland substantially overlap with that of England and Wales, but for reasons of space shall not be considered in this article) and does not require evidence of fault, namely, liability for damage caused by defective products. We set out the criteria for bringing a successful claim against a producer and evaluate how far the statutory requirements permit claims for defective AIS in healthcare settings. An assessment is also made of statutory defences to product liability and the consequences for patient safety and clinical practice of other defensive strategies which a producer might adopt in order to avoid liability.
In making the case for reform of product liability, we revisit some of the problems arising from alternative routes to compensation through fault-based schemes, such as breach of statutory duty and the tort of negligence. By comparing product liability to other potential remedies, we seek to determine whether the applicable liability frameworks generate incentives for producers to prioritise product safety and avoid patient harm.
In this article, we also consider the implications of recent legislative developments as they relate to the regulation of safety of medical devices. We raise probing questions about the direction of UK Government policy in this area and briefly outline how some of the objectives pursued in the new regulatory framework could collide with well-established ethical principles in the field of medicine. We propose that further research could address these normative concerns and thereby help to mitigate ethical dilemmas.
Product liability
To bring a conventional claim under the tort of negligence, the claimant must demonstrate that the damage occurred because of the conduct of the defendant. However, the claimant does not need to prove a fault element to succeed in a product liability action; it is sufficient to show that damage occurred and that the damage was caused by a defect in the product concerned.
The criteria for product liability claims are contained in Part I of the Consumer Protection Act 1987, 9 which implements the European Union (EU) Product Liability Directive. 10 The rules relating to product liability have not changed substantially as a result of the United Kingdom’s withdrawal from the EU, save that the power to amend product liability rules by subordinate legislation has been abolished. 11 This indicates that product liability remains a core component of consumer protection law in the United Kingdom, as primary legislation will need to be brought forward if any further amendments to the law in this area are to be introduced. Part I of the Consumer Protection Act 1987 gives the claimant the right to compensation from a producer if a defective product has caused personal injury, death, or property damage. 12
Nevertheless, it is unclear whether the statutory provisions can be interpreted to enable a patient to claim against a producer of a defective AIS which dispenses inaccurate information to a clinician involved in the patient’s care. Applicable case law has not directly engaged with this hypothetical scenario. 13 In the following paragraphs, we explore some of the problematic elements of product liability which could present obstacles to bringing a claim against a producer for defective AIS; namely, questioning if AISs are recognised as ‘products’ in law, the identification of ‘defects’ in AISs, the difficulty of identifying the level of safety that persons are ‘entitled to expect’ from the AIS in question, and the challenges of causation.
Artificial intelligence software as a ‘product’
Many products incorporating AI software are now available on the consumer healthcare products market. However, in clinical contexts, more sophisticated medical devices may be involved in patient care. Use of these devices is often intended to be solely under the direction of a trained clinician rather than direct use by a patient in the absence of clinical supervision.
Assuming the patient has the requisite legal standing, the next question is against whom may they bring a claim? According to the Consumer Protection Act 1987, the responding party is the ‘producer’ of the product, someone who holds themselves out as the producer or a domestic supplier who imports the defective product and places it on the UK market. 14 In our scenario, software development companies (SDCs) create AISs and may therefore represent a category of likely candidates for the defendant of a product liability action.
Relevant ‘products’ for the purposes of the legislation are defined as any ‘goods’, including electricity. 15 However, UK Government guidance published in 2001 states that product liability ‘is not intended to extend to pure information’. 16 In the United Kingdom, goods composed of ‘pure information’, such as computer software, qualify for product liability only when supplied as a component of a physical product. However, other European jurisdictions (e.g. Estonia, France) accept non-embedded software as products which attract protection under their national product liability regimes. 17 Schönberger argues that this disparity is a choice on the part of the legislators. 18 We believe that maintaining this anomaly is no longer tenable. Software packages are now routinely acquired separately and run on an array of compatible devices. Cloud computing takes this one step further: the desired program is accessed remotely over the Internet without requiring the software to be installed on a physical device.
In this section, we demonstrate that the exclusion of non-embedded software from the definition of ‘product’ in the Consumer Protection Act 1987 is inconsistent with more recent consumer protection legislation. Later in this article, we further argue that the mismatch between product liability and safety regulation leaves patients, and consumers more generally, without an adequate remedy for defects.
Comparison with the Consumer Rights Act 2015
The Consumer Rights Act 2015 consolidated and amended existing statutory provisions relating to consumers. It defines ‘goods’ as ‘any tangible movable items’, including metered electricity, gas, and water. 19 Thus far, this definition is consistent with the definition of ‘products’ in the Consumer Protection Act 1987. However, the Consumer Rights Act 2015 adds another category in respect of ‘digital content’, meaning ‘data which are produced and supplied in digital form’. 20
While the Consumer Protection Act governs product safety and the Consumer Rights Act is concerned with the economic rights of a consumer, safety is still considered in both statutes. Aspects of ‘quality’ identified in the Consumer Rights Act for both goods 21 and digital content 22 include that the items must be fit for the purpose for which they are supplied, free from minor defects, and safe. The Consumer Rights Act acknowledges that there are differences between tangible goods and intangible digital content; nevertheless, similar standards apply to both categories. However, the Consumer Rights Act only prevents traders from excluding negligence liability for death or personal injury, 23 but does not provide a route for claimants to bring an action against a manufacturer, such as an SDC, on a no-fault basis. Product liability offers an additional layer of protection as then a ‘manufacturer can be deemed responsible for any defect in the product which gives rise to personal injury or death’. 24 At present, because the Consumer Protection Act 1987 does not extend to products of ‘pure information’, harm resulting from AISs may fall outside the scope of product liability.
We therefore recommend that the statutory definition of ‘products’ in the Consumer Protection Act 1987 be expanded to include software packages, along the lines of the provision for ‘digital content’ in the Consumer Rights Act 2015. This amendment would bring much-needed clarity to the law and avoid the need for costly litigation. It would also resolve the present situation where various consumer protection doctrines are applied asymmetrically. Consistency requires that the rules concerning the consumer’s economic rights, safety regulation, and entitlement to compensation for harms mirror one another.
Defects
According to the Consumer Protection Act 1987, a product is ‘defective’ if the product’s safety is not as ‘persons generally are entitled to expect’. 25 Such expectations may be a result of how the product was marketed, any marks used (e.g. a safety mark or quality mark), and any warnings or instructions provided. 26 Each of these factors can influence user expectations. Similarly, the court may consider what ‘might reasonably be expected to be done with or in relation to the product’ suggesting that the manner in which the product is used ought also to conform to a reasonable standard of behaviour on the part of the user. 27
Reed claims that ‘products incorporating AI which have been tested and shown to perform better than their predecessor versions are unlikely to fall within the definition of “defective”’. 28 At first glance, this seems reasonable; however, AISs which claim to possess such improvements hold the possibility of creating unique problems of their own.
The rise and novel development of artificially intelligent products has resulted in a quality that makes particular groups of AISs desirable – their capacity for incremental learning. Yet this quality may also prove to be a cause of future AIS defects which eventuate in patient harm. As an example, AIS decision-making may be determined by processes such as machine learning. Here, the AIS’s outputs may change over time as an iterative response to what it learns; thus, an output that an AIS may have dispensed for an individual patient yesterday might be different tomorrow, as the AIS has had the opportunity today to further refine its decision-making processes when interacting with other patient cases. Where the recommendations that an AIS offers change frequently, a clinical user may find it challenging to ensure that the AIS that they are using is free of defects. For instance, a clinician may not be able to verify that the AIS output is appropriate for an individual patient, even if they are assured by the marketing that it is safe to use an AIS whose outputs change over time. While testing and certification may provide evidence to a clinician that an AIS is safe to use in making decisions regarding patient care, there remains the possibility that a novel output may be harmful if used. The clinician may well have behaved appropriately by inputting patient data correctly and considering the output before using it, but the output may be subtly different each time, and such differences may come to seem routine to the clinical user in their experience of using the AIS. As a result, the clinical user may have acted reasonably when choosing to use an AIS that had previously adequately informed patient care, yet might not recognise that the AIS has become defective and dispensed a potentially harmful output, which they then use in patient care, resulting in the ‘transmission’ of the defect from the AIS to the patient.
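For readers unfamiliar with how incremental learning can produce this kind of drift, a deliberately simplified sketch follows. It uses a toy perceptron-style linear classifier with invented numbers and hypothetical labels (‘refer’/‘no-refer’); it does not represent any real clinical AIS. The point it illustrates is that the same patient input can yield a different recommendation after the model has learned from other cases in the interim.

```python
# Toy illustration (hypothetical model and data, not any real clinical AIS):
# an online learner's recommendation for the SAME patient input can change
# after further incremental training updates on other cases.

def predict(weights, features):
    """Return 'refer' or 'no-refer' from a linear score over the features."""
    score = sum(w * x for w, x in zip(weights, features))
    return "refer" if score > 0 else "no-refer"

def update(weights, features, label, lr=0.5):
    """One incremental learning step: nudge the weights towards the label."""
    target = 1 if label == "refer" else -1
    score = sum(w * x for w, x in zip(weights, features))
    if (score > 0) != (target > 0):  # only update when the model errs
        weights = [w + lr * target * x for w, x in zip(weights, features)]
    return weights

patient = [1.0, 0.2]   # fixed input describing one individual patient
weights = [0.3, 0.4]   # the model's state 'yesterday'

before = predict(weights, patient)   # recommendation yesterday

# Overnight, the system refines itself against other patient cases...
for features, label in [([1.0, 0.3], "no-refer"), ([0.9, 0.1], "no-refer")]:
    weights = update(weights, features, label)

after = predict(weights, patient)    # recommendation today, same patient

print(before, after)  # → refer no-refer
```

Nothing about the patient changed between the two predictions; only the model’s internal state did. This is the mechanism by which an AIS that ‘previously adequately informed patient care’ can silently begin dispensing a different, potentially defective, output.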
Howells comments on such foreseeable risks of AI, noting that a comparison may be drawn between AISs and ‘many other products such as pharmaceuticals and medical devices where trade-offs between efficacy and side-effects have to be made regularly’. 29 Yet, where the use of an AIS output results in harm, ‘such a scenario should have been taken into account in the design if liability was to be avoided by programming the AI device to avoid dangerous choices’. 30 We hold that if it is foreseeable that an AIS could offer recommendations that may result in patient harm, then its release to the clinical domain ought to be withheld until such potential harms can be neutralised.
Identifying what levels of safety people are entitled to expect is a notoriously complicated exercise. Consumers will naturally want the product to be perfect in all respects. However, the notion of ‘safety’ is variable; for example, lack of conformity to a voluntary industry standard does not necessarily mean that the product in question is unsafe, and therefore defective. 31 On the contrary, if a product or its use carries inherent risks, then the expected standard may be much higher.
The use of AISs in patient care may likewise be deemed to carry inherent risks. Ultimately, the question as to whether errors in an AI system qualify as ‘defects’ for the purposes of product liability will turn on the parameters set on the expectations of safety. This may be heavily influenced by any user guide provided alongside the AI system.
Recitals in the original Product Liability Directive indicate that the scope of liability is limited to consumers and thus, by implication, does not extend to products used in non-consumer contexts. 38 However, the text of the Consumer Protection Act 1987, as the implementing statute, does not define who qualifies as a claimant. This lack of definition allows scope for a patient’s claim to be heard even though they were neither the original purchaser nor operating the AIS in question. In determining what might reasonably be expected to be done with a product in circumstances relating to the existence of a defect, Hickinbottom J indicated that the instructions and warnings accompanying the product are relevant considerations.
Therefore, a possibility arises whereby a producer could attempt to restrict potential liability towards patients through instructions or warnings that accompany an AIS. 40 Such instructions or warnings may make clear that an AIS intended for use as a medical device is not to be used without clinical supervision. However, this may have the effect that all legal responsibility for defects falls onto the shoulders of the expert clinical user. While this might be legally allowable, we condemn such a strategy on ethical grounds. No actor ought to be allowed access at the patient’s bedside, whether in the form of an AIS which that actor has designed or otherwise, if they are not prepared to take moral responsibility for their contributions to that patient’s harm. If the law does not provide an adequate route to obtaining compensation from producers of defective AIS, clinicians may find themselves defending lawsuits under other torts.
Causation
As with other torts, product liability requires proof of causation. 41 The Consumer Protection Act 1987 is silent on what criteria are relevant, so the normal causation doctrines apply as at common law.
Defences
If the claimant succeeds in persuading the court that an AIS constitutes a product and demonstrates that the harm eventuated from a qualifying defect in the device, compensation may be available under product liability for our hypothetical patient. While a producer may not exclude liability for defective products by means of contractual clauses, 46 the defendant has a number of statutory defences available to them. 47 The defence that bears most relevance to the present discussion is the so-called ‘development risk defence’. 48
The ‘development risk’ defence
SDCs, in common with all producers, incur the risk that the design of their products may not mitigate fully against the risk of damage occurring if a defect materialises. Indeed, it might be argued that observable facts known about a product might not be exhaustive, either at the time of the product’s inception or at any other time. 49 For example, software is routinely supplied without any warranty that the code is bug-free because it would be misleading to claim that the package would function as intended in every possible setting or environment. As such, there is tacit acceptance of the fact that mistakes will be latent in an AIS, meaning that an SDC cannot guarantee that this defence will work in their favour. 50
To take advantage of this defence, the defendant must show that the ‘state of scientific and technical knowledge’ at the time meant that the defect was not discoverable. 51
Johnston contends that the statutory definition of ‘defect’ as that which falls below the standard which ‘persons generally are entitled to expect’ is higher than it might be scientifically possible to achieve due to the subjective nature of consumer expectations. 56 Concurrent with this idea, Hodges argues that the consumer shares with the producer the unknown risks of technological innovations. 57 This argument goes against the grain of the strict liability regime where the producer’s economic interests are subordinate to consumer safety. 58 Therefore, when considering the deployment of an AIS to aid clinical decision-making, patient safety is not an illegitimate expectation.
The role of safety regulation
The regime introduced by the Consumer Protection Act 1987 as originally enacted was composed of two interrelated parts. Part I provides for the strict liability doctrine explored above. Complementary to this was a system of safety regulation in Part II of the 1987 Act, through the power to make regulations with respect to medical devices, which is now to be found in the Medicines and Medical Devices Act 2021. 59 Both parts are intended to work in tandem.
The asymmetrical application of these twin regimes exhibits a worrying inattentiveness to inconsistencies in the law that if left unresolved could lead to patients being unable to enforce product safety through the deterrent effect of strict liability. Safety regulations for medical devices expressly include software within their scope. 61 However, as we have demonstrated above, a claimant will struggle to enforce the applicable regulatory standards through the mechanism of product liability.
Schönberger argues that this discrepancy has its roots in the EU legislation from which the product liability and safety regulation provisions are derived. 62 Since its adoption in 1985, the Product Liability Directive has only been amended on one occasion. 63 This amendment included electricity as a product and removed the option for EU Member States to exclude agricultural products from the scope of the Directive. Meanwhile, Directives concerning medical devices have included software components from the outset. Initially, the inclusion of software as part of the safety regulation of medical devices came with the proviso that the software was ‘necessary for [the] proper application’ of the device. 64 This qualification has since been removed from the latest iteration of this legislation. 65 There is also recognition that software can be ‘devices in themselves’ independent of any hardware. 66
However, in UK jurisdictions, the definitions used in the existing safety regulations are likely to become outdated. This is a consequence of the delayed implementation of EU Regulations relating to medical devices, 67 meaning that they fell outside the scope of the transition period for the United Kingdom’s withdrawal from the EU, which ended on 31 December 2020. 68 For now, the UK Medicines and Healthcare products Regulatory Agency (MHRA) states that ‘existing regulatory requirements should continue to be met’ and that guidance will be provided in due course according to governmental decisions on the future of safety regulation for medical devices. 69 It is therefore likely that some regulatory divergence will occur, as the applicable domestic safety regulations are derived from the old EU Directives. 70
Despite the treatment of software as an integral component of medical devices for the purposes of safety regulation, software remains excluded from the definition of a product under product liability rules. Here again EU institutions appear to acknowledge the problem. The Expert Group on Liability and New Technologies asserts in its report to the European Commission that ‘strict liability of the producer should play a key role in indemnifying damage caused by defective products and their components, irrespective of whether they take a tangible or a digital form’. 71 Despite this and other reviews of the operation of product liability legislation, no proposals that would expressly include digital content as part of product liability have been put forward in the EU or the United Kingdom.
It seems unlikely that in the absence of recommendations for reform, the rules relating to product liability will be amended soon. Prior to the United Kingdom’s withdrawal from the EU, the Consumer Protection Act 1987 contained a provision that allowed for modifications to be made by means of subordinate legislation. 72 This was repealed upon the expiry of the transition period on 31 December 2020, 73 meaning that any further amendments will need to be made by primary legislation. For this reason, the Medicines and Medical Devices Bill was introduced to provide a means for the United Kingdom to diverge from the EU policy approach in this area. 74 The absence of provisions in the Medicines and Medical Devices Act 2021 to rectify the discrepancy between product liability rules and safety regulations is another missed opportunity. The main provisions relating to medical devices make no substantive changes to UK law, other than granting powers to government ministers to amend the existing transposed safety regulations in the future. 75 However, it must be acknowledged that the doctrines relating to product liability are generally applicable and not confined to the niche medical issues considered in this article. Any proposals to change the law should rightly be subject to full scrutiny.
Aside from the legislative developments described above, how does safety regulation fit within the overall framework of protection of the patient’s interests to avoid harm? McCartney argues that medical devices with an AI component should be robustly tested prior to clinical use and the legally binding safety regulations underpin this. 76 Such work is an opportunity for AISs to be designed with patient safety in mind, rather than to rely upon incremental developments through case law after harm has occurred. But who should monitor compliance with the applicable safety standards? In the United Kingdom, there are a number of regulatory bodies with an interest in the deployment of AI in patient care. 77 Each corresponds to separate issues in the use of AISs in the clinical environment. For example,
The MHRA manages conformity assessments which must be passed prior to placing the product on the market. 78
The National Institute for Health and Care Excellence (NICE) assesses the economic value of available treatments and medical technologies to determine if they ought to be deployed in hospitals and clinics operated under the NHS. 79
In England, the Care Quality Commission inspects healthcare settings to verify that they are being operated safely. 80 Other safety inspectors carry out similar assessments in Wales, Scotland, and Northern Ireland.
Regulators of the professions, such as the General Medical Council, ensure that clinicians follow ethical standards when choosing to use AISs in patient care. 81 For instance, the code of professional conduct for doctors requires prompt action to remove patients from risks posed by inadequate equipment. 82 Furthermore, doctors have an obligation to ‘report adverse incidents involving medical devices that put or have the potential to put the safety of a patient, or another person, at risk’. 83
This patchwork of regulators constitutes a liminal space – an in-between-ness 84 – which aligns with the United Kingdom’s current political consensus that matters pertaining to AI ought to be subject to sector-based regulation, rather than being subject to overarching oversight. 85 It seems that, despite the multiple mechanisms in place which would influence the development and final permission to use an AIS, it is the clinician who fills the liminal space between the regulatory assurances and an AIS’s use in clinical practice. Without clinical judgement, there is no final fail-safe assurance that the recommendation dispensed by the AIS is appropriate for the patient in question. The clinician’s role is key here as, despite what appears to be a relatively saturated regulatory environment, there have been major failings when it comes to holistic oversight of medicines and medical devices in the past. 86 To this end, the UK Government bowed to pressure in the House of Lords to establish a statutory Patient Safety Commissioner during the passage of the Medicines and Medical Devices Bill through Parliament. 87 The new Patient Safety Commissioner has no powers of enforcement of their own but by carrying out hearings and investigations into future patient safety incidents, the Commissioner can encourage the existing array of regulators to improve their respective standard-setting, monitoring, and enforcement practices. 88
We believe that stakeholders will benefit from a unified approach to regulatory oversight. If AISs are to be introduced for use at the patient’s bedside, we urge the Patient Safety Commissioner to commence an overarching review of the inconsistencies between product liability doctrines and the safety standards imposed by regulation. As we have demonstrated above, strict liability for defective products is a valuable deterrent that incentivises safety in design from the outset, rather than externalising liability for harms onto unwitting clinicians or patients. The aim of such a review should be to ensure that patients have all possible legal avenues open to them so that, in the long term, patient outcomes regarding the use of AI technology are improved and the risk of harm reduced.
Breach of statutory duty
Earlier in this article, we demonstrated that under the current rules of product liability, patients harmed by AISs will struggle to bring successful claims. In the previous section, we argued that the protection of patients as consumers of healthcare products may be better served by a consistent approach: adequate safety regulation on the one hand, supported by strict liability enforcement by patients as claimants against producers of unsafe, defective products on the other. In this section, we consider whether the apparent liability gap we have identified is adequately addressed by allowing patients to sue for breach of statutory duty.
The general principle is that claimants can bring actions for breach of some statutory duties, but not all. Only those obligations which are contained in legislation for the purpose of protecting members of the public from specific dangers will suffice. This category of statutory duties is distinct from other obligations often contained in legislation which, although similarly drafted, are not intended to protect individuals from harm but instead serve some other social or regulatory purpose. 89
The classic examples of duties which the courts have ordinarily held to offer public protection are those mandating the installation, maintenance, and proper use of fences and gates inhibiting access to the railway. 90 At the time the applicable case law precedents were being decided, rail transport was an emergent technology with which the public were unfamiliar and could not reasonably be expected to adapt their behaviour to mitigate risks. The tort of negligence had not yet developed into the general tort it is today, and without express provision to the contrary, the court was reluctant to make findings of tortious liability in novel situations. Fortunately, Parliament was alive to the potential for harm occurring en masse arising out of the inherent dangers of the railway and passed legislation imposing safety obligations on railway operators. Statutory duties therefore had the full force of law but were difficult to enforce and frequently disregarded. The tort of ‘breach of statutory duty’ emerged as a response to this enforcement gap.
We can extend this analogy to AISs in clinical settings. AISs are recent innovations, the risks of which are not yet fully known, nor is it often possible to predict the scope and extent of damage that may occur as a result of defects in the algorithms. Given that the regulatory obligations contained within the Medical Devices Regulations 2002 (as amended) have the clear objective of ensuring the safety of medical devices for use on humans, it would appear logical that these provisions are plainly for the protection of the public, who are the end users. Therefore, in England and Wales, a breach of these duties could be enforceable against SDCs, giving patients an alternative route to a remedy.
Were a patient to bring a claim for breach of statutory duty, they would need to prove that they are a member of the class which the legislation is intended to protect. Cases involving railways are again instructive here. 91 Furthermore, in common with claims pursued under the ordinary rules of negligence, the claimant must also show that they have suffered actual damage. 92 These principles are mechanisms to prevent unmeritorious or vexatious claims being made against duty holders. However, they should not present too onerous a hurdle for our hypothetical patient, as it is logical that the safety standards imposed upon producers of medical devices exist to mitigate the risk of harm to patients, who are the end users of such products. Wherever a medical device, even one comprising artificially intelligent software, causes harm to a patient or otherwise interferes unlawfully with the patient’s bodily integrity (including by misdiagnosis or inaccurate measurement), there is a strong moral argument for a remedy to be made available. Fortunately, for present purposes, the Medicines and Medical Devices Act 2021 explicitly states that a breach of the safety regulations ‘gives rise to a right of action for breach of statutory duty’, 93 mirroring the position with respect to the safety of non-medical goods. 94
Breach of statutory duty thereby provides an additional route to compensation for affected patients and merits consideration alongside product liability. The potential for claims for breaches of statutory duty could encourage SDCs to act carefully to avoid contravening their regulatory obligations.
The case for regulatory reform
One of the major advantages of product liability is that it is a strict liability regime, imposing less of an evidential burden upon claimants. Conversely, the very name of the tort of breach of statutory duty indicates an element of fault on the part of the defendant in failing to execute their obligations under the law, which the claimant must prove in order to succeed. 95 Therefore, at first glance, the breach of statutory duty route does not appear to offer claimants any substantial benefit over the broader doctrines of the tort of negligence. Here, we question whether fault-based schemes, such as breach of statutory duty or conventional negligence liability, are appropriate or even attractive to claimants, who might otherwise prefer the option of strict liability claims under the tort of product liability to be made available to them.
Product liability incentivises producers to maximise the safety features of their goods and attempt to avoid the risk of harm. As a strict liability approach, product liability (if reformed) could provide a strong signal to SDCs that they need to consider the safety aspects of their designs, as they would become potential defendants for any defective AIS they produce. Consequently, proposals for reform of product liability to extend its scope to ‘digital’ goods may be met with fierce resistance from SDCs. It is important for policymakers and regulators to be aware of the possibility of opposition from the AI community when developing recommendations for reform. The likely alternative, as Abbott suggests, is that an operator of an AIS in a clinical setting may be held liable in negligence. 96
If the latter route of negligence is available to claimants, clinicians who rely on a faulty AIS output might be found liable for breach of their duty of care, as it is theoretically foreseeable that any AIS could contain defects. Yet the foreseeability of the harm must be a real and not a fanciful risk; indeed, clinicians would take steps to address such a risk by engaging their professional clinical knowledge and interrogating the worthiness of an AIS output prior to its use. However, if harm did occur, the clinician would be left with no other party to whom they might transfer or apportion liability, and would thus be forced to shoulder the entirety of the patient’s loss. The authors have argued elsewhere that this scenario is unfair to clinicians. 97 Despite the SDC also being a participant in the harm caused by its defective AIS, the clinician’s position effectively deflects claims away from the SDC, leaving clinicians to carry the liability burden for potentially defective AISs.
This is not the only problem in relying on negligence as a remedy against SDCs for defective AISs. Under the tort of negligence, Abbott argues, an AIS would be ‘held to the standard of a reasonable person’. 98 For example, if an AIS failed to diagnose a condition where a reasonable human clinician would have made an appropriate diagnosis, then the underperformance of the AIS would be evidence of a breach of the standard of care in negligence. 99 This would be consistent with the doctrine expressed by the court in the cases of
Yet, without reform, patients harmed by AISs can at present rely only on claims in negligence or for breach of statutory duty. Negligence is a broad tort and is therefore amenable to a variety of contexts, but in this instance we argue that negligence is overbroad in its scope because both the SDC and the clinician may be potential defendants. Breach of statutory duty could be a promising alternative; however, it is not readily adaptable to specific factual situations. Nevertheless, as we have suggested throughout this article, a suitable remedy exists in product liability. The fact that product liability operates as a strict liability regime does not mean it is an inflexible remedy. For example, the court has long indicated that, when dealing with a product liability claim, it will consider a number of factors as contributing to evidence that a product fails to satisfy the legal test of whether the product is defective. 104 The main barrier to patients pursuing a claim for a defective AIS is the rules preventing software and digital goods such as artificial intelligence from being treated as eligible products. Future injustices could be resolved through anticipatory reform bringing AISs and other ‘software-only’ medical devices into the scope of product liability.
By comparison, in his writings about the PIP breast implants crisis, Smart notes that a claimant would prefer to claim that the product was simply not fit for purpose under the Consumer Protection Act rather than having to prove the tort in a negligence claim. 105 Given that a strict liability regime may be more convenient for claimants, as well as incentivising defendants to comply with safety expectations, we strongly urge the Patient Safety Commissioner, the MHRA, and the Law Commission of England and Wales to incorporate strict liability for AIS defects in any proposals they lay before Parliament to update the rules on product liability as part of any post-Brexit reform packages. The twin protections of product liability and safety regulation, as originally contained together in Parts I and II of the Consumer Protection Act 1987, should not be permitted to diverge further. The regulatory independence brought about by Brexit is an opportunity to re-emphasise the link between these two pillars and update them for the digital age.
Conclusion
In this article, we discussed how product liability can be applied in the scenario where a clinician acts on an erroneous AIS recommendation and the patient comes to harm as a result. We have identified barriers to the application of product liability in the context of AISs deployed in clinical environments. Turner reminds us that common law systems are occasionally prone to producing inconsistent doctrines because ‘hard cases make bad law’. 106 We are concerned that legal responsibility for the use of AISs in clinical decision-making could be placed disproportionately on the shoulders of the clinical user.
Nevertheless, we do not believe that these barriers are insurmountable, and our discussion has offered solutions. We have argued that it may be possible to establish that an AIS should be recognised as a product and made subject to product liability, so that those harmed by defective AISs are not barred from accessing this remedy. Furthermore, we have also shown that AISs are considered to be medical devices for the purposes of safety regulation, but developments in this area now outpace the inflexible strict liability rules derived from the EU Product Liability Directive. Liability for breach of statutory duty on the basis of non-compliance with the Medical Devices Regulations 2002 may be one alternative route to compensation, but as it is a fault-based scheme, it shares the disadvantages inherent in negligence liability. As EU-derived laws are revised in the future, the UK Government has an opportunity to reform product liability. Carefully planned and well-considered reforms could help victims of defective AISs to obtain the compensation they morally deserve. It would be reassuring for all stakeholders if liability for harms that eventuate from defective AISs were allocated fairly and proportionately through effective legal remedies. We watch international legislative developments with interest and hope that policymakers in the United Kingdom will adopt statutory amendments which serve to protect patients and hold producers of defective AISs to account.
Footnotes
Acknowledgements
The authors gratefully acknowledge Katie McCay, whose kind provision of notes on AI and product liability aided the initial drafting of this work. We are also grateful to the editorial team and anonymous reviewers of Medical Law International, whose comments and suggestions greatly strengthened our work. Thank you all; your time, efforts, and support have made this publication possible.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Helen Smith is part-funded via the UKRI’s Trustworthy Autonomous Systems Node in Functionality under grant number EP/V026518/1.
1. H. Smith, ‘Clinical AI: Opacity, Accountability, Responsibility and Liability’,
2. Op. cit.
3.
4. J. De Fauw, et al. ‘Clinically Applicable Deep Learning for Diagnosis and Referral in Retinal Disease’,
5.
6. Op. cit.
7.
8. H. Smith and K. Fotheringham, ‘Artificial Intelligence in Clinical Decision-Making: Rethinking Liability’,
9. Consumer Protection Act 1987, c.43.
10. Council Directive of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (85/374/EEC) (Product Liability Directive).
11. Product Safety and Metrology etc. (Amendment etc.) (EU Exit) Regulations 2019, SI 2019/696, Sch 3, para 5.
12. Consumer Protection Act 1987, c.43, s 3(1).
13. European Commission.
14. Consumer Protection Act 1987, c.43, s 2(2); see also definition of ‘producer’ in Consumer Protection Act 1987, c.43, s 1(2).
15. Consumer Protection Act 1987, c.43, s 1(2); see also definition of ‘goods’ in Consumer Protection Act 1987, c.43, s 45.
16.
17. European Commission.
18. D. Schönberger, ‘Artificial Intelligence in Healthcare: A Critical Analysis of the Legal and Ethical Implications’,
19. Consumer Rights Act 2015, c.15, s 2(8).
20. Consumer Rights Act 2015, c.15, s 2(9).
21. Consumer Rights Act 2015, c.15, s 9(3).
22. Consumer Rights Act 2015, c.15, s 34(3)(c).
23. Consumer Rights Act 2015, c.15, s 65.
24. A. Smart, ‘The Pip Crisis and the Protection of the Consumer in English Law’,
25. Consumer Protection Act 1987, c.43, s 3(1).
26. Consumer Protection Act 1987, c.43, s 3(2).
27. Op. cit.
28.
29. G. Howells, ‘Protecting Consumer Protection Values in the Fourth Industrial Revolution’,
30. Op. cit.
31.
32.
33. Product Liability Directive 85/374/EEC, article 6(b).
34. Product Liability Directive 85/374/EEC, article 7(e).
35.
36.
37. Ross and Swetlitz, ‘IBM Pitched Watson as a Revolution in Cancer Care. It’s Nowhere Close’.
38. Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.
39. [2018] QB 627.
40. Consumer Protection Act 1987, c.43, s 3(2).
41. Consumer Protection Act 1987, c.43, s 2(1).
42.
43.
44. BLM Law, ‘Consumer Protection Act 1987 – recent cases on causation’ (date unknown), available at https://www.blmlaw.com/?appid=2119&fileid=864 (accessed 4 August 2020).
45. [2016] EWCA Civ 847.
46. Consumer Protection Act 1987, c.43, s 7.
47. Consumer Protection Act 1987, c.43, s 4.
48. C. Hodges, ‘Development Risks: Unanswered Questions’,
49. Op. cit.
50. C. Pugh and M. Pilgerstorfer, ‘The Development Risk Defence – Knowledge, Discoverability and Creative Leaps’,
51. Consumer Protection Act 1987, c.43, s 4(1)(e).
52. [2001] 3 All ER 289.
53.
54.
55. Consumer Protection Act 1987, c.43, s 3(2).
56. C. Johnston, ‘A Personal (and Selective) Introduction to Product Liability Law’,
57. Hodges, ‘Development Risks: Unanswered Questions’.
58. M. Mildred and G. Howells, ‘Comment on “Development Risks: Unanswered Questions,”’
59. Medicines and Medical Devices Act 2021, c.3, s 15.
60.
61. Medical Devices Regulations 2002, SI 2002/618, r 2(1).
62. Schönberger, ‘Artificial Intelligence in Healthcare’.
63. Directive 1999/34/EC of the European Parliament and of the Council of 10 May 1999 amending Council Directive 85/374/EEC on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.
64. Article 1(2), Council Directive 90/385/EEC of 20 June 1990 on the approximation of the laws of the Member States relating to active implantable medical devices; Article 1(2), Council Directive 93/42/EEC of 14 June 1993 concerning medical devices; Article 1(2), Directive 98/79/EC of the European Parliament and of the Council of 27 October 1998 on in vitro diagnostic medical devices.
65. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (Text with EEA relevance); Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (Text with EEA relevance).
66. Op. cit.
67. Regulation (EU) 2020/561 of the European Parliament and of the Council of 23 April 2020 amending Regulation (EU) 2017/745 on medical devices, as regards the dates of application of certain of its provisions (Text with EEA relevance).
68.
69. Op. cit.
70. Medical Devices Regulations 2002, SI 2002/618.
71. Expert Group on Liability and New Technologies, ‘Liability for Artificial Intelligence and Other Emerging Digital Technologies’ (European Commission 2019).
72. Consumer Protection Act 1987, c.43, s 8.
73. Product Safety and Metrology etc. (Amendment etc.) (EU Exit) Regulations 2019, SI 2019/696, Sch 3, para 5.
74.
75. Medicines and Medical Devices Act 2021, c.3, s 15.
76. M. McCartney, ‘AI in Health Must be Strictly Tested’,
77.
78.
80. Op. cit.
81. NHSX.
82. General Medical Council,
83. General Medical Council,
84. G. Laurie, ‘Liminality and the Limits of Law in Health Research Regulation: What Are We Missing in the Spaces in-Between?’,
85. Science and Technology Committee,
86.
87. HL Deb 17 November 2020, vol 807, col 657GC.
88. Medicines and Medical Devices Act 2021, c.3, s 1.
89. E.g.
90.
91.
92.
93. Medicines and Medical Devices Act 2021, c.3, s 38(2).
94. Consumer Protection Act 1987, c.43, s 41.
95.
96. R. Abbott, ‘The Reasonable Computer: Disrupting the Paradigm of Tort Liability’,
97. Smith and Fotheringham, ‘Artificial Intelligence in Clinical Decision-Making’.
98. Abbott, ‘The Reasonable Computer’, pp. 1, 25.
99. Op. cit.
100.
101.
102. J. Turner,
103. Op. cit.
104.
105. Smart, ‘The Pip Crisis and the Protection of the Consumer in English Law’.
106. Turner,