Abstract
The use of artificial intelligence (AI) in healthcare may, notwithstanding its potential benefits, result in harm to patients from allegedly negligent acts or omissions by hospitals and medical doctors. In such circumstances, how should the principles in the tort of negligence (duty of care, breach, causation, remoteness of damage, and defences) respond to AI innovations in healthcare? In particular, how may the standard of care expected of hospitals and medical doctors be informed by regulatory guidelines? We refer to case law precedents and regulatory guidelines on the roles and responsibilities of doctors and hospitals as AI implementers. Importantly, they prompt further reflection and consideration as to how regulatory guidelines can impact the application of judge-made principles in negligence in connection with, for example, the reliance on medical AI in clinical practice, the disclosure of AI usage and risks to patients and the challenges posed by the opacity and non-explainability of medical AI.
Introduction
Technological developments in artificial intelligence (AI) – the capacity of algorithms and computer systems to carry out tasks that typically require human intelligence – are poised to exert an increasingly significant influence on the medical field. Whether through the use of physical devices, robotics or clinical decision support systems based on machine learning (ML), 1 AI applications have been wide-ranging, extending to medical diagnostics, the prediction of diseases, and the detection of physical and mental disorders such as depression and bipolar disorder. Conversational AI systems provide feedback on the delivery of psychotherapy and have the potential to alleviate the shortage of clinical psychiatrists and psychologists to conduct therapies. 2 Generative AI, a type of AI that is capable of generating content from multiple data sources including electronic health records with support from large language models, can aid in the interpretation of patient data. 3 In addition, AI enables the application of personalised and precision medicine for the benefit of patients, supports the development of new drugs and vaccines 4 and enhances clinical workflow systems.
In clinical practice, AI can mimic human capabilities and perform tasks which surpass those of human doctors in terms of speed or accuracy. 5 Notwithstanding its superior performance, AI may commit basic mistakes which a reasonably competent medical professional would not countenance, 6 and this may arise from the different features that human experts and AI systems focus on when conducting specific clinical analyses such as medical diagnosis. 7 Even medical AI that is significantly more accurate or reliable than human doctors is not free from errors. Diagnostic and treatment errors are potentially risky and may, when the risks materialise, cause patients to suffer serious physical injuries or even death.
Similar to novel medical technologies, it takes time for medical doctors and hospitals to come to grips with the use of AI in healthcare and for clinical practices involving medical AI to develop and evolve. Medical doctors may have differing levels of knowledge about novel medical AI and divergent views as to the appropriate use of medical AI in clinical practice. Unlike other medical technologies, the use of AI presents certain unique challenges not least the capacity of the technology to learn and adapt as it is being used and applied in clinical practice. Furthermore, how the AI model will learn and adapt and generate recommendations and outputs may not be entirely transparent and intuitive to the implementers. Thus, the problem of AI opacity makes it difficult for hospitals and medical doctors to assess and predict how medical AI will respond in particular healthcare situations affecting patients.
Patients who are harmed by the negligent acts or omissions of doctors and hospitals in implementing AI have recourse under the tort of negligence to monetary compensation. In order to recover damages, the patient would have to show that the doctor or hospital owed the former a legal duty of care, breached the duty, that the breach caused the injuries and the injuries were reasonably foreseeable (i.e. not remote). 8 Doctors and hospitals would owe a duty of care to their patients in implementing medical AI to support or facilitate their provision of medical services to patients. Being fault-based, the tort of negligence behoves the doctor and hospital to take reasonable care to prevent or avoid risks of harm to patients. The standard of care is to be assessed at the time of the incident without the benefit of hindsight as to how the technology would have evolved subsequent to the alleged breach.
This article examines the challenges in applying tort of negligence principles relating to duty of care, breach, proof of damage and defences in view of AI innovations in healthcare for AI implementers (hospitals and doctors) in the delivery of medical services. We will analyse, among other things, the different legal approaches to determining their negligent conduct in diagnosis and treatment as well as medical advice in the implementation and use of AI in clinical practice, and causal responsibility. The unique characteristics of AI technology and the significant potential of medical AI applications across the broad range of contexts of use raise intriguing questions. What are the circumstances in which hospitals and doctors ought to rely on (or reject) the use of medical AI in decision-making? How should they respond to medical AI systems which are continually learning and adapting to the different environments and contexts of use? Should they disclose to patients the use of medical AI? To what extent should the use of medical AI or its outputs be explainable to patients? Where should causal responsibility lie for patient injuries emanating from AI errors? We will address in this article the difficult challenges posed by the emerging AI technology to the application of (including potential modifications and adaptations to) the negligence principles, as well as the related question of the impact of regulatory guidelines on the development of the tort of negligence to better respond to the abovementioned challenges.
The arguments and analysis presented in this article would be relevant to lawyers who advise patients, doctors, and hospitals on their potential rights and liabilities arising from AI usage; to regulators concerning the potential impact, whether direct or indirect, of their guidelines on the law of negligence; and finally, to judges on the influence of medical AI in shaping the development of the legal principles relating to personal injuries claims in medical negligence. The focus will be the regulatory guidelines and common law principles of negligence in Singapore, with reference to relevant guidelines and principles from other common law jurisdictions including Australia, Canada, the United Kingdom, and the United States.
Regulatory framework of AI in healthcare: potential impact of regulatory guidelines on negligence principles
The application of negligence principles to the use of medical AI in the clinical context should be properly situated and analysed within the macro lego-regulatory system. The latter system, consisting of a series of legislation, regulations, guidelines and best practices, regulates the development, commercialisation, and usage of medical devices and software. It operates in tandem with the tort of negligence with its principles of civil liability and compensation for injuries that have arisen from such commercialisation and usage. Moreover, as we will discuss below, regulatory guidelines can also provide the background and context to the analysis and development of negligence principles relating to the standard of care and duty of care.
First and foremost, many developed jurisdictions address the regulation of medical devices including AI-driven medical devices through statutes, implementing regulations and guidelines. Despite the plethora of legislation and regulations in Australia, 9 Canada, 10 the European Union (EU), 11 Singapore, 12 and the United States, 13 there are some basic similarities. Medical devices are typically categorised into different classes based on levels of risks and may be subject to product registration requirements. The regulatory regime that is applied ex ante to such medical AI devices aims to reduce the risks of errors or harms prior to commercial usage but cannot operate as a complete shield against the risks of injuries to patients. An ex post negligence liability regime is nevertheless required to address the claims for compensation from injured patients 14 arising from the use of medical AI. In negligence lawsuits, would the defendants be entitled to rely on their adherence to the lego-regulatory system relating to medical devices? Would their decision to implement or use medical AI devices that have been approved by the regulators for commercial usage be regarded as reasonable?
Layering upon the statutes and implementing regulations, there may be a range of regulatory guidelines pertaining to the use of AI in healthcare. Two jurisdictional examples will suffice here. In Singapore, the Regulatory Guidelines for Software Medical Devices – A Life Cycle Approach (April 2020) cover product registration for artificial intelligence medical devices (AI-MD) and validation processes for their performance, taking into account continuous learning activities. 15 These guidelines are targeted at AI developers and suppliers. Subsequently, a set of draft Regulatory Guidelines for Classification of Standalone Medical Mobile Applications (SaMD) and Qualification of Clinical Decision Support Software (CDSS) was issued in 2021 to solicit public views and consultation. Both these documents clarified that they were not legally binding.
In the United States, the US Food and Drug Administration (FDA), cognisant of the ability of AI/ML in software to learn from its interactions with the real-world environment, issued in January 2021 the ‘Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan’. 16 The Action Plan envisages the continued monitoring of the software by manufacturers and regulators during premarket development and through to its postmarket performance while keeping a watchful eye on patient safety. 17 It also aims to progress on a few fronts: providing guidance on the Predetermined Change Control Plan (i.e. on the potential modifications from software learning via exposure to the environment), developing Good ML Practice (i.e. a set of data management practices), and adopting a patient-centred approach to enhance transparency to users.
The range of guidelines on AI use in healthcare may take the form of descriptive guidance or reporting frameworks. 18 The guidelines may cover the evaluation of AI usage at particular stages of the clinical process (e.g. pre-clinical development or live clinical evaluation or prospective evaluation) or specific study designs (e.g. randomised controlled trials, prediction model evaluation). 19
Globally, the World Health Organization (WHO) has recommended a set of minimum standards to be adhered to in respect of AI-based medical devices in areas such as model development, external validation, data management, and clinical impact evaluation. 20 The guidelines are applied to the use-case of cervical cancer screening as part of WHO’s strategy to eliminate the disease that afflicts women worldwide. While the WHO guidelines include recommendations targeted at developers and implementers, 21 the International Telecommunication Union (ITU) guidelines are focused on manufacturers and regulators. 22 There are also best practices and guides from domestic regulators, 23 professional bodies, 24 industry-specific guidelines and codes of practice, 25 as well as government agency guidelines from different jurisdictions 26 and even jointly issued guidelines from two or more national regulators. 27
What might be the potential (and nuanced) impact of regulatory instruments and guidelines on the tort of negligence? First, it is recognised at common law that guidelines issued by international bodies may be relevant to determining the standard of care in negligence. 28 Specific to the healthcare domain, clinical guidelines reliant on evidence-based medicine and randomised controlled trials are also referred to for assessing breach of duty in medical negligence litigation. 29 Furthermore, statutory duties imposed on an entity or person, for example, by way of regulations having the force of law, may result in the establishment of a common law duty of care in negligence. In these cases, fault-based principles of standard of care will apply. That said, when a common law duty arises from a statutory framework, the scope of the common law duty of care would not necessarily be concomitant with that of the statutory duty. 30
The standards laid down in codes of practice by competent authorities can sometimes be relied upon by specified groups for the purpose of determining negligence liability. 31 The position and views of regulators will likely be relevant but are not always conclusive as to the legal standard of care expected, which is ultimately a matter reserved for the court’s decision. 32
Where an entity or person did not follow industry practice (e.g., in taking the usual precautions against potential harm to the plaintiff), that omission may amount to evidence giving rise to an inference of negligence. 33 There is, however, no legal obligation to carry out or perform services above and beyond the prevailing industry standards set by the competent authorities unless these standards have been found to be manifestly inadequate. 34 It has been argued that government agency guidelines can be analogised to industry standards, but these are not determinative of the negligence standard. 35
The final observation relates to the legal consequences if any for deviations or non-compliance with the regulatory guidelines. Certain guidelines refer to disciplinary sanctions for failure to comply with the stated regulations or legal liabilities for injuries caused. 36 Others may specifically state that they are not legally binding and hence voluntary.
Based on the above discussion, the contents of the guidelines are relevant to the analysis of judge-made principles in negligence. As we will argue below, AI-related industry guidelines on the specific contexts of use of medical AI that are in line with negligence principles and which are generally accepted by the industry should constitute an important source of evidence underlying the evolving common law standards over time.
Tort of negligence and injuries to patients from AI errors: potential liabilities of AI implementers
While it is acknowledged that medical AI can generate significant benefits for patients with the advantages of speed and efficiency in processing voluminous data, errors resulting in harmful effects on patients can occur from its use whether in respect of AI-driven clinical decision support systems 37 in diagnosis and predictions, treatments including robotic surgeries 38 or the use of health apps. 39
The two main questions that we will address are (1) how should negligence principles apply to the novel technology of AI in healthcare; and (2) whether and, if so, how the principles, particularly the fault-based standard of care, may be influenced by regulatory guidelines on AI use in healthcare? The focus is on the potential liabilities of AI implementers in the tort of negligence. As discussed below, there will be challenges (though not insurmountable) in applying negligence law to AI implementers given the possibility of divergences of views of medical peers as to the prevailing medical practice with respect to the evolving technology. Moreover, we will encounter problems posed by the use of opaque medical AI with unpredictable outcomes that are dependent on its complex interactions with humans and the particular contexts of use. 40
Medical doctors generally owe a direct legal duty of care to their patients in respect of personal injury claims based on their professional and contractual relationship as well as the proximity of their relationship. 41 There are usually no countervailing policy considerations against such duty of care.
Such duty owed to patients should remain intact notwithstanding the use of AI technology in the delivery of medical services where the human doctor retains the decision-making power whether to accept or reject the AI recommendations or predictions. If the human doctor is ‘in the loop’, the direct relationship between the human doctor and the patients to whom the doctor provides medical services is maintained. In this regard, the human doctor is utilising medical AI as an aid, facilitator, or support in the course of delivering medical services to the patients. The situation is similar to a case where the human doctor uses sophisticated technology and equipment to conduct scans on the patient to aid in the diagnosis of the patient’s condition in order to decide on the appropriate treatments. The analysis of duty of care should not change merely because the human doctor generally relies in the ordinary course on the AI (or for that matter, the scanning technology) in the provision of medical services to the patient. Whether the human doctor should defer to (or reject) the medical AI recommendations in diagnosis or treatment is a question of standard of care (or breach of duty) which we will discuss in the next section.
When a human doctor or hospital undertakes responsibility for the care, supervision, and control of the patient, the duty is non-delegable. 42 Conversely, if the hospital only agrees to perform diagnostic services for a party who was not a patient and the function was delegated to a third-party laboratory to carry out the testing, the hospital does not owe a non-delegable duty to that party. 43 The determination of a non-delegable duty that is owed personally and directly to the patient rests principally on the existence of a hospital–patient relationship. Where it is established that a hospital or clinic owes a duty of care, custody, or supervision over a patient, they remain legally responsible to take reasonable care to prevent harm to their patients within the scope of the alleged non-delegable duty even if they may have outsourced some aspect of the care, custody, or supervision to a third party who negligently performed the AI analysis.
If, in addition to duties of care, custody, and supervision to a patient, the hospital also partakes in the development of medical AI with other parties, a question may be raised about the extent of duties assumed by the hospital. The duties of an AI developer may include designing, building, and testing the AI system or model, duties that are distinct from those arising from the usage of medical AI. For example, a hospital may owe a duty in respect of its reliance on AI recommendations in clinical practice but not for its limited contribution to the building of certain aspects of the AI model in conjunction with multiple AI developers. In this situation, the type of AI error that eventually results in injuries to patients would, apart from the analysis of breaches of duty, be crucial in determining the liability, if any, of the hospital. In this regard, it would be prudent to document clearly (e.g. in service-level agreements) the specific responsibilities that hospitals undertake in respect of the medical AI they have developed and/or implemented for the purpose of delivery of medical services to patients.
Standard of care of AI implementers: whether to implement or rely on medical AI, and the monitoring and oversight of AI usage
The more contentious legal issues are whether the doctor has breached the duty by providing diagnosis, treatments, post-surgery monitoring or advice falling below what is expected of a reasonably competent doctor, and in establishing whether the breach had indeed caused the patient’s injuries or death.
If the doctor who is adjudged to be liable in negligence in the use of medical AI was acting as an employee of the hospital, the hospital may have to compensate the injured patient on the basis of vicarious liability 44 even if it were not at fault. As mentioned above, hospital liability can also arise under the doctrine of non-delegable duties 45 towards patients under their custody and care even if an integral function within the hospital’s scope of duty (such as the diagnosis and treatment of the patient) has been delegated to an independent contractor which failed to take reasonable care in discharging the delegated function.
In addition to attributive liability discussed above, the hospital may be directly liable in negligence for any wrongful decision to implement the medical AI. Consider the following scenario: An AI development company has approached hospital XYZ offering a medical AI for diagnosing a type of cancer based on image analysis. The medical AI was trained on data collated from studies conducted in two Western countries and the developer intends to train the model on data obtained from studies on Asian populations. It is based on a deep learning system involving interconnected layers of data with different weights attached. The AI is capable of generating outputs indicating either the presence of the specific type of cancer (positive) or its absence (negative) and the level of confidence associated with each output. The doctor’s decision to accept or reject the AI output with respect to diagnosis or treatment for particular patients can be assessed through the lens of legal liability.
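To make the scenario concrete, the sketch below (in Python) shows one way such a system might convert raw model scores into a positive or negative output with an associated confidence level. It is a minimal illustration with invented numbers; the function, threshold, and scores are assumptions, not features of any actual product.

```python
import numpy as np

def classify_with_confidence(logits: np.ndarray, threshold: float = 0.5):
    """Map a binary classifier's raw scores for (negative, positive)
    into a diagnostic label and a confidence level."""
    # Softmax converts the raw scores into a probability distribution.
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    label = "positive" if probs[1] >= threshold else "negative"
    return label, float(probs.max())

# Hypothetical raw scores emitted by the deep learning model for one image.
label, confidence = classify_with_confidence(np.array([0.3, 2.1]))
print(f"Output: {label} (confidence {confidence:.1%})")  # positive (confidence 85.8%)
```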
In our discussion below on potential negligence liabilities of hospitals and medical doctors, we will examine one set of guidelines from Singapore, namely the Artificial Intelligence in Healthcare Guidelines (October 2021) (‘AIHGle’), jointly developed by the Ministry of Health, the Health Sciences Authority (HSA) and the Integrated Health Information Systems (IHiS).
The AIHGle contain recommendations related to AI-MD for AI developers as well as for AI implementers (e.g. hospitals and doctors). The developers design, build, and test the AI systems while the implementers make use of medical AI, monitor its use and conduct reviews periodically. The AIHGle state that the use of AI-MD does not change ‘the liability of the implementing institution or the individual medical professional in their provision of appropriate and safe care’. 48 This suggests that the existing liabilities of hospitals and medical doctors in the delivery of medical services remain, notwithstanding the use of medical AI. This statement itself is uncontroversial. What it does not specifically mention (but which we will explore in this section) is how the principles governing legal liabilities of hospitals and medical doctors can accommodate the use of medical AI in clinical practice, or whether these principles have to be modified in order to take account of the manner in which medical AI can impact on the delivery of medical services.
First and foremost, the hospital has to decide whether to purchase and implement the medical AI. There may be a range of considerations for implementing AI-MD, 49 such as the intended use and purpose of the AI-MD, its efficacy, the safety and quality of care provided, the representativeness of the training datasets, the existing regulatory approvals for the AI-MD, and the risks of implementation and mitigating measures. Where the hospital or doctor implemented the medical AI contrary to its intended purpose, disregarded the safety protocols or relied on non-representative datasets, the implementation decision may well be found to have fallen below the requisite standard of care.
Assume the scenario that the hospital has made the decision to implement medical AI for service delivery. With regard to a particular patient, the AI model generates an output recommending stage 4 cancer based on image analysis. Would the doctor be regarded as negligent if he or she relied on the AI output which turned out to be wrong? On what bases did the doctor accept the AI output? Case precedents that support justified reliance on medical technology based on the features of the technology 50 may be applied to medical AI. In one Singapore case, the court held that product distributors should not place unquestioning reliance on the approvals given by authorities when there is some suspicion concerning the product. 51 Similarly, the fact that the medical AI has been approved by the HSA for commercial use does not necessarily justify reliance by the hospital on the medical AI for clinical practice without further checks and verifications. Hence, beyond mere approvals by the authorities, the doctor should examine evidence indicating the accuracy and reliability of the AI technology being used for diagnosing cancers and be satisfied that there are no material doubts or suspicions regarding the proper functioning of the medical AI or the accuracy of the AI outputs.
The converse is also true. The doctor may be found negligent for the omission to rely on or use medical AI which has been found to be accurate, affordable, and superior in performance and efficacy to human doctors. This may amount to an unreasonable omission to adopt or access available technology. 52 Applying existing precedents, establishing negligence would depend in part on whether there is sufficient evidence of a general practice for such usage. 53 Such evidence cannot be taken for granted especially in the case of novel technology such as AI.
Human implementers may be subject to automation bias, that is, the tendency to favour machine-generated outputs either by disregarding the errors of AI (omission errors) or by accepting the AI decisions despite evidence to the contrary (commission errors). 54 The level of automation bias may be influenced by the user’s confidence in his or her own diagnosis or decision-making, trust in the decision support system (DSS), past experiences with the DSS, and the complexity and volume of the tasks at hand. 55
The decision whether to implement medical AI and the scope of implementation within a hospital may also be affected by medical insurance coverage. The insurance policy may allow reimbursement for claims from patients regardless of whether medical AI was used, or limit reimbursement to claims arising only from cases where the medical doctor adhered to the recommendations of the AI. Another factor relevant to the implementation decision might be the medical doctor’s concern for his or her reputation in diagnostic expertise, and not merely for patient well-being. In situations of uncertainty about the diagnosis of patients, studies have shown that medical practitioners may underutilise AI diagnostic tools due to the influence of their peers’ perception of their diagnostic ability even if the AI tool is more accurate in diagnosis than the human doctor. 56
Apart from making a judicious implementation decision, AI implementers would likely be expected to monitor the performance 57 of AI-MD, respond to adverse events 58 and to conduct reviews in the event of errors or to ensure its ‘clinical relevance’. 59 In addition, AI implementers should track the AI-MD at the point of deployment (i.e., ‘ground-truthing’) to determine the ‘deployment baseline’ so as to assess whether there are deviations from the intended performance at deployment. The deployment baseline should not fall below the current clinical practice baseline resulting in patients being ‘worse off’. 60 These guidelines on monitoring go towards ensuring that the performance of medical AI continues to keep pace with technological and medical developments.
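By way of illustration only, a monitoring routine of the kind contemplated by these guidelines might be sketched as follows. The baseline figures, tolerance, and alert logic are hypothetical assumptions rather than values drawn from the AIHGle.

```python
# Hypothetical post-deployment check: observed accuracy is compared against
# (a) the clinical practice baseline (patients must not be 'worse off') and
# (b) the 'deployment baseline' established by ground-truthing at go-live.

CLINICAL_PRACTICE_BASELINE = 0.88  # assumed accuracy of current human practice
DEPLOYMENT_BASELINE = 0.93         # assumed accuracy measured at deployment
TOLERATED_DRIFT = 0.02             # assumed tolerance before review is triggered

def review_performance(observed_accuracy: float) -> list[str]:
    alerts = []
    if observed_accuracy < CLINICAL_PRACTICE_BASELINE:
        alerts.append("ESCALATE: performance below current clinical practice")
    if DEPLOYMENT_BASELINE - observed_accuracy > TOLERATED_DRIFT:
        alerts.append("REVIEW: deviation from the deployment baseline")
    return alerts

for alert in review_performance(observed_accuracy=0.86):
    print(alert)  # both alerts fire in this example
```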
Drawing upon recommendations from manufacturers in the user manuals, AI implementers should maintain oversight over the AI-MD based on the intended use, workflows of the AI-MD and the clinical context. 61 For AI-MDs that are intended to be used alongside healthcare professionals, the AIHGle remind the implementers to ensure that their staff are trained to operate and interpret results from AI-MD. 62
Importantly, the guidelines anticipate that medical AI can ‘continuously learn and adapt during its deployment’. In this regard, the AI implementers are ‘encouraged to consider’ additional safeguards including utilising locked AI algorithms when in actual deployment, applying the learning post-deployment upon testing and validation, 63 and ensuring the quality, safety, and efficacy of AI-MD during deployment. 64
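A minimal sketch of the ‘locked algorithm’ safeguard, under the assumption that retrained candidates are staged behind an explicit validation gate, might look like the following; the version labels, metric, and threshold are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ModelVersion:
    version: str
    holdout_accuracy: float  # accuracy on a curated validation set

MINIMUM_ACCURACY = 0.90  # assumed institutional acceptance threshold

def promote_if_validated(live: ModelVersion, candidate: ModelVersion) -> ModelVersion:
    """Deploy the retrained candidate only if it passes validation and does
    not regress; otherwise the locked live model stays in service."""
    if candidate.holdout_accuracy >= max(MINIMUM_ACCURACY, live.holdout_accuracy):
        return candidate
    return live

live = ModelVersion("v1.0-locked", holdout_accuracy=0.93)
candidate = ModelVersion("v1.1-retrained", holdout_accuracy=0.91)
print(promote_if_validated(live, candidate).version)  # v1.0-locked stays live
```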
As can be seen from the examples above, with respect to the standard of care expected of hospitals and doctors in situations involving new technologies, regulatory guidelines such as the AIHGle can shed light on the different contexts of use of medical AI in healthcare and the industry expectations of AI implementers. The AIHGle, described as a ‘living’ document, will be ‘periodically updated to incorporate good practices in the rapidly developing AI landscape’. 65 Moreover, the fact that these guidelines are not legally binding allows the regulators greater flexibility to implement or adapt them in tandem with technological advances. 66 This does not mean that regulatory guidelines will always capture accurately the contexts of use and industry expectations which are inevitably dynamic in nature. In order for the guidelines to have any impact on the law of negligence, they would have to be first accepted and adapted for use within the schema of the tort of negligence, and in particular, the standard of care.
Any judicial uncertainties relating to ascertaining the standard of care may be tempered by negligence rules that can adapt to new situations through the extension of existing rules, drawing analogies with existing precedents and recourse to professional codes and practices. 67 At the same time, we should be alert to situations where the legal principles, even if they are extended or analogised, would not adequately address the fundamental nature of the technology. A case in point might be medical AI that will continuously learn as it is applied to different environments with sometimes unpredictable outcomes. We will encounter this issue again below in connection with negligence principles applicable to medical diagnosis and treatment and the giving of medical advice by doctors and hospitals to patients.
Medical diagnosis and treatment
AI errors may result in missed diagnoses of diseases (false negatives), unnecessary treatments due to incorrect diagnosis that a person has a disease (false positives), and inappropriate interventions or treatments due to incorrect diagnosis. 68 Errors that have arisen in clinical practice include mistakes in scanning (e.g. ultrasound), dataset shift (from the statistical distribution of the original dataset used in training the AI model to the statistical distribution of the dataset applied in clinical practice), and the inability of the algorithms to adapt to unexpected features in the environment and the misuse of AI algorithms. 69
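Dataset shift of the kind described above can in principle be screened for statistically. The sketch below, assuming the scipy library and using synthetic data, compares the distribution of a single input feature (for example, patient age) between the training population and the deployed setting with a two-sample Kolmogorov-Smirnov test; real monitoring would cover many features and more robust methods.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_ages = rng.normal(loc=50.0, scale=10.0, size=5000)   # training cohort
deployment_ages = rng.normal(loc=62.0, scale=9.0, size=500)   # shifted local population

result = ks_2samp(training_ages, deployment_ages)
if result.pvalue < 0.01:
    # A significant difference flags a possible dataset shift for human review.
    print(f"Possible dataset shift detected (KS statistic {result.statistic:.2f})")
```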
In assessing the legal standard of care of a reasonably competent doctor, the twin tests in the UK decisions in Bolam and Bolitho require the doctor to show that what he or she did ‘conformed with a practice that was in existence at the time the medical service was provided’ and to ‘establish that that practice was widely, although not necessarily universally, accepted by peer professional opinion as competent professional practice’. 73
At common law, the Bolam test, as supplemented by the Bolitho requirement that the peer professional opinion withstand logical analysis, thus continues to govern the assessment of negligence in the doctor’s diagnosis and treatment of the patient.
Consider another scenario where the doctor chooses to rely on AI output that the patient has stage 4 cancer based on the AI’s track record of reliability and accuracy over time. However, the doctor is aware that the medical AI operates as a blackbox. Here the issues of transparency and explainability come to the fore. Transparency is typically related to the information we have about the AI model and its characteristics. By explainability, we are referring to the capacity to explain how the AI model works, and in particular, its function in producing the specific recommendation or output. 77 The problem of blackbox medicine can adversely impact the doctor–patient relationship. As highlighted by commentators, ‘If doctors do not understand why the algorithm made a diagnosis, then why should patients trust the recommended course of treatment?’. 78
Let us assume that the AI recommendation that the patient has stage 4 cancer is wrong. Furthermore, due to AI opacity, the bases for the erroneous AI recommendation are concealed from the doctor. Insofar as negligence liability is concerned, one central issue is whether the doctor can reasonably foresee the risks of harm to patients when utilising blackbox medical AI in medical diagnosis and treatment. If the AI decision-making process or model is opaque such that the AI errors or the risks of harm from the AI output are not foreseeable by any reasonable doctor, it would not be possible to establish that the doctor had acted in breach of the requisite standard of care.
In reality, however, the level of foreseeability of risks of harm cannot be assessed in such a binary fashion. Even where the internal workings of the AI model are inscrutable, doctors and hospitals would generally be aware that medical AI can and does err, and that erroneous outputs can harm patients; it is the precise mechanism of any particular error that eludes them.
Negligence law typically enquires whether it would be fair and reasonable to impose on the hospitals and medical doctors the duty to take reasonable care to prevent harm to the patients. Such an enquiry would be overly generic for determining negligence liability for opaque AI. Instead of the traditional approach to assessing whether reasonable steps have been taken to prevent harms to patients from AI errors, the enquiry should be more focused on whether the medical doctors and hospitals have taken, in cases of opaque AI, reasonable steps to comply with the appropriate validation and testing procedures prior to the use of medical AI as well as the monitoring process. 79 These validations, testing procedures, and monitoring processes may pre-empt certain types of AI errors in diagnosis and treatment arising (e.g. from unrepresentative training data, dataset shift, inappropriate algorithms used to perform the analysis, post-deployment problems and so on). In other words, the medical doctor and hospital should not be responsible for the assessment of the correctness of the AI recommendations due to the opacity problem unless the recommendations are clearly counter-intuitive to them. The medical doctors and hospitals would, however, remain legally responsible if they have failed to take reasonable procedural steps in ensuring AI reliability according to expected industry practices and norms, and such failure has resulted in foreseeable injuries to patients. What would be considered reasonable procedural steps would likely evolve with new techniques and methods to mitigate AI risks. 80
The discussions above on the AIHGle regarding verification and checks by AI implementers are relevant here. In addition, AI reliability may be evidenced by quality assurance measures and certification processes applied to medical AI, and AI implementers ought to take care to properly train their employees to ensure they are sufficiently competent to use medical AI in diagnosis and treatment.
It is also important to remember that the developments on AI opacity and explainability are not static. In future, where the level of explainable AI increases with respect to AI recommendations on medical diagnosis and treatment, the expectations of doctors’ standard of care in these aspects may rise correspondingly. That said, based on current AI technology, there is reason to temper our expectations due to limitations in respect of post hoc explainability of specific AI decisions in clinical practice. For example, the use of heat maps or saliency maps in medical imaging would not be able to reveal to the clinician the specific material information within the heat map that was applied by the AI model to arrive at the model’s diagnosis of the disease. 81
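For illustration, the sketch below (assuming the PyTorch library, with a stand-in model and a random image) computes a basic gradient saliency map. It shows where in the image the model's output is most sensitive, but, consistent with the limitation just noted, not which clinical feature at that location drove the diagnosis.

```python
import torch
import torch.nn as nn

# Stand-in for a real diagnostic CNN; untrained and for illustration only.
model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(4 * 64 * 64, 2),
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)  # stand-in scan
disease_score = model(image)[0, 1]   # logit for the 'disease present' class
disease_score.backward()             # gradients with respect to input pixels

saliency = image.grad.abs().squeeze()  # per-pixel sensitivity 'heat map'
print(saliency.shape)                  # torch.Size([64, 64])
```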
Medical advice
With respect to medical advice, the Australian and UK courts have adopted a patient-centric approach to determining the material information to be disclosed by medical doctors to their patients (in Rogers v Whitaker and Montgomery v Lanarkshire Health Board respectively), instead of deferring to the medical professional opinion based on the Bolam test.
The Singapore Court of Appeal decision in Hii Chii Kok v Ooi Peng Jin London Lucien similarly moved away from deference to peer professional opinion in respect of medical advice, requiring the doctor to disclose information that would be material to a reasonable patient in the particular patient’s position.
The Singapore position, however, took another turn with the new statutory provision on the standard of care in respect of medical advice given by medical practitioners to patients – s 37 of the Civil Law Act – which took effect on 1 July 2022. It is reliant on the standard of peer professional opinion: a medical practitioner who gives (or omits to give) advice will not fall below the standard of care if he or she acted in a manner accepted by a respectable body of medical opinion as reasonable professional practice, provided that the peer professional opinion is logical and has had regard to, among other things, the information that a reasonable person in the patient’s position would require to make an informed decision.
In contrast to the patient-centric approaches in the United Kingdom and Australia, the Singapore statutory position therefore accords primacy to peer professional opinion in assessing the adequacy of the medical advice given to the patient.
With regard to the use of medical AI, we may enquire preliminarily whether the hospital or doctor should even disclose to the patient that they are utilising medical AI. In this respect, the AIHGle state that ‘[e]nd-users of AI-MD (e.g. medical practitioners, patients) should be informed that they are interacting with an AI-MD’. 87 What does AI-MD ‘interacting’ with patients mean? Where the hospital collects data of the patient’s conditions and feeds them to a CDSS to diagnose the patient’s conditions, the patient may not be regarded as directly interacting with the AI though the hospital was indeed using AI to provide the medical diagnosis. A broader view of the word ‘interacting’ would, however, embrace both direct and indirect interactions between the patient and AI-MD. It is argued that the latter would be more consistent with ensuring transparency to patients.
Yet, the AIHGle, in stating that patients should be informed that they are interacting with AI-MD, might have placed an overly heavy onus on AI implementers. A patient does not normally expect the medical doctor to refer to the medical treatises, equipment, and case studies when giving advice to patients about their health conditions. They are the normal tools of the trade, so to speak. Similarly, there should not be an onus on the medical doctor to reveal, as a matter of course, the use of AI-MD in the provision of medical services to the patient.
The situation would be different, however, where the use of AI is experimental or novel thereby posing risks to the patient. Such information related to risks of harm for the patient and/or the limitations of the medical AI would likely be material whether seen from the perspective of the reasonable patient or the particular patient. Similarly, under the UK and Australian approaches to medical advice, the risks of harm from using medical AI would be material information to disclose to the patient. For the Singapore position under s 37, it is the peer professional opinion which would have to assess whether the information relating to the AI-MD was indeed material to a reasonable patient or the particular patient for the purpose of undergoing treatment or following medical advice.
Although the guideline that patients should be informed that they are interacting with AI-MD is not mandated as a legal standard for medical advice under the UK, Australian, or Singapore approaches, a practice that adheres to such a guideline would be conducive towards transparency and probably enhance patient autonomy and trust. According to the Ethical Principles for Artificial Intelligence in Medicine (2024) issued by the College of Physicians and Surgeons of British Columbia, 88 clinicians using AI must be ‘transparent about the extent to which they are relying on such tools to make clinical decisions’. If the patient is made aware of the use of medical AI, they may be able to make specific enquiries of the doctor as to the purpose or role of the medical AI and even the potential risks and alternatives available.
As stated in Montgomery, the doctor’s advisory role involves a dialogue with the patient, the aim of which is to ensure that the patient understands the seriousness of his or her condition and the anticipated benefits and risks of the proposed treatment and any reasonable alternatives, so that the patient is in a position to make an informed decision.
This does not mean that the medical doctor is legally obliged to provide to the patient comprehensive and detailed information of the AI model outputs. Although the medical doctor would be expected to possess some technical knowledge of the AI model, the doctor should, based on such knowledge combined with his or her medical expertise, communicate material information to the patient that is tailored to the patient’s circumstances and comprehensible by the latter. 92 The key is to enable the patient to make informed choices based on the material information.
We turn now to the vexed issue of AI opacity where the outputs from the medical AI used by the hospital or clinic are inscrutable and non-intuitive. To counter the problem of AI opacity, the AIHGle advocate that the decisions or recommendations from AI-MD should endeavour to be explainable and reproducible. 93 This is consistent with a number of international 94 and local 95 guidelines. As mentioned above, the first demands an explanation of the functions of the AI in generating the specific recommendation or output. The level of explanation depends on the ‘varying expectations’ of the recipients. What is explainable AI to medical practitioners implementing the AI-MD may be quite different from explainable AI to the patient, given the disparities in medical expertise and clinical experience. Insofar as medical advice is concerned, we are interested in explainability vis-à-vis the patient. The second point from the AIHGle about reproducibility relates to the need for procedural validation of AI reliability as a benchmark, as we have discussed above, for assessing whether the conduct of the hospitals and doctors has been reasonable in the use of AI-MD.
The AIHGle guideline on explainability is connected to the doctor’s relationship with the patient in respect of medical advice. The UK General Medical Council (2013) code of conduct, for example, specifically requires clinicians to be prepared to justify their own decisions. 96 At the heart of this is accountability to one’s professional practice. The clinician should not be abdicating his or her role to offer professional expertise and competence to their patients.
The College of Physicians and Surgeons of British Columbia acknowledges in the Ethical Principles for Artificial Intelligence in Medicine (2024) 97 that AI tools can produce results which are ‘difficult to interpret’. Yet clinicians must be ‘capable of interpreting the clinical appropriateness of a result reached and exercising clinical judgement regarding findings’ when medical AI is used.
Furthermore, as a matter of law, AI opacity may be contrary to the objective of informed decision-making by the patient. Such opaque AI systems do not disclose to the healthcare professional how they have derived the recommendations relating to the patient’s diagnosis or treatment. In deep learning systems, for example, even medical experts would not be aware of the voluminous layers of underlying data utilised by the medical AI to derive the recommendation, the weights attached to specific data or the intricate connections that may be drawn from among the disparate data. As the healthcare professional would not be cognisant of the reasons underlying the recommendations, he or she cannot comprehend the recommended decision nor share the material information in a way that the patients can understand 98 for the purpose of making an informed decision.
The AIHGle guideline, in advocating that the decisions or recommendations from an AI-MD ‘should endeavour’ to be explainable and reproducible, is stated with a soft touch. It implicitly recognises the challenges in ensuring that medical AI are explainable and reproducible. In any event, these regulatory guidelines are not legally binding. The Royal Australian and New Zealand College of Radiologists similarly advocate that in implementing AI, consideration be given to ‘how a result that can impact patient care be best understood and explained by a health care professional’ and that ‘AI should provide results that are interpretable/understandable in the current clinical context’. 99
Relatedly, it may also be argued that explainability is not an absolute or unqualified good that prevails over other values in all cases. There may be a trade-off between explainability and accuracy and at times, it may be reasonable to sacrifice some explainability in favour of greater accuracy.
Proof of damage from alleged breaches by AI implementers: determining causal responsibility of AI implementers
On the question of proof of damage, the issue of remoteness of damage should not normally pose a major obstacle to the claim in negligence. For hospitals and medical doctors, personal injuries or death suffered by patients arising from AI errors should, save for rare and unexpected health effects, be reasonably foreseeable in the event of the negligent use of AI-MD.
The more difficult issue concerns causation of damage arising from the use of medical AI. A negligent doctor, however blatant or obvious the negligent conduct, will avoid legal liability if the patient cannot demonstrate that the negligence resulted in the harm suffered by him or her. The possible causes of the injuries to the patient include the negligence of the AI manufacturer/developer and/or the AI implementer and the AI software itself.
Let us first consider the scenario where the problem lies with the AI software alone, and assume that the AI developer and implementer (or any other party) are not at fault. In such a case, there is no possibility of a claim in negligence against any legal person notwithstanding that the patient may have suffered injuries resulting from the AI software errors. There is no need for a causation analysis here.
Let us now modify the scenario to one where the AI implementer is negligent but assume that the AI developer (or any other party) is not at fault. The enquiry would be as follows: but for the implementer’s negligence in the decision to implement and use the medical AI, would the patient have suffered the harm? In the case of inadequate medical advice, but for the medical doctor’s negligence in failing to disclose certain material information about the use of medical AI, would the patient have decided to proceed with the treatment and suffered the resulting injuries? 100 If the response in each case is ‘no’, the breach of the AI implementer would be regarded as having established a counterfactual causal linkage to the patient’s injuries.
Consider an alternative scenario where both the AI developer and AI implementer are at fault. The AI commits an error in the diagnosis of a cancer that is attributable to the AI developer’s negligence in design or testing of the AI model, and the medical doctor, without proper verification and checks, accepts the AI recommendation. As a result, the medical doctor administers the wrong treatment and the patient suffers physical harm. In terms of causation analysis, if each of the breaches by the AI developer and medical doctor has on a balance of probabilities materially contributed to the patient’s injuries, they would be regarded as causally responsible for the patient’s injuries even if their relative contributions to the injuries cannot be ascertained with precision. 101 Here, we assume that the medical doctor’s conduct in administering the wrong treatment, though careless, was not wholly unreasonable or reckless so as to break the chain of causation arising from the AI developer’s initial negligence.
Conversely, where the AI produces the correct recommendation on the diagnosis of a cancer, but the medical doctor rejects the AI recommendation and takes an alternative course of action that led to the patient’s injury, the effective cause is the incorrect diagnosis by the doctor. But for the doctor’s decision to reject the AI output and to proceed instead with an alternative action, the patient would not have suffered injury.
Complications may arise due to the range of parties (developers, clinicians, researchers and manufacturers) who are involved in building a particular AI system. In instances involving multiple parties, one approach to unravelling the causation issue might be to examine the source of the problem that eventually led to the patient’s injury. Where the problem originated from defects in the source code, it is the AI developers who would prima facie bear causal responsibility; where it stemmed from flawed or unrepresentative data, responsibility may instead point to the data suppliers.
Instead of AI developers and data suppliers, the problem may lie with the actual implementation of the AI model in the provision of medical services. The acts and omissions of the hospital or doctor with respect to the utilisation of the AI system should therefore be scrutinised. The AI developer may argue that the AI implementer’s wholly unreasonable or reckless usage of medical AI constituted the novus actus interveniens which broke the chain of causation flowing from any earlier negligence on the developer’s part.
In the absence of direct evidence on breach and causation, the rule of res ipsa loquitur may assist the injured patient. Under the rule, an inference of negligence may be drawn where the cause of the accident is unknown, the defendant was in control of the situation or thing which caused the injury, and the accident would not in the ordinary course of things have occurred if proper care had been taken.
Although the rule is well established in conventional medical negligence cases, its application to injuries arising from the use of opaque medical AI is far from straightforward.
To further analyse the application of the rule, let us assume the patient suffered injury due to an AI error in prescribing treatment to the patient, and the AI was opaque. For a start, the cause of the accident would be unknown because the opaque AI system could not provide a causal explanation for the incorrect treatment. The AI error constitutes the ‘res’ that calls for an explanation rather than the defendant’s act or omission, though the defendant may point to evidence of other plausible causes to rebut any inference of negligence. 109
On one level, it may be argued that the criterion of control is satisfied on the basis that the hospital (AI implementer) was in control of the AI system which led to the event that resulted in the patient’s injury. However, this factor of control purportedly exercised by the hospital may be doubted in instances where the AI outputs are opaque to the hospital. Moreover, the hospital may counter that the real control would have been exercised by the AI developer(s) who had designed the AI system or that no single entity or person had exercised the relevant control. 110
Finally, it has to be shown that the accident resulting in the patient’s injury would not have occurred in the ‘ordinary course’ if proper care had been taken. If the AI outputs are not expected, traceable, or explainable, it would be difficult to satisfy this second criterion which is dependent upon events occurring in the ‘ordinary course’. 111
Nonetheless, despite the obstacles and challenges, we cannot dismiss entirely the possibility of res ipsa loquitur being successfully invoked against AI implementers, for instance, where the hospital exercised sufficient control over the deployment and use of the AI-MD and the injury is of a kind that does not ordinarily occur when proper care is taken.
Defence of volenti: has the patient willingly and knowingly consented to risks of negligence in the implementation and use of medical AI?
Even if the patient were able to prove the damage, the medical doctor or hospital may raise legal defences to defeat the claim. The defence of volenti relies on the defendant showing that the patient had voluntarily and knowingly assumed the risks of negligence arising from the use of medical AI. 113 Discharging this burden of proof would be challenging in most situations. For the defence to succeed, the patient must not only be aware of the nature of the risks but also their extent. Where the patient is unaware of the basic functions of the medical AI that has been utilised, there would likely not be sufficient knowledge of the nature of the risks. Furthermore, from the preceding discussions, risks from the use of medical AI can arise from various possible sources, for example, non-representative data producing biased recommendations, opacity of deep learning systems, and non-explainability regarding the AI outputs, sources which may not be known to the patient. While the type of harm may be reasonably foreseeable, the extent of the risks is normally not easily anticipated.
Although limited in scope, the consent to risks defence may be tenable in cases where the patient in question possesses expertise or knowledge of the risks of medical AI and the hospital or medical doctor has fully informed the patient of the extent of foreseeable risk, or where the perceived dangers from the use of medical AI are obvious to any reasonable patient. Furthermore, we should note that patients’ knowledge about medical AI technology will likely increase over time as its usage becomes more prevalent. If patients, armed with sufficient knowledge of the nature and extent of the risks, nonetheless consent to proceed with the use of medical AI, doctors and hospitals may in exceptional circumstances be absolved from liability for medical negligence.
Conclusion
Hospitals and doctors would generally owe legal duties to patients under their care. It would in normal circumstances be reasonably foreseeable that the breach of duty of the hospitals and medical doctors in the implementation or use of AI would lead to personal injuries suffered by the patients. At present, hospitals and medical doctors are unlikely to be able to defeat the claims of injured patients on the basis that the latter had voluntarily and knowingly consented to the risks of negligence from the use of medical AI.
There are, however, two areas where there will be considerable debate on the application of the law, if not the appropriate modifications to the law in order to accommodate AI medical technology. In respect of standard of care, this article has argued that regulatory guidelines, albeit not legally binding, provide the relevant contexts for medical AI development and implementation that can aid judges in contextualising the legal principles in negligence. The regulatory guidelines have raised important issues relating to the reasonableness of relying on medical AI and the validation procedures, the acceptance and rejection of AI recommendations, the extent of transparency regarding AI usage and disclosures of material information regarding AI risks, benefits, and complications to patients, and the explainability of AI outputs.
The AIHGle is meant to be a ‘living’ document that will be updated in line with developments in AI. These guidelines may over time be gradually adopted with modifications as technological standards accepted by the medical profession and AI community. Should they be regarded as sound and practical by industry practitioners in the future, the guidelines can potentially influence social expectations and norms within the industry and thereby influence and shape the content and/or application of legal principles in negligence.
In addition, whether there is liability in negligence would depend on establishing the causation of damage attributable to the alleged negligence of AI developers and/or implementers. The determination of causal responsibility is contextual, relying on ascertaining the source of the problem leading to the patient’s injuries, the parties involved and the medical doctor’s role in the decision-making process. In complex situations involving multiple parties, one cannot dismiss the possibility of shared responsibility among the manufacturers/developers, hospitals which implemented the AI systems and the individual doctor caring for the patient. 114 In this regard, with a view to minimising the risks of AI errors, greater scope and opportunities for collaboration between AI developers and implementers should be encouraged: enabling medical doctors to be more involved in the design of the AI, enabling AI developers to better understand its application to clinical practice, and enabling medical doctors to evaluate the impact of AI on clinical practice. 115
Footnotes
Acknowledgements
The author would like to thank Tan Boon Heng for his review of an earlier draft as well as Natalia Mai Do Ngoc and Pang Cheng Kit for their research assistance. All errors are my own.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The author would like to thank the Lee Foundation for funding support through the Lee Kong Chian Professorship (2022–2023).
1.
Frank Griffin, ‘Artificial Intelligence and Liability in Health Care’,
2.
A.S. Miner, N. Shah, K.D. Bullock, B.A. Arnow, J. Bailenson and J. Hancock ‘Key Considerations for Incorporating Conversational AI in Psychotherapy’,
3.
4.
Anton Ravindran, ‘AI and Healthcare’ in A. Ravindran,
5.
Shuroug A. Alowai, Sahar S. Alghamdi, Nada Alsuhebany, Tariq Alqahtani, Abdulrahman I. Alshaya, Sumaya N. Almohareb, Atheer Aldairem, Mohammed Alrashed, Khalid Bin Saleh, Hisham A. Badreldin, Majed S. Al Yami, Shmeylan Al Harbi and Abdulkareem M. Albekairy ‘Revolutionizing Healthcare: The Role of Artificial Intelligence in Clinical Practice’,
6.
Shania Kennedy, “AI May Be More Prone to Errors in Image-Based Diagnoses Than Clinicians”, 10 May 2022 at
. See also Leslie, D.,
7.
Taro Makino, Stanisław Jastrzębski, Witold Oleszkiewicz, Celin Chacko, Robin Ehrenpreis, Naziya Samreen, Chloe Chhor, Eric Kim, Jiyon Lee, Kristine Pysarenko, Beatriu Reig, Hildegard Toth, Divya Awal, Linda Du, Alice Kim, James Park, Daniel K. Sodickson, Laura Heacock, Linda Moy, Kyunghyun Cho and Krzysztof J. Geras, ‘Differences Between Human and Machine Perception in Medical Diagnosis’
8.
9.
The Australian Therapeutic Goods Act 1989; Therapeutic Goods (Medical Devices) Regulations 2002 (amended in 2019 which amendments took effect from 25 February 2021); and Regulatory changes for software based medical devices (tga.gov.au) (August 2021).
10.
11.
The European Medical Device Regulations (i.e., Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices); see Regulation 2020/561 amending Regulation (EU) 2017/745 on medical devices adopted by the Council and the Parliament on 23 April 2020; and Guidelines on the Qualification and Classification of Standalone Software used in Healthcare within the Regulatory Framework of Medical Devices (MEDDEV guidance 2.1/6) (July 2016). DocsRoom - European Commission (europa.eu).
12.
The Singapore Health Products Act 2007 (2020 Rev Ed); and the Singapore Health Products (Medical Devices) Regulations 2010.
13.
14.
See Charlotte A. Tschider, ‘Medical Device Artificial Intelligence: The New Tort Frontier’
15.
There is a revised R2.0 version, i.e., the Regulatory Guidelines for Software Medical Devices – A Life Cycle Approach (April 2022).
16.
This action plan came on the heels of the 2019 discussion paper, ‘Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device’.
18.
N.L. Crossnohere, M. Elsaid, J. Paskett, S. Bose-Brill and J.F.P. Bridges, ‘Guidelines for Artificial Intelligence in Medicine: Literature Review and Content Analysis of Frameworks’
19.
Baptiste Vasey, Myura Nagendran, Bruce Campbell, David A. Clifton, Gary S. Collins, Spiros Denaxas, Alastair K. Denniston, Livia Faes, Bart Geerts, Mudathir Ibrahim, Xiaoxuan Liu, Bilal A. Mateen, Piyush Mathur, Melissa D. McCradden, Lauren Morgan, Johan Ordish, Campbell Rogers, Suchi Saria, Daniel S. W. Ting, Peter Watkinson, Wim Weber, Peter Wheatstone, Peter McCulloch and the DECIDE-AI expert group, ‘Reporting Guideline for the Early-Stage Clinical Evaluation of Decision Support Systems Driven by Artificial Intelligence: DECIDE-AI’,
20.
See
21.
Ibid.
22.
See ‘Good Practices for Health Applications of Machine Learning: Considerations for Manufacturers and Regulators’ (2022).
23.
24.
For example, the Royal Australian and New Zealand College of Radiologists, ‘Ethical Principles for Artificial Intelligence in Medicine’ (2023).
25.
See generally Robert Veal, ‘Autonomous Technology in Shipping: An increased role for negligence product liability?’ in
26.
For example, AI and Digital Regulations Service for health and social care - AI regulation service - NHS (innovation.nhs.uk); Indian Council of Medical Research, ‘Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare’ (2023); Netherlands Ministry of Health, Welfare and Sport, ‘Guideline for high-quality diagnostic and prognostic applications of AI in healthcare’ (2021).
27.
The US Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA), ‘Good Machine Learning Practice for Medical Device Development: Guiding Principles’ (October 2021).
28.
See
29.
Ash Samanta and Jo Samanta, ‘Conclusion: Clinical Guidelines and the Law of Medical Negligence’ in Jo Samanta and Ash Samanta, eds.,
30.
31.
See
32.
33.
34.
35.
See Ricky R. Nelson, ‘Covert RCRA Enforcement: Seeking Compensatory Damages under the Federal Tort Claims Act for Environmental Contamination’
36.
For example, Indian Council of Medical Research, ‘Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare’ (2023) at p. 33 (‘There should be appropriate provisions for disciplinary (legal or financial) actions in case the providers fail to comply with these regulations. The relevant stakeholders should be made liable to pay compensation to the users in case of any harm or injury arising from the use of AI technologies.’); Abu Dhabi’s Department of Health, ‘Policy on Use of Artificial Intelligence (AI) in the Healthcare Sector of the Emirate of Abu Dhabi’ (2018).
37.
Megan Prictor, ‘Where does responsibility lie? Analysing legal and regulatory responses to flawed clinical decision support systems when patients suffer harm’
38.
For example,
39.
Gary Chan Kok Yew, ‘Mind the Gaps: Assessing and Enhancing the Trustworthiness of Mental Health Apps’,
40.
Andre D. Selbst, ‘Negligence and AI’s Human Users’ (2020) 100
41.
42.
43.
44.
See generally,
45.
See generally,
47.
AIHGle, para 4.
48.
Section 2.2.2 on ‘Responsibility’.
49.
AIHGle, para 5.2.3.
50.
51.
52.
53.
54.
J. Raymond Geis, Adrian P. Brady, Carol C. Wu, Jack Spencer, Erik Ranschaert, Jacob L. Jaremko, Steve G. Langer, Andrea Borondy Kitts, Judy Birch, William F. Shields, Robert van den Hoven van Genderen, Elmar Kotter, Judy Wawira Gichoya, Tessa S. Cook, Matthew B. Morgan, An Tang, Nabile M. Safdar and Marc Kohli, ‘Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement’,
55.
K. Goddard, A. Roudsari and J.C. Wyatt, ‘Automation Bias: A Systematic Review of Frequency, Effect Mediators, and Mitigators’
56.
Tinglong Dai and Shubhranshu Singh, ‘Conspicuous by Its Absence: Diagnostic Expert Testing Under Uncertainty’,
57.
AIHGle, para 5.5.2.
58.
AIHGle, para 5.5.3.
59.
AIHGle, para 5.6.1.
60.
AIHGle, para 5.2.4.
61.
AIHGle, para 5.3.1.
62.
AIHGle, para 5.3.4.
63.
AIHGle, para 6.1.3.
64.
AIHGle, para 6.1.4.
65.
AIHGle, Foreword.
66.
See ‘Practical Guidance on Agile Regulatory Governance to Harness Innovation’, which was drafted in the context of developing the (OECD) Recommendation of the Council for Agile Regulatory Governance to Harness Innovation, C/MIN(2021)23/FINAL (Oct. 6, 2021), available at . The Practical Guidance document states that ‘[n]on-binding standards, by being easier to adopt and offering more flexibility in implementation, can help address the regulatory challenges raised by innovation’.
67.
Gary KY Chan, ‘Medical AI, Standard of Care in Negligence and Tort Law’ in Gary Chan Kok Yew and Man Yip, eds.,
68.
69.
70.
71.
72.
73.
See s 5O(1) of the
74.
75.
76.
77.
Rita Matulionyte, Paul Nolan, Farah Magrabi, and Amin Beheshti, ‘Should AI-enabled medical devices be explainable?’
78.
D.S. Watson, J. Krutzinna, I.N. Bruce, C.E.M. Griffiths, I.B. McInnes, M.R. Barnes and L. Floridi, ‘Clinical Applications of Machine Learning Algorithms: Beyond the Black Box’,
79.
W. Nicholson Price, ‘Medical Malpractice and Black-Box Medicine’ in Glenn Cohen, Holly Lynch, Effy Vayena and Urs Gasser, eds.,
80.
See, e.g., the proposed medical algorithmic audit framework to mitigate risks in Xiaoxuan Liu, Ben Glocker, Melissa M. McCradden, Marzyeh Ghassemi, Alastair K. Denniston and Lauren Oakden-Rayner, ‘The Medical Algorithmic Audit’,
81.
Marzyeh Ghassemi, Luke Oakden-Rayner and Andrew L. Beam, ‘The false hope of current approaches to explainable artificial intelligence in health care’,
82.
(1992) 109 ALR 125.
83.
[2015] 2 WLR 768 at [73].
84.
85.
Ibid.
86.
S 37(2)(a).
87.
Section 2.2.3 on ‘Transparency’.
89.
90.
Section 5.4.1.
91.
See also Indian Council of Medical Research, ‘Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare’ (2023), p. 11.
92.
Miranda Mourby, Katharina Ó Cathaoir and Catherine Bjerre Collin, ‘Transparency of Machine-Learning in Healthcare: The GDPR & European Health Law’,
93.
Section 2.2.4 on ‘Explainability’.
94.
Mark Ryan and Bernd Carsten Stahl, ‘Artificial Intelligence Ethics Guidelines for Developers and Users: Clarifying Their Content and Normative Implications’,
95.
Model AI Governance Framework (Second Edition), Infocomm Media Development Authority and Personal Data Protection Commission, at pp. 15 and 44-45 (on explainability) and p. 50 (reproducibility).
97.
Note 88 above.
98.
Jens Christian Bjerring and Jacob Busch, ‘Artificial Intelligence and Patient-Centered Decision-Making’
99.
The Royal Australian and New Zealand College of Radiologists (2023), note 24 above.
100.
For example,
101.
102.
Weston Kowert, Note, ‘The Foreseeability of Human–Artificial Intelligence Interactions’,
103.
104.
Ibid at [66].
105.
E.g.,
106.
107.
108.
Ibid at [108].
109.
B. Bartlett, ‘Clinical Negligence in an Age of Machine Learning: Res Ipsa Loquitur to the Rescue?’,
110.
Scott J. Schweikart, ‘Who Will Be Liable for Medical Malpractice in the Future? How the Use of Artificial Intelligence in Medicine Will Shape Medical Tort Law’,
111.
Brandon W. Jackson, ‘Artificial Intelligence and the Fog of Innovation – A Deep-Dive on Governance and the Liability of Autonomous Systems’,
112.
113.
See generally, Amy L. Stein, ‘Assuming the Risks of Artificial Intelligence’ (2022) 102
114.
The Royal Australian and New Zealand College of Radiologists, note 24 above, principle #8.
115.
Torbjørn Gundersen and Kristine Bærøe, ‘The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models’
