Abstract

Artificial Intelligence (AI)-assisted diagnosis and treatment planning based on Machine Learning (ML) is rapidly being implemented in healthcare. Following its update in March 2025, the Food and Drug Administration (FDA) list of approved AI- and ML-enabled medical devices comprises more than 1,000 devices.1 Among these are software systems using ML algorithms to analyse mammograms to detect breast cancer, CT scans to detect stroke and heart sounds to detect cardiac abnormalities. The FDA has also approved autonomous AI diagnostic systems. A system using ML-driven image analysis to detect diabetic retinopathy without the need for specialist intervention was approved as early as 2018.
A key concern in the ethical debate on AI is accuracy. While the best-performing ML models – the deep learning models – achieve high accuracy (specificity and sensitivity) in predictions and classifications, they are still not completely accurate.2 Moreover, the inaccuracy may sometimes reflect system biases related to characteristics of certain patient groups. The inaccuracy may lead not only to over- and undertreatment, but also to discrimination against patient groups.3
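To make the accuracy terminology concrete, the following is a minimal sketch of how sensitivity and specificity are computed for a binary classifier. It is written in Python using scikit-learn; the labels and predictions are synthetic assumptions, not data from any of the devices cited above.

```python
# Minimal sketch: sensitivity and specificity for a hypothetical
# binary classifier. All data here are synthetic and illustrative.
import numpy as np
from sklearn.metrics import confusion_matrix

# Illustrative ground-truth labels (1 = disease present) and model
# predictions for ten hypothetical patients.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 1])

# For binary labels {0, 1}, confusion_matrix returns [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)  # diseased patients correctly flagged
specificity = tn / (tn + fp)  # healthy patients correctly cleared

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

Computing these metrics separately for patient subgroups is one standard way to surface the kind of bias discussed above: a model can look accurate overall while its sensitivity or specificity is markedly lower for a particular group.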
Another key concern in the ethical debate on AI is explainability. The classifications and predictions of deep learning models are opaque to the user.4 While several techniques have been suggested for making such models somewhat explicable, they are still opaque when compared with physician decision-making. In providing an explanation of a diagnosis, a physician can explicate why the diagnosis is the best possible explanation of the individual patient’s signs, symptoms and indicators.5
How can we protect patients if the ML systems most likely to be implemented in healthcare remain inaccurate and unexplainable? In this analysis, we (1) point out shortcomings of current risk-based AI regulatory efforts, and (2) argue that specific individual rights are essential for the protection of patients. We conclude that there is an urgent need for healthcare stakeholders to engage in AI regulatory efforts to define specific individual patient rights and the corresponding physician and AI provider duties.
Current regulatory efforts: the European Union AI Act as an exemplar of risk-based regulation
Recently, the European Union (EU) passed legislation for the regulation of AI, commonly referred to as the AI Act.6 The AI Act takes a risk-based approach to the regulation of AI: AI systems are classified according to whether they pose an unacceptable risk, a high risk, or a limited or minimal risk. Medical AI constitutes a high risk according to the AI Act classification criteria. The AI Act imposes obligations on both providers and deployers of high-risk AI systems.
Providers of high-risk AI systems must ensure that the system is trained, validated and tested on data of appropriate quality; that technical documentation is readily available before and after market entry; that automatic logging of AI system functioning is enabled; that AI system output is interpretable and that information about data quality and system performance is available (transparency); that effective and ongoing human oversight of AI system risks is possible; and that the AI system conforms to appropriate standards of accuracy, robustness and cybersecurity. Providers of high-risk AI systems must also implement a risk and quality management system that ensures ongoing compliance with the listed requirements. After market entry, providers must maintain logs, continuously monitor performance and safety, and report incidents to a national oversight authority.
Deployers of high-risk AI systems are also subject to several obligations. If the deployer provides services of general interest – for example, a hospital – it must conduct a fundamental rights impact assessment. That is, it must assess the potential negative impact of an AI system on fundamental rights, such as the rights to non-discrimination, privacy, data protection and so forth, and describe oversight measures as well as the measures taken to mitigate the identified risks. Furthermore, deployers must implement human oversight by trained personnel, ensure the relevance of input data and ensure the system’s compliance with the requirements listed above.
Problem 1: individual patient preferences
The risk-based approach taken in the AI Act aims to protect patients and the wider healthcare system by setting high standards for the development and use of AI systems. However, it operates on the basis of predefined risk categories and of risks tied to a specific AI system. In consequence, the risk-based approach is limited in its ability to account for differences in patients’ interests and concerns in relation to AI decision-making.
Patients may have different views concerning acceptable levels of error and potential discrimination. This translates into different criteria for acceptable levels of accuracy (specificity and sensitivity) and accuracy testing, bias and bias testing (discrimination), explainability and transparency, and personal data use – and not least for how these different features of AI systems should be balanced against each other. Some patients will in certain contexts prefer explainability over accuracy and vice versa.7 Patients may also have different views on the appropriateness of AI system implementations. They may, for instance, regard widespread use of information-retrieving chatbots in diagnostic consultations as undesirable if they are not familiar with chatbots or if the chatbots are known to influence their access to healthcare. And they may have queries in relation to the AI advice provided in their specific case.
Problem 2: dynamic effects
A risk-based approach, furthermore, has difficulty accommodating dynamic effects following the implementation of an AI system. The introduction of AI into healthcare may, for instance, cause automation bias – that is, overreliance by healthcare personnel on AI decision-making.8,9 Introducing autonomous AI could potentially replace physicians and other healthcare professionals, thereby reducing the points of contact between patients and healthcare professionals. Extensive use of AI-enabled devices could lead to deskilling of healthcare personnel – that is, the loss of diagnostic and treatment planning expertise – which may increase the vulnerability of future healthcare.10 All of these dynamic effects may in themselves erode patient trust.
The requirement in the EU AI Act for ongoing monitoring and assessment of AI system risks, including the risk of automation bias, may go some way towards mitigating such dynamic effects. However, many dynamic effects result from the simultaneous implementation of multiple systems and are therefore hard to regulate system by system. They are aggregate effects, and under-reporting and under-regulation of dynamic effects are to be expected. The problem of individual patient preferences resurfaces here with even greater force. For instance, evaluating the problem of reduced points of contact between patients and healthcare professionals is closely tied to patient preferences and values. Some patients may prefer to consult a healthcare professional, whereas others may choose to bypass that step entirely if doing so, for instance, offers faster access to care.
Problem 3: regulatory power and disempowerment
While the AI Act requires stakeholder participation in the development of AI systems, it still fundamentally disempowers individual patients in the protection of their interests. Whether the provider and the deployer of an AI system meet the legislative requirements is a matter that may be settled through self-assessment and assessment by a national authority. Managing risks, assessing compliance and conducting a fundamental rights impact assessment of an AI decision-support system implementation can all be done without involving the individuals concerned.
The disempowerment of individual patients arguably undermines the regulatory power of the AI Act. Individuals can be better protected if they are empowered to participate in the regulation of AI. Not only would the protection potentially be more adequate, as argued above, but it would also be more effective in protecting individual interests de facto.
At the core of the three problems sketched here lies a more general issue of paternalism in healthcare. Arguably, the current delivery of healthcare is still influenced by paternalism, as exemplified by the inconsistent use of shared decision-making and the persistence of ‘silent misdiagnoses’, where clinicians misread patients’ treatment priorities.11,12 It is important that the introduction of AI in healthcare does not replicate or further exacerbate this problem.
Patient rights in relation to AI diagnosis and treatment
The challenge facing healthcare is therefore to ensure not only safety and transparency of AI systems, but also adequate patient rights. But what rights? In the following, we briefly survey some exemplars of potential patient AI rights.
The right to an explanation of an AI-generated diagnosis or treatment plan
The EU General Data Protection Regulation (GDPR) establishes that individuals subjected to automated decision-making have a right to ‘meaningful information about the logic involved’.13 This is typically taken to constitute a right to an explanation of AI decision-making. It is unclear, however, what exactly this right entails. Common to a range of approaches to explainable AI is that they aim to answer the question of why an AI system provided a certain output by looking into the inner workings of the underlying model. The aim is not to provide a full mapping of all parameters of relevance for the output, but rather to establish which subset of parameters was especially important for it. While different techniques can show which input features and feature interactions are important for the output classification, the features they identify may not be easily interpretable by physicians and patients.14,15 The techniques may also be misleading, since they do not explain why a feature is seen as important by the algorithm.16
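As an illustration of the kind of technique at issue, the sketch below computes permutation feature importance – one widely used post-hoc attribution method – for a simple classifier. It is a minimal sketch in Python using scikit-learn; the dataset, feature names and model are synthetic assumptions, not any system discussed above.

```python
# Minimal sketch of one post-hoc explainability technique:
# permutation feature importance on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular patient data; the feature names are
# invented for illustration and carry no clinical meaning.
X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           random_state=0)
feature_names = ["feature_a", "feature_b", "feature_c", "feature_d"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a large drop means the model leaned heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)

for name, mean, std in zip(feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Note what the output does and does not deliver: it ranks the inputs the model leaned on, but it says nothing about why those inputs mattered – precisely the limitation identified above.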
The right to withdraw from AI diagnosis and treatment planning
The right to withdraw from AI decision-making is a right to insist that a medical decision – a diagnosis or a treatment plan – is not substantively supported by an AI system but is made entirely by physicians.17 The right to withdraw enables patients to act on their interests and concerns related to AI system features (accuracy, discrimination, explainability) as well as concerns related to the range of dynamic effects previously mentioned. In its draft of a new Convention on AI, Democracy and the Rule of Law, the Council of Europe recommends that individuals be offered human alternatives to automated systems.18 However, this right would bar healthcare services from realising the full cost-efficiency gains of introducing AI into diagnosis and treatment planning. At the governance level, the right may be difficult to implement and protect, as AI is already embedded in a significant number of diagnostic devices, and individual healthcare professionals may not realise this in all cases.
The right to contest AI diagnosis and treatment planning
The GDPR stipulates that where an individual is subjected to automated decision-making, the individual should have the right ‘to express his or her point of view and to contest the decision’.13 However, a patient merely having a right to state that he or she disagrees with a decision and contests it does little to help the patient truly challenge it. The patient should be able to make an effective contestation – that is, to point out precisely what the disagreement and contestation concern and to receive an appropriate answer.19 Doing so requires information about the AI system involved. More specifically, the patient should have access to information about (1) the AI system’s use of data, (2) the system’s potential biases, (3) the system’s performance and (4) the division of labour between the system and healthcare professionals. A right to such information differs from a right to an explanation in that it concerns not so much what happens inside the ‘black box’ as the input and output of the black box and the organisational implementation of an AI system.
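To indicate what such information access could look like in practice, the sketch below packages the four items as a machine-readable disclosure record. This is purely a hypothetical illustration in Python: no such schema is mandated by the GDPR or the AI Act, and every field name and value is an assumption.

```python
# Hypothetical sketch: a patient-facing disclosure record for one
# AI-supported decision. The schema and all values are illustrative
# assumptions, not mandated by the GDPR or the AI Act.
from dataclasses import dataclass


@dataclass
class AIDecisionDisclosure:
    data_used: list[str]           # (1) categories of patient data the system processed
    known_biases: list[str]        # (2) documented performance gaps across patient groups
    performance: dict[str, float]  # (3) e.g. validated sensitivity and specificity
    human_oversight: str           # (4) division of labour between system and clinicians


# Illustrative values only - not taken from any real device or study.
disclosure = AIDecisionDisclosure(
    data_used=["retinal fundus photographs", "age", "diabetes history"],
    known_biases=["reduced sensitivity reported for low-quality images"],
    performance={"sensitivity": 0.87, "specificity": 0.90},
    human_oversight="Output reviewed and countersigned by an ophthalmologist.",
)
print(disclosure)
```

A record of this kind could accompany each AI-supported decision, giving the patient a concrete target for contestation.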
The right to a second opinion on AI-driven diagnostics and treatment planning
The right to a second opinion is a right to an independent assessment of a diagnosis or treatment plan supported by AI.20 The Council of Europe has suggested a right to ‘human review’ of decisions substantively informed by AI.18 The right to a second opinion raises issues of cost and implementation: it is not clear who should pay for the second opinion in different types of healthcare systems, or whether healthcare systems are obliged to facilitate second opinions at all. Letting an independent, different AI system issue the second opinion could partly resolve these cost and implementation issues.
The right not to be AI diagnosed or screened based on publicly available data without consent
Social media data – photos and comments – can be used for diagnosis and screening. Studies show that ML models can accurately predict depression based on Instagram photos and tweets.21,22 In the future, publicly available data could be used for easy and cost-effective diagnosis, as well as for public health interventions aimed at early detection and prevention of disease. This development could be driven not only by data derived from social media, but also by genetic data becoming publicly accessible through the sale of databases by companies offering direct-to-consumer genetic testing.23 However, the use of publicly available data for such purposes would raise issues of privacy, as well as the potential harms of over-diagnosis, over-treatment and medicalisation. The EU GDPR imposes a number of restrictions on the processing of personal data but does not provide individuals with a right not to be profiled on the basis of such public data.24 Introducing a right not to be profiled without informed consent would allow patients to balance their interest in engaging actively online or using genetic testing services against the risks involved in profiling.
Health service stakeholders must engage in regulatory efforts
Why AI-specific rights – will more fundamental human rights not do? This is a false dichotomy. Fundamental human rights, such as the rights to non-discrimination and data protection, support and justify the AI-specific rights; but because they are fundamental, they are also very general, and their precise implications for AI in the healthcare setting are a matter of interpretation. There is a need for specific rights that directly address the ethical challenges associated with AI-driven diagnosis and treatment planning.
The AI patient rights exemplified here raise various issues of definition, and they are unlikely to be exhaustive. Moreover, these rights have implications for healthcare providers’ obligations in relation to AI-based diagnosis and treatment planning, potentially increasing liability risks, reshaping standards of care and necessitating adjustments to clinical protocols and documentation practices. However, if the current regulatory approach does not provide adequate protection of patients, it is of the utmost importance that work on AI patient rights is given priority by health service stakeholders. Medical associations and patient organisations must engage in regulatory work. It seems beyond doubt that AI will continue to transform the health services.
