Abstract

On August 1, 2024, the Artificial Intelligence Act (AI Act) entered into force: the first European legislation on the development and use of AI. The AI Act, which has a broad scope and was not specifically designed for the health care sector, is based on a 'risk-based approach': the more substantial the risks associated with an AI system, the stricter the conditions for introducing it to the market. The AI Act is directly applicable in all Member States of the EU.
Since February 2025, AI systems that pose unacceptable risks have been banned. We expect the ban's implications for health care to be minimal, owing to the exceptions made for medical treatment.1 In the words of the drafters of the AI Act, the ban '(. . .) should not affect lawful practices in the context of medical treatment (. . .) when those practices are carried out in accordance with the applicable law and medical standards, for example explicit consent of the individuals or their legal representatives'.2 Most AI systems used in the medical sector, for instance for the purpose of personalized treatment, will still be classified as 'high-risk' systems because they are already covered by the Medical Devices Regulation.3 Strict rules apply to these high-risk AI systems, including obtaining a CE marking, which indicates that the system is safe and in conformity with the AI Act and allows it to move freely within the EU internal market, like other medical devices. However, for clinical researchers embedded in partly publicly funded research, it would be problematic if every new (improved) version of an AI system, used in a similar setting and for the same group of patients, always required a new conformity assessment procedure. The European Commission states that an AI system must undergo a new conformity assessment when 'substantial changes happen in the AI system's lifecycle'.4 This leaves open the question: what qualifies as a substantial change?
If an AI system admitted to the European market proves to be a useful and reliable tool for vascular medicine, may it then also be implemented in the United States or in China? The answer depends on several factors. The United States lacks a centralized, horizontal legal AI framework comparable to that of the EU.5 However, several documents have been prepared that give substance to important AI principles and how they should be enforced.6 In addition, state laws regulate certain aspects of AI, focusing on consumer protection as a precondition for using AI tools, such as control over one's own data. It is expected to take some time before the United States aligns its policies and regulations in such a way that the disadvantages of AI are minimized without disproportionately hindering its development.7 Comparing the CE marking process to an FDA approval procedure is challenging, because FDA approval documents lack consistency and data transparency.8,9 In China, legislative policy emphasizes stimulating AI technology. At the same time, attention is paid to data protection, cybersecurity and the responsible development of AI systems, for which policies and guidelines have been drawn up.5 Given that the EU has adopted a more restrictive approach to regulating the use of AI than the United States and China, we expect that a CE-marked system will be admitted more easily in other jurisdictions than the other way around.
All requirements for high-risk AI systems will apply from the beginning of August 2027. An important exception is that the AI Act does not apply to AI systems that are developed and used for the sole purpose of scientific research and development. Furthermore, research, testing and development activities carried out before an AI system is placed on the market are excluded from the scope of the AI Act. However, the AI Act does apply when an AI system is tested in real-world conditions, such as in a clinical health care setting, for instance in the context of a clinical trial.10
The AI Act is not the only law applicable to the development and use of AI systems in health care. Important legal documents at the European level include the Medical Devices Regulation, the General Data Protection Regulation and the European Health Data Space. At the national level, laws regulating privacy and patient rights are worth mentioning. Furthermore, there are documents drawn up by the medical profession itself, such as guidelines, protocols and standards. Together, these documents provide direction on how an AI system should be developed and implemented in health care.11
Turning to the content of the AI Act, it requires, specifically for high-risk AI systems, that: a risk management system is established; data sets meet quality criteria; appropriate technical documentation is available; automatic recording of events is technically possible; systems operate in a sufficiently transparent way; systems can be effectively overseen by natural persons; and systems achieve an appropriate level of accuracy, robustness and cybersecurity. In addition, there are specific obligations for providers, deployers and other parties. While these general standards are clear, the main question is how the provisions should be interpreted and operationalized by providers and deployers in their development and clinical practices.
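To give a concrete sense of how one of these requirements might be operationalized, consider the obligation that events be automatically recorded. The sketch below, in Python, shows one possible form such record-keeping could take for a clinical prediction model; the model name, fields and values are hypothetical, and a real deployment would additionally need tamper-evident storage and alignment with data protection law.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Append-only audit log; a file handler suffices for this sketch.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_inference_event(model_id: str, model_version: str,
                        patient_input: dict, output: dict,
                        user_id: str) -> None:
    """Record one inference as a structured, timestamped audit event."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        # Ties the event to one conformity-assessed model version.
        "model_version": model_version,
        # Hash the input rather than storing raw patient data.
        "input_hash": hashlib.sha256(
            json.dumps(patient_input, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "user_id": user_id,
    }
    logging.info(json.dumps(event))

# Example: record a fictitious risk prediction made for a clinician.
log_inference_event(
    model_id="vascular-risk-model",      # hypothetical system
    model_version="1.2.0",
    patient_input={"age": 67, "systolic_bp": 152},
    output={"aneurysm_growth_risk": 0.31},
    user_id="physician-042",
)
```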
We address two implementation issues that are crucial to health care. First, with respect to transparency: what do we expect from the AI system and the company developing the software, and what do we expect from the physician? The AI Act requires a high-risk system's operation to be transparent enough for physicians to interpret the output and use it appropriately (Article 13(1) AI Act). Basic forms of AI have been deployed in health care for many years. However, there is growing interest in the development of more complex 'black box' models, capable of extracting valuable insights from highly intricate datasets. Yet this increased complexity presents a challenge: it becomes difficult, or even impossible, to comprehend how these models arrive at specific decisions or predictions. Adopting a stringent interpretation of the transparency requirement could hamper innovation. We identify at least two important questions. Is interpretability always essential for meeting this standard and, related to that, for delivering good health care?12 And what is the minimum level of insight and knowledge a physician must have in order to use an AI system responsibly?
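To illustrate what interpretability support could look like in practice, the sketch below applies permutation importance, one widely used post-hoc explanation technique, to a synthetic classifier. The feature names and data are invented for illustration; the AI Act does not prescribe this or any other particular method.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical clinical features paired with synthetic data.
feature_names = ["age", "systolic_bp", "ldl", "smoking", "diameter_mm"]
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# a large drop indicates the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name:>12}: {mean:.3f} +/- {std:.3f}")
```

Such a per-feature summary is one candidate answer to the question of what a physician needs in order to interpret a model's output, though whether it suffices for 'sufficient transparency' remains open.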
Second, with respect to bias: the AI Act requires data to be as error-free as possible and subjected to appropriate data governance and management practices to detect, prevent and mitigate possible biases.13 Algorithmic bias can have several causes but often reflects underlying social biases and inequalities.14 A pressing question is: how far should we go in requiring an AI system to be free of bias? A completely bias-free system seems unfeasible, so when do we consider the implementation of a system responsible? And if we implement an AI system in health care, what is expected of physicians in terms of preventing the introduction of new biases and reporting them if they do occur?
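As an illustration of what a bias check might involve, the sketch below audits the sensitivity of a hypothetical triage model across patient subgroups on simulated data. The group variable, the injected error pattern and the implied threshold for concern are all assumptions made for the example.

```python
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 1000
sex = rng.choice(["female", "male"], size=n)
y_true = rng.integers(0, 2, size=n)
# Simulated predictions from a hypothetical triage model that
# misses 20% of positive cases in female patients.
y_pred = np.where((sex == "female") & (y_true == 1)
                  & (rng.random(n) < 0.2), 0, y_true)

# Compare sensitivity (recall on positive cases) per subgroup.
for group in ["female", "male"]:
    mask = sex == group
    sens = recall_score(y_true[mask], y_pred[mask])
    print(f"sensitivity ({group}): {sens:.2f}")

# A material gap between groups would warrant investigation and
# mitigation before (continued) clinical deployment.
```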
These are merely two examples of the challenges that arise when the AI Act is applied in a health care setting. Future research is needed to determine the expectations regarding the capabilities of AI systems and the ethicolegal responsibilities of researchers and physicians. Furthermore, collaboration between researchers and physicians is vital for these crucial innovations in our health care systems. We feel that the establishment of sector-specific guidance is essential to determine what the responsible development and use of AI concretely entails. For this, it is necessary to take into account the different types of AI, including subtypes such as large language models (LLMs) and convolutional neural networks (CNNs), each associated with different (ethical) challenges. Obviously, these challenges are further shaped by the specific purpose for which the AI system is deployed.
Footnotes
Funding
The authors disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This article was prepared with the funding of 2 projects: ARTILLERY (funded by the European Union: Horizon Europe project number 101080983) and VASCUL-AID (funded by the European Union: Horizon Europe project number 101080947). We acknowledge all consortium partners.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
