Abstract
As the artificial intelligence (AI) ethics field is currently working towards its operationalisation, ethics review as carried out by research ethics committees (RECs) constitutes a powerful, but so far underdeveloped, framework for making AI ethics effective in practice at the research level. This article contributes to the elaboration of research ethics frameworks for research projects developing and/or using AI. It highlights that these frameworks are still in their infancy and in need of structure and criteria to ensure AI research projects advance in a way that respects norms and principles. This article proposes to draw from the European Union’s AI Act currently in development to shape these frameworks. Although, in the current form of the draft (as of August 2023), the obligations of the AI Act do not apply to scientific research, it is most likely that they will still have a strong impact on AI research considering the need to anticipate market placement or to test new tools in real world conditions. This article investigates what the risk-based approach in the AI Act implies for research ethics and highlights some AI Act obligations of particular value to implement in ethics review processes.
Introduction
As the artificial intelligence (AI) ethics field is currently working towards its operationalisation, research ethics constitutes a powerful, but so far underdeveloped, framework to make AI ethics effective in practice at the level of research. This article contributes to the elaboration of research ethics frameworks for projects developing and/or using AI to ensure AI research advances in a way that respects norms and principles and that potentially harmful impacts are mitigated as early as possible.
For such a framework, this article proposes to draw inspiration from the approach, criteria and obligations laid down in the draft AI Act (in its version at the time of writing, i.e. August 2023) of the European Union (EU). The current draft comprises the European Commission’s AI Act proposal of 21 April 2021 (European Commission, 2021a) together with the amendments adopted by the European Parliament (EP) on 14 June 2023 (European Parliament, 2023). Since the adoption of the EP’s amendments, the so-called trilogues between the EP, the Council of the European Union and the Commission have been ongoing, with a final text expected by the beginning of 2024. Although, in the current form of the draft, the obligations of the AI Act do not apply to scientific research, it is most likely that these will nonetheless have a strong impact on AI research considering the need to anticipate market placement or to test new tools in real world conditions. Although this article focuses more specifically on research ethics in the European context, such as the work carried out by research ethics committees (RECs), it draws from research done beyond this area, and its recommendations may be relevant beyond Europe as well, for example to the US and its Institutional Review Boards (IRBs), which review research projects from an ethical standpoint. Furthermore, given the international attention that the drafting of the EU’s AI Act already attracts, the Act is expected to influence and impact AI innovation beyond the EU. In particular, besides EU-based providers, the AI Act will also apply to third-country providers that place AI systems on the EU market, which will likely create a strong inclination towards compliance with the Act beyond the EU.
The article starts by presenting the current AI ethics and governance landscape and the need to operationalise high-level abstract principles into concrete requirements for implementation in practice. Based on this review, it points to a growing demand for research ethics frameworks to carry out this operationalisation. The second section is dedicated to a brief overview of the AI Act, the operationalisation of AI principles, and the research exemption. Finally, the article draws a series of recommendations for research ethics frameworks from the Act’s approach, criteria, and obligations. Ultimately, it seeks to make AI ethics at the research level both more effective and more thoughtful through the establishment of a strong research ethics framework.
The emerging need to operationalise AI ethics in research ethics frameworks
AI ethics guidelines and their operationalisation
AI ethics has seen intense developments since 2015, with numerous governmental and international bodies, institutions and companies producing guidelines, frameworks and sets of principles for AI governance. Notably, the High-Level Expert Group (HLEG) on AI published the ‘Ethics Guidelines for Trustworthy AI’ in 2019 and UNESCO adopted its ‘Recommendation on the Ethics of Artificial Intelligence’ in 2021 (High-Level Expert Group on AI, 2019; UNESCO, 2021). Jobin et al. (2019) reviewed documents containing ethics principles or guidelines for AI developed globally and identified 84 of them. The authors highlighted a growing consensus around key principles, including transparency; justice, fairness and equality; non-maleficence; responsibility and accountability; and privacy.
While it is easy to find agreement on the importance of these principles, the question remains as to how they should be interpreted and applied in concrete situations.
Recognising growing consensus on the ‘what’ of AI ethics, that is, its key driving principles, Morley et al. (2020) have asked about the ‘how’, that is, how to implement these principles in practice. The need to move from the ‘what’ to the ‘how’ thus frames the current challenge of operationalising AI ethics.
To respond to these challenges of operationalisation and effectiveness of ethics principles, experts have called for processes to ensure compliance with the emerging norms and to bring about a ‘robust ethics-regulation interface’ (Delacroix and Wagner, 2021). A key element among these initiatives is the AI Act, which will place obligations on AI developers and deployers. However, complementary institutional responses are also needed to bring about a proper AI governance framework. Here, we focus on the critical role research ethics frameworks can play in operationalising AI ethics at the research level.
The emerging role of research ethics for AI
The foundations of modern research ethics were laid with the Nuremberg Code, which provided research ethics principles to govern medical experiments on human beings. Since then, research ethics frameworks have expanded beyond biomedical research to cover other disciplines involving human participants, such as anthropology or sociology. More recently, they have also extended beyond the protection of human participants to cover potential impacts on communities, society, and the environment at large, including after the end of the project (Shilton et al., 2021). This evolution is particularly timely for AI research, given the potential harms AI systems can bring about both during research and once systems are deployed (Seedhal et al., 2023). Over the last 10 years, experts have highlighted the need for research ethics review to cover AI (Ada Lovelace Institute, 2022; Calvo and Peters, 2018; Ferretti et al., 2020, 2021; Liu et al., 2020; Metcalf and Crawford, 2016; Santy et al., 2021). Some organisations have started to implement research ethics processes for AI projects. For example, in 2020–2021, the European Commission developed a research ethics framework for EU-funded projects using and/or developing AI. This framework is now in place and has become a requirement for all projects funded under the Horizon Europe Framework Programme (European Commission, 2021b). Other notable examples of institutional implementation of AI ethics processes include the NeurIPS conference’s ethics process and Stanford University’s ‘Ethics and Society review board’ (Bengio and Raji, 2021; Bernstein et al., 2021).
In what follows, we highlight three key areas for adaptation of existing research ethics frameworks to best assess AI projects.
Three areas of adaptation of existing research ethics frameworks
The first area concerns the need for risk management: AI projects call for a continuous and iterative identification, assessment and mitigation of risks throughout the research, rather than a one-off assessment at the application stage.
The second area of adaptation concerns data governance and management, that is, how the data used to train and feed AI systems are collected, documented and assessed for quality and bias.
The third area of adaptation concerns the need to ensure transparency and reporting, including appropriate technical documentation of the AI system and information to those who use or are affected by it.
This article is a contribution to ongoing efforts to develop ethics review frameworks for AI research by drawing inspiration from requirements spelled out in the current draft of the AI Act. It does so for three main reasons. Firstly, the AI Act draft offers particularly useful processes to address the three areas of needed adaptation highlighted above. Secondly, considering the need to anticipate placement on the market, it is reasonable to expect that researchers and developers will want to ensure compliance with requirements of the AI Act already at the research stage. Thirdly, drawing from this new piece of legislation to develop research ethics for AI is of particular value for building a ‘robust ethics-regulation interface’ for AI governance, thereby avoiding ethics-washing and promoting an effective and consistent approach (Delacroix and Wagner, 2021). Although we recognise the distinct value and role of ethics as opposed to the law, we also recognise that these normative frameworks are related and that both contribute to shaping society in desirable ways and to mitigating potential harms. Since the connection between law and ethics is particularly strong in the areas of bioethics and research ethics (Mittelstadt, 2019), drawing from emerging AI law to fill existing gaps in research ethics for AI appears to be good practice.
The draft AI Act in brief
The risk-based approach
The draft AI Act adopts a risk-based approach, meaning that different regulatory requirements apply to different AI systems, depending on the level of risk foreseen for a given AI system: unacceptable risk, high risk, limited risk, and low or no risk. AI systems that pose an unacceptable risk to fundamental rights, democracy, the rule of law or the environment, such as those using subliminal or purposefully manipulative techniques, exploiting people’s vulnerabilities or performing social scoring, are prohibited (Recital 27; Art. 5). According to Article 6(2), AI systems listed in Annex III of the AI Act represent critical use cases and are considered high-risk if they pose a significant risk of harm to the health, safety, or fundamental rights of natural persons or, in some cases, to the environment. The AI Act focuses on these high-risk AI systems. Moreover, the AI Act applies to certain AI systems that pose a limited risk and imposes certain transparency obligations on these (Art. 52). Although there is a separate provision for these transparency obligations applicable to certain AI systems (i.e. Art. 52), transparency obligations apply equally to high-risk AI systems (Art. 13). Finally, all remaining AI systems that pose a very low or no risk are not subject to specific AI Act obligations.
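To illustrate how this tiered logic could structure the intake stage of an ethics review, the following minimal sketch maps a reviewer’s screening answers to the Act’s risk tiers. It is written in Python under our own assumptions: all names are hypothetical, and the simplified conditions are illustrative shortcuts rather than the Act’s full legal tests.

```python
from dataclasses import dataclass
from enum import Enum, auto


class RiskTier(Enum):
    """The four tiers of the draft AI Act's risk-based approach."""
    UNACCEPTABLE = auto()  # prohibited practices (Art. 5)
    HIGH = auto()          # Annex III use cases posing significant risk (Art. 6(2))
    LIMITED = auto()       # transparency obligations only (Art. 52)
    MINIMAL = auto()       # no specific AI Act obligations


@dataclass
class ScreeningAnswers:
    """Answers a reviewer records when screening a project (hypothetical form)."""
    uses_prohibited_practice: bool   # e.g. social scoring, manipulative techniques
    annex_iii_use_case: bool         # use case listed in Annex III
    significant_risk_of_harm: bool   # to health, safety, fundamental rights, environment
    interacts_with_persons: bool     # e.g. chatbots, deepfakes, emotion recognition


def triage(answers: ScreeningAnswers) -> RiskTier:
    """Map screening answers to a risk tier, checked from strictest to weakest."""
    if answers.uses_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if answers.annex_iii_use_case and answers.significant_risk_of_harm:
        return RiskTier.HIGH
    if answers.interacts_with_persons:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A REC could attach a different depth of review to each tier, for instance excluding projects triaged as unacceptable and requiring the full set of checks inspired by Articles 9 to 15 for those triaged as high-risk.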
General principles applicable to all AI systems and their operationalisation
As part of its amendments, the EP added a new provision to the AI Act (Art. 4a), laying down general principles applicable to all AI systems. These general principles are the same as the HLEG Requirements for Trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; and social and environmental well-being (High-Level Expert Group on AI, 2019: 14). The HLEG requirement of ‘accountability’ is missing from the AI Act, likely because the separately proposed AI Liability Directive (European Commission, 2022) will address and further operationalise this principle. The inclusion of the HLEG Requirements for Trustworthy AI in a legally binding instrument is an important step towards their operationalisation. According to the EP’s amendments, Article 4a(2) states that the general principles are operationalised by means of the specific high-risk obligations laid down in Chapter 3 of Title III of the AI Act (i.e. Articles 9–15 and subsequent ones) for high-risk AI systems, by means of the obligations set out in Articles 28 to 28b for foundation models, and through Articles 28, 52 or the application of harmonised standards (Art. 40), technical specifications (Art. 41) and codes of conduct (Art. 69) for all other AI systems falling under the AI Act. Hence, the general principles are to be enforced through the specific AI Act obligations laid down in said provisions, in addition to which the Commission and the AI Office will incorporate the general principles in standardisation requests and technical guidance (Art. 4a(2)-(3)). This article supports this approach to operationalising the general principles and argues that several of the AI Act’s high-risk obligations serve well as concrete steps towards giving meaning and definition to AI ethics principles and making them usable in ethics review processes for research.
The research exemption
In its current draft form, as per Article 2(5d), the AI Act does not apply to scientific research, meaning research, testing, and development activities regarding AI systems prior to their being placed on the market or put into service, provided that these activities are conducted in a manner that respects fundamental rights and other applicable EU law. Consequently, while the specific obligations imposed by the AI Act will not apply, respect for fundamental rights must be guaranteed during any research activity concerning AI. Additionally, the testing of AI systems in real world conditions is not covered by the scientific research exemption of the AI Act. Nonetheless, it is most likely that the AI Act’s obligations will still have a strong impact on AI research considering the need to anticipate placement on the market or to test in real world conditions.
Drawing recommendations from the AI Act for research ethics processes
This article recommends that RECs in universities and academic environments, as well as other ethics experts reviewing AI projects, such as in business and industry settings, assess these projects according to the AI Act’s risk categories. The recommendation includes the application of extra care and a stricter ethics review process when it comes to research on AI practices that are prohibited under the AI Act, or even the exclusion of prohibited AI practices from research. In addition, we recommend adapting current ethics review processes by drawing inspiration from the obligations for high-risk AI systems as spelled out in Articles 9 to 15. It should be noted that the authors are aware of the critiques of the AI Act, such as the difficulties that come with the risk-based approach and the corresponding debate on potential self-assessment or not, or the exemption of research from the scope of the Act (Veale and Zuiderveen Borgesius, 2021). However, for the purposes of this article, we do not question the draft AI Act as it currently stands, but rather focus on the practical advancement of AI research ethics by drawing inspiration from some of the provisions of the Act. To strengthen AI research ethics and make it more useful, we make three sets of recommendations that follow directly from the three areas of adaptation identified in the first section of this article: (1) risk management, (2) data governance and management and (3) transparency and reporting.
Risk management system
Firstly, the risk management system proposed by Article 9 of the AI Act provides useful guidance for establishing an ongoing process of identification and mitigation of risks. This helps address the first area of needed adaptation highlighted above. As in Article 9, RECs and other ethics experts involved in AI projects should establish a risk management system for continuously and iteratively assessing the respective AI system throughout the entire research project. This includes the identification and assessment of the known and reasonably foreseeable risks that the AI system can pose to the health or safety of persons, their fundamental rights, democracy and the rule of law, or the environment when the AI system is used in accordance with its intended purpose (Art. 9(2)(a)). It also entails the adoption of appropriate and targeted mitigation measures (Art. 9(2)(d)) in a collaborative by-design approach (Art. 9(4)(a)). Given that special consideration should be paid to how the AI system may adversely impact vulnerable groups of people and children (Art. 9(8)), the identification and adoption of appropriate mitigation measures should flow from a thoroughly conducted fundamental rights impact assessment, as laid down in Article 29a of the AI Act. Additionally, to prevent or minimise the risks to health, safety, fundamental rights or the environment, the appropriate level of human oversight of the AI system should already be established during the research phase (drawing inspiration from Art. 14).
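As an illustration of what such a continuous, iterative process could look like in a review workflow, the sketch below models a simple risk register that could be revisited at every review cycle. The structure and field names are our own assumptions, not prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskEntry:
    """One known or reasonably foreseeable risk (cf. Art. 9(2)(a))."""
    description: str               # e.g. "model underperforms on minority dialects"
    affected_interests: list[str]  # health, safety, fundamental rights, environment, ...
    severity: Severity
    mitigation: str                # targeted measure adopted (cf. Art. 9(2)(d))
    resolved: bool = False


@dataclass
class RiskRegister:
    """A living register, updated at each iteration of the research project."""
    entries: list[RiskEntry] = field(default_factory=list)

    def open_risks(self) -> list[RiskEntry]:
        """Risks still requiring mitigation at the next review cycle."""
        return [entry for entry in self.entries if not entry.resolved]
```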
Data governance and management
Secondly, to ensure proper data protection and management, we recommend drawing inspiration from Article 10 of the AI Act. Following the obligations contained in this article, we encourage RECs and other ethics experts to focus their efforts on preventing or reducing data infringements, erroneous decision-making, and bias through an appropriate data governance approach. This includes documentation of personal data usage as well as assessments of, and techniques to ensure, the availability and quality of training and input data. This can help protect privacy, ensure the quality of outputs and prevent erroneous decision-making, including algorithmic bias, which constitute some of the major ethical concerns regarding AI systems.
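The sketch below suggests, under our own assumptions, what a per-dataset documentation record and a simple screening routine could look like in such a data governance approach; the field names and flags are hypothetical and would need tailoring to the project and applicable law.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DatasetRecord:
    """Documentation a reviewer could require for each dataset used (cf. Art. 10)."""
    name: str
    provenance: str                 # where and how the data were obtained
    contains_personal_data: bool    # triggers data protection scrutiny (e.g. GDPR)
    legal_basis: Optional[str]      # e.g. "consent", if personal data are processed
    representativeness_check: str   # how coverage of relevant groups was assessed
    known_bias_issues: list[str]    # gaps or skews identified so far


def flag_for_review(record: DatasetRecord) -> list[str]:
    """Return simple flags an ethics reviewer might raise from the record."""
    flags = []
    if record.contains_personal_data and not record.legal_basis:
        flags.append("personal data without a documented legal basis")
    if record.known_bias_issues:
        flags.append("known bias issues require documented mitigation measures")
    return flags
```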
Transparency and reporting
Finally, inspired by Article 11 of the AI Act, researchers involved in an AI research project should draw up the technical documentation of the respective AI system or application under research. RECs and other ethics experts involved in the project should monitor the appropriateness of such technical documentation. Particularly for the operationalisation of the principle of transparency, such documentation should include the AI’s technical features that ensure the system’s functioning and output are transparent, including logging capabilities for record-keeping (Art. 12 and Art. 13(3)(ea)). Furthermore, technical and organisational measures should be established to ensure that the foreseen users are informed that they are interacting with an AI system (Art. 52). Users should also be enabled to understand and use the AI system appropriately by generally knowing how the AI system works and what data it processes, that is, its characteristics, capabilities and limitations of performance (Art. 13(3)(b)), allowing them to explain the decisions taken by the AI system to affected persons (Art. 13(1)). Additionally, it is recommended that the technical documentation include measures for ensuring the AI system’s appropriate level of accuracy, robustness, safety, and cybersecurity (Art. 15(1)(b)).
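As one possible way to realise the logging capabilities mentioned above, the following sketch appends a structured record for each decision the system takes; the function name, record fields and logger configuration are our own illustrative assumptions.

```python
import json
import logging
from datetime import datetime, timezone

# Calling code is expected to configure handlers, e.g. via logging.basicConfig(...).
logger = logging.getLogger("ai_system.audit")


def log_decision(input_summary: str, output: str, model_version: str) -> None:
    """Record one system decision to support record-keeping (cf. Art. 12)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties the decision to a documented system state
        "input_summary": input_summary,  # enough context to reconstruct the case later
        "output": output,                # what the system decided or produced
    }
    logger.info(json.dumps(record))
```

Such records give researchers, reviewers and, ultimately, affected persons a trail from each output back to a documented state of the system, which supports the explanations envisaged by Article 13(1).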
Conclusion and next steps
This article presents a step in the direction of operationalising AI ethics into research ethics frameworks to govern AI research. It recommends drawing inspiration from the AI Act currently in development to elaborate the different requirements needed for such frameworks. Despite the research exemption under the current draft of the AI Act, and considering that AI research often anticipates market placement or the testing of new tools in real world conditions, this article particularly invites research ethics reviewers to establish (1) a risk management system, (2) a process to ensure data governance and management and (3) proper transparency and reporting mechanisms.
This article is a contribution to the emerging area of research ethics frameworks for AI research. But many more efforts are needed in this area, notably to ensure appropriate expertise of research ethics reviewers to assess AI risks and to ensure sufficient resources (Ada Lovelace Institute, 2022; Seedhal et al., 2023). Another major challenge is the fact that much AI research occurs in big technology companies that, unlike universities, often do not have an ethics review process in place. Hence, we recommend the establishment of independent and/or third-party ethics review in industry settings, following a format similar to that used in universities. Considering that AI products will have to meet the obligations of the AI Act, companies will be familiar with these requirements, and implementing them already at the research level should hence facilitate the process (Ufert and Goldberg, 2023).
Acknowledgements
We are grateful to Sara Domingo Andres and Dr Zachary Goldberg for their insightful review of this article. We are also thankful to the three anonymous reviewers for very useful feedback on an earlier version of this article and to the editor of Research Ethics.
Declaration of conflicting interests
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: The authors of this article are employed by Trilateral Research, which offers services to clients to ensure their AI systems achieve compliance and implement best practices in Responsible AI and data ethics.
Funding
All articles in Research Ethics are published as open access. There are no submission charges and no Article Processing Charges as these are fully funded by institutions through Knowledge Unlatched, resulting in no direct charge to authors.
This article was developed as part of three European Union-funded research projects: the DARLENE project (Deep AR Law Enforcement Ecosystem) which received funding under the EU’s H2020 research and innovation programme (Grant Agreement number: 883297), the TechEthos project (Ethics for Technologies with High Socio-Economic Impact) which received funding under the EU’s H2020 research and innovation programme (Grant Agreement number: 101006249), and the iRECS project (improving Research Ethics Expertise and Competences to Ensure Reliability and Trust in Science) which received funding under the EU’s Horizon Europe research and innovation programme (Grant Agreement number: 101058587). Views and opinions expressed are those of the author(s) only and do not necessarily reflect those of the European Union (EU) or the European Research Executive Agency (REA). Neither the European Union nor the granting authority can be held responsible for them.
Ethics approval
The research upon which this article is based did not require ethics approval.
