Abstract
With regard to the handling of dual use research, the dominant approach in Germany to mitigating dual use risks emphasizes the freedom of research and the strengthening of academic self-regulation. This article presents this approach as one example of a framework for handling security-relevant research, underlines the need for awareness-raising about the risks of security-relevant research, and, more generally, highlights some of the dilemmas researchers and legislators face when dealing with security-relevant research. The article furthermore presents the key questions developed by the German Joint Committee on the Handling of Security-Relevant Research to guide researchers and institutions when they address possible research of concern. It applies these key questions in a case study of a well-publicized experiment that probed the dual use potential of artificial intelligence and drug discovery technologies by using them to identify highly toxic chemical substances. Moreover, it discusses the utility of the framework applied in Germany and concludes that this approach is practicable. Given the strong emphasis on researchers’ own responsibility, however, awareness of dual use risks and risk mitigation strategies should be further enhanced, and an academic culture of responsible handling of security-relevant research should be promoted.
Introduction
More and more scientific disciplines are recognizing that legitimate and beneficial research can produce findings or products that could be misused for illegitimate purposes. While the dual use concept initially focused on technologies usable for both civilian and military purposes, the problem is nowadays frequently framed more broadly, with a view to a wider range of possible misuse, as “dual use research of concern” (DURC) or “security-relevant research.” In their recommendations regarding the handling of security-relevant research, the German Research Foundation (DFG) and the German National Academy of Sciences Leopoldina define security-relevant research and research of concern as follows:
“Security-relevant research includes scientific work that has the potential to produce knowledge, products or technologies that can be misused by third parties to harm human dignity, life, health, freedom, property, the environment or peaceful coexistence. This is designated as “of concern” if the misuse can be immediate and the potential damage is significant.” 1
Raising awareness of the possibility of dual use and the risks of misuse is one important strategy for dealing with security-relevant research, especially since such awareness does not yet exist to a sufficient extent in all relevant scientific communities. But how can research be managed in such a way that the freedom of research is preserved and its benefits can be exploited, on the one hand, while the security risks are limited as much as possible, on the other? In Germany, the DFG and the Leopoldina promote a bottom-up approach that focuses on the responsibility of researchers and research institutions for recognizing and limiting the misuse potential of their work and simultaneously emphasizes the need to maintain the freedom of research as much as possible. To achieve this, the two organizations recommend, among other things, the establishment of research ethics committees, in German “Kommissionen für Ethik sicherheitsrelevanter Forschung” (KEF), in all research institutions. Their function is to evaluate potentially security-relevant research upon request and to advise researchers on how to proceed with this research (DFG and Leopoldina, 2014).
In the following section, this article briefly presents this approach in the context of the German discourse on handling security-relevant research. It then describes an experiment that was conducted outside Germany and has been well documented and widely publicized: an experiment on the “dual-use of artificial-intelligence-powered drug discovery” (Urbina et al., 2022, 2023a, 2023b). In a hypothetical discussion, we treat this experiment as a potential case for consideration by a German KEF (setting aside the fact that it was carried out not in a research institution but by a private company, which would technically not be expected to establish a KEF). To this end, we apply the key questions formulated by the DFG and Leopoldina’s Joint Committee, which are intended to support the ethical assessments by KEFs. All statements and descriptions used in this article are based on publicly available information about the experiment in question and our own interpretation of the case. Our aim is not to evaluate the experiment or the researchers’ approach in any way, but to present the German approach as one example of a framework for handling security-relevant research, to discuss its effectiveness, and to present possible ways to further strengthen it. In this context, given the high degree of responsibility assigned to individual researchers, we emphasize the need for continued awareness-raising about the risks of security-relevant research and about possible risk mitigation strategies. Finally, and more generally, we highlight a dilemma researchers and legislators face when dealing with security-relevant research – namely, that by raising awareness of a security-relevant issue, researchers may create or reinforce security risks.
The German framework for handling security-relevant research
As in other countries, the debate about security-relevant research gained traction in Germany from 2012 onward, after researchers in the Netherlands and the USA had published the results of so-called gain-of-function experiments in which they had modified a highly pathogenic avian influenza virus (H5N1) to explore its potential for human-to-human transmission (Herfst et al., 2012; Imai et al., 2012). Their stated aim was to explore the pandemic potential of the virus in order to facilitate pandemic preparedness measures. They, and parts of the wider scientific community, justified the research with the necessity of gaining insights into the risks of further evolution of this pathogen and the expected devastating impact of a potentially resulting avian influenza pandemic. Critics emphasized that such experiments could create or increase the very risks against which their results were supposed to protect in the first place. The H5N1 experiments came to be seen as epitomizing the DURC concept.
In the wake of the international H5N1 debate, the German government tasked the German Ethics Council with providing an Opinion (“Stellungnahme”) on the nexus of the freedom of research and biosecurity. The Council published its Opinion in May 2014, recommending measures to raise awareness of the dual use problem, including national and international codes of conduct, and the establishment of an institutional and regulatory framework to control DURC in Germany (Deutscher Ethikrat, 2014).
At the same time, the DFG and the Leopoldina issued their recommendations for handling security-relevant research, promoting a bottom-up approach that emphasizes the responsibility of researchers and research institutions to contain the risks of security-relevant research (DFG and Leopoldina, 2014). In contrast to the Ethics Council’s Opinion, the DFG-Leopoldina approach is not limited to the life sciences but spans all academic disciplines that might produce security-relevant research results. To support the implementation of their recommendations, the DFG and Leopoldina established the Joint Committee on the Handling of Security-Relevant Research, which carries out outreach and awareness-raising activities, among other things, and serves as a contact point for the local KEFs. 2
Salloch (2018, p. 4) has characterized the German discourse on the handling of dual use research as an example of the “tension between the two options of scientific self-control vs governmental oversight.” The strong emphasis on the freedom of research is rooted in German history: the National Socialists in the 1930s and 1940s restricted academic freedom to the extreme and instrumentalized science for utterly inhuman political purposes. As part of Germany’s efforts to come to terms with this part of its history, freedom of research was codified in Article 5(3) of the German Basic Law (Grundgesetz) (Salloch, 2018, p. 3ff.). 3 Also specific to the German case, some universities have adopted so-called civil clauses, which allow for restricting the freedom of research for the sake of the preservation of peace. Schlögl-Flierl and Merkl (2018, p. 100) provide the following definition: “The Civil Clause is aimed at recognizing military research and its possible dual use. It is a voluntary agreement to engage exclusively in civil research and teaching at universities.” This approach to regulating research, which is discussed controversially in Germany, may lead to situations where research with dual use potential cannot be carried out at all at some universities (Schlögl-Flierl and Merkl, 2018).
In the following, we focus on the approach promoted by the DFG and Leopoldina and discuss the artificial intelligence (AI) experiment described in Urbina et al. (2022) using a catalog of guiding questions developed by the Joint Committee. 4 We chose this approach for two reasons. First, through the Joint Committee’s regular outreach to research institutions and KEFs, this approach has spread widely in Germany and has become the leading model for handling security-relevant research. In contrast, the Ethics Council’s recommendation to enact a legal framework for handling DURC was not implemented. Second, the emphasis on the freedom of research and the trust in scientific self-regulation are characteristic of the German discourse. We therefore present this approach in a hypothetical discussion of a concrete case to explore its utility, and also to contribute to an international discussion about possible approaches to this problem. In this way, we hope to facilitate a comparison of approaches in different countries and to help raise awareness, at a systematic level, of the underlying problem that efforts to raise awareness of certain security risks may reinforce these very risks.
The Experiment: “dual use of artificial-intelligence-powered drug discovery”
The experiment used as a case study in this article was carried out by the company Collaborations Pharmaceuticals Inc. in Raleigh, NC, USA, in 2021, in preparation for a workshop in Spiez, Switzerland. The workshop focused on scientific and technological advances in chemistry and biology as well as on their dual use aspects and possible policy implications (Spiez, 2021). Through its publication (Urbina et al., 2022), the experiment attracted worldwide attention and reactions. 5
Collaborations Pharmaceuticals Inc. normally uses its AI-supported drug discovery software to identify suitable candidate molecules with the greatest possible efficacy and the lowest possible toxicity (Urbina et al., 2022). Computer-aided drug discovery has long been established in pharmaceutical and medicinal chemistry research, both academically and commercially, as a first step in drug development and as a means to establish quantitative structure-activity relationships. More recently, these algorithms have been supplemented by machine learning (see e.g. Chan et al., 2019; Deng et al., 2022; Mak et al., 2022; Vijayan et al., 2022). The dual use aspect of this research field is also beginning to draw attention (e.g. Campbell et al., 2023; Ekins et al., 2023).
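As a rough illustration of the kind of machinery involved, the following minimal sketch shows a QSAR-style toxicity predictor of the sort used to filter out toxic drug candidates. It is our own illustration under stated assumptions: the descriptors, model choice, and toxicity values are placeholders, not the software or training data used by Urbina et al.

```python
# Minimal QSAR-style sketch: predicting toxicity from molecular structure.
# All data and model choices are illustrative placeholders.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles: str) -> np.ndarray:
    """Encode a molecule (given as a SMILES string) as a Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return np.array(fp)

# Toy training set: molecules paired with a hypothetical toxicity score
# (e.g. a scaled negative log LD50). Real models are trained on large
# public toxicity databases.
train_smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]
train_toxicity = [0.2, 0.5, 0.1]  # placeholder values

X = np.stack([featurize(s) for s in train_smiles])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, train_toxicity)

# In the normal drug discovery workflow, candidates with HIGH predicted
# toxicity are filtered OUT.
candidate = "CC(C)Cc1ccc(cc1)C(C)C(=O)O"  # ibuprofen, as an example
score = model.predict(featurize(candidate).reshape(1, -1))[0]
print(f"predicted toxicity score: {score:.2f}")
```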
For their invited contribution to the workshop, the researchers reversed their usual strategy and searched for chemical compounds with the greatest estimated toxicity, using the known and highly toxic chemical warfare agent VX as a point of reference. They modified an AI-based algorithm in such a way that, within less than six hours, it yielded 40,000 compounds estimated to be more toxic than VX (Urbina et al., 2022). It remained unclear how many of these compounds could actually be synthesized and would pose a security threat in reality. However, the results included the chemical structures of known chemical warfare agents other than VX that had not been included in the training of the AI model and were hence generated by the model on its own. It can thus be assumed that at least some of the newly identified compounds that met the target criteria might be equally or more toxic and synthesizable (Urbina et al., 2022).
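Conceptually, the reversal amounts to flipping the selection criterion in an otherwise unchanged generate-and-score loop. The following schematic sketch, again a hypothetical illustration of ours rather than the authors’ actual pipeline, makes this inversion explicit:

```python
import random

def generate_candidates(n: int) -> list[str]:
    """Stand-in for a generative model proposing candidate molecules."""
    return [f"molecule_{random.randrange(10**6)}" for _ in range(n)]

def predicted_toxicity(molecule: str) -> float:
    """Stand-in for an ML toxicity predictor (cf. the sketch above)."""
    return random.random()

def screen(candidates: list[str], invert: bool = False, keep: int = 10) -> list[str]:
    # Drug discovery mode: keep the LEAST toxic candidates.
    # Inverted ("dual use") mode: keep the MOST toxic ones.
    return sorted(candidates, key=predicted_toxicity, reverse=invert)[:keep]

benign_leads = screen(generate_candidates(1000))                       # normal workflow
compounds_of_concern = screen(generate_candidates(1000), invert=True)  # the reversal
```

The sketch underlines a point taken up again below: the misuse scenario requires no new technology; established components are merely re-weighted.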
VX is a nerve agent of the class of organophosphorus compounds. As a known warfare agent with no civilian application, it is listed in Schedule 1 of the Chemical Weapons Convention (CWC), an international treaty prohibiting the use of toxic chemicals for weapons purposes. 6 For CWC member states, strict regulations apply to the possession, manufacture and trade of listed chemicals, and states must declare corresponding activities to the Organisation for the Prohibition of Chemical Weapons (OPCW). 7 AI-supported identification without actual synthesis or production of the molecules is not prohibited, however.
The CWC, one of the most successful international disarmament treaties, entered into force in 1997. With 193 member states, it is almost universal in reach. 8 Nevertheless, there remain concerns – fueled among other things by the use of chemical weapons in Syria and in assassinations – that some states maintain illegal chemical weapons programs (Arms Control Association, 2024). Moreover, non-state actors could obtain and use chemical warfare agents, as has already happened, for example, in Syria and Iraq. 9 Scrutinizing security and dual use risks related to chemical weapons, and raising awareness of these risks, is thus an important contribution to reducing chemical weapons threats.
Hypothetical consideration of the experiment according to the DFG-Leopoldina recommendations
This section scrutinizes the AI-VX experiment from a research ethics perspective, using the guiding questions provided by the Joint Committee of the DFG and Leopoldina. This is an entirely hypothetical undertaking; in reality, this experiment was never considered by a KEF in Germany.
The procedure for dual use assessment according to the Joint Committee’s guidelines comprises three steps. The first cluster of questions is directed at the researchers and aims at a self-assessment from their own perspective. The second cluster concerns an assessment of the experiment by a KEF. The final cluster comprises “key questions for the conclusive assessment and consultation by the KEF.” 10 A KEF would become involved in the assessment of potentially security-relevant research only upon request of the researchers themselves or possibly of other actors in the respective research institution, depending on the institution’s own guidelines. If, after consideration of a case, a KEF finds that there is an immediate, significant risk of misuse, the follow-up procedure likewise depends on the institution’s stipulations, which determine whether recommendations provided by the KEF are binding and what other steps should be taken to handle the research in question responsibly and to mitigate possible risks. Such steps could include a recommendation to limit the publication of research results, for example by leaving out certain security-relevant information, or to consult with experts in the field, as will be discussed below. Since there is no legal framework regulating security-relevant research in Germany, no legal action is to be expected unless the research itself were illegal and carried out nonetheless. The following section applies the key questions to the AI-VX experiment on the basis of public statements by the researchers and our own interpretation of the situation.
“Key questions for researchers the answer to which may suggest the need for consultation by the KEFs”
The DFG-Leopoldina Joint Committee recommends that researchers use three guiding questions for an initial self-assessment of their work in potentially security-relevant areas. First, they are to consider whether the planned research is security-relevant as defined by the DFG and Leopoldina. Second, they should reflect on whether cooperation partners involved in the research project could pose security risks, and third, they are to identify possible conflicts with legal regulations requiring scrutiny by a compliance office.
Since the researchers carried out the AI-VX experiment upon invitation to a chemical disarmament-related conference, and since they designed it as a proof-of-concept study of its dual use potential, they were aware of possible security implications of their work upon starting the experiment, though not before (see Urbina et al., 2022). It was unclear to them, however, just how immediate and significant this dual use potential would be (Urbina et al., 2023a; see also Calma, 2022). After seeing the results of the experiment, the researchers concluded that this potential was significant; they took measures to limit the risks stemming from their research and refrained from conducting follow-up research that could have enhanced the immediate risk of misuse even further (Urbina et al., 2022, 2023b, p. 691). In their deliberations on how to proceed with their results, they weighed concerns that publication might incite malign interest in this kind of work against concerns about leaving the wider AI and drug discovery community unaware of the dual use implications of their work. After consultation with security experts, they opted for publication of the results as a contribution to raising awareness, particularly in the AI community, where the dual use risks of drug discovery had not been discussed previously (Urbina et al., 2022). However, as part of their risk mitigation strategy, they omitted the most critical details, such as the exact algorithm and the structural formulas of the novel highly toxic compounds. They also declined requests from government agencies to pass this information on to them (Wired, 2022).
The second question relates to the problem that cooperation partners may pose security risks. In the case of the AI experiment under scrutiny here, no cooperation partners were involved; only after they had realized the security implications of their research did the researchers partner with chemical disarmament and security experts to identify appropriate ways of handling their results (Urbina et al., 2022).
As for legal regulations, since the experiment involved only computational structure-activity predictions and no synthesis of a chemical warfare agent, the prohibitions of the CWC do not apply. Had the researchers intended to publish or transfer their results in full detail, which they did not, it would have been prudent to inquire whether national dual use export regulations applied. 11
“Key questions for processing the query by the KEFs”
If a KEF is contacted about a case of potentially security-relevant research, it, too, can use guiding questions when assessing the case. It could, first, consider the specific objectives of the planned research. Second, it could check whether it has the necessary expertise at its disposal to handle the case or whether external experts should be consulted. Third, the KEF could discuss whether it is able to identify the risks and benefits of the planned research and to perform a risk-benefit analysis based on the given state of knowledge and information. Fourth, the KEF could assess whether any security-relevant results, and the risks they pose, are of a novel nature. Fifth, the KEF could try to estimate the probability of security-relevant research results being disseminated and enabling immediate and concrete misuse. Likewise, it could estimate the potential damage resulting from misuse and determine the availability of suitable countermeasures. Finally, the KEF could consider any detrimental effects that could arise if the proposed research were not carried out. The following considerations would likely be relevant in an evaluation of the AI-VX experiment by a KEF.
The researchers designed the experiment as a proof-of-concept study in response to an invitation to a workshop dealing with security-relevant research in chemistry and biology. Their objective thus was to contribute to this discourse and to add insights into the dual use potential of AI in drug discovery, which had not been considered before. The researchers did not indicate any motivation or research interest beyond the purposes of this workshop, and they were already aware of potential security implications of their work from the start (Urbina et al., 2022).
Regarding expertise, a KEF can be composed of experts from various disciplines, depending on the respective research institution, and potentially security-relevant research can occur in many disciplines and take many different forms. Research proposals under consideration by a KEF will probably be highly specialized, so recourse to outside expertise will likely be helpful in many cases. In the present case, expertise in chemistry, IT/artificial intelligence, chemical weapons disarmament and security policy would be essential to assess the security implications of the experiment.
Provided that the KEF has the necessary expertise at its disposal, the state of knowledge regarding the experiment would likely be sufficient to consider its potential risks and benefits, although the actual outcome of the experiment far exceeded the researchers’ expectations (Urbina et al., 2023b). The problem of dual use research has been discussed with regard to chemical weapons, including at previous Spiez Convergence Workshops. There is also information available on the potential role of AI in chemical weapons (non-)proliferation (e.g. Krin and Jeremias, 2023), though according to Urbina et al., this has not yet reached the field of AI-supported drug discovery (Urbina et al., 2023b). Furthermore, enough knowledge is publicly available regarding the chemical structures and synthesis of toxic substances, including known chemical warfare agents, to at least roughly estimate the feasibility of synthesizing novel highly toxic compounds generated by the AI experiment. Thus, while it might not have been possible to determine all risks in advance, there is a sufficient knowledge base to make an informed approximate assessment of potential risks.
The risk of the use of highly toxic chemical agents for criminal, terrorist or other weapons purposes is not new, and there are numerous known toxic compounds that could be employed with less effort. However, novel substances identified by AI algorithms on the basis of their toxicity, if synthesized, weaponized and used, could be more difficult to detect or identify than known agents, and harder to attribute to a perpetrator. Their production and use as chemical warfare agents would be prohibited under the CWC, regardless of whether their structures and precursor materials are listed in the CWC schedules. However, CWC member states are not obliged to declare toxic chemicals not listed in the schedules, and trade in unlisted precursors is less strictly regulated. Depending on the precursors required for novel compounds, the clandestine production of smaller quantities could therefore be easier, provided that synthesis is feasible. In that case, existing risks of chemical weapons proliferation and use could increase.
As regards the AI component, the use of AI to identify highly toxic compounds is a common procedure in drug discovery, where it is usually applied to exclude compounds whose high toxicity makes them unsuitable for pharmaceutical purposes. However, the use of a chemical warfare agent as a reference parameter and the explicit focus on identifying (rather than excluding) highly toxic compounds represent a novel perspective (Urbina et al., 2022). While the experiment would therefore not create fundamentally new risks, it could draw attention to the risks of misuse and the applicability of AI to predicting chemical warfare agents. 12 In addition, the use of AI and machine learning increases the amount of data that can be processed and the speed at which results can be obtained, and it expands the range of actors who could use this method to identify potential highly toxic compounds. Existing risks could therefore be exacerbated, and results generated through this experiment would probably be novel and could not simply be reproduced from previous research.
The experiment was designed as a virtual experiment without any intention of synthesizing new highly toxic compounds. If the exact algorithm and chemical formulas identified in the experiment were to be published, and later reproduced by actors with malign intent, and if synthesis proved feasible for some of the novel highly toxic compounds, there could indeed be an immediate risk of misuse. This would presuppose sufficient financial resources, time, expertise and equipment to reproduce the AI calculation and to synthesize highly toxic chemical compounds. Moreover, novel substances would need to be tested for their usability and utility in chemical attacks. 13 However, while all this would potentially pose challenges, the obstacles to applying the research results would not be insurmountable.
If novel warfare agents were predicted, synthesized and used, considerable damage could be assumed. While the use of a chemical agent would likely be detected quickly by the appearance of characteristic symptoms, it could take some time for a novel toxic substance to be precisely identified and for suitable countermeasures and treatment methods, beyond standard first response measures of decontamination and personal protection, to be determined and, if available and effective, to be applied. The risk to an unprotected population would probably be high. In sum, if a novel chemical agent identified in the experiment were to be used effectively, the potential damage could be severe.
The experiment could also provide benefits by raising awareness of the possibility of identifying new highly toxic compounds with comparatively simple means using AI (Urbina et al., 2022), and by highlighting the security-relevant nature of such research more generally. Consequently, the underlying method, which is widely established for drug discovery, could be included in future discussions about chemical weapons non-proliferation and disarmament measures. While the dual use potential might be obvious in the case of VX as a reference substance, it may be less obvious for other reference substances. The experiment could therefore contribute to heightened risk awareness among other researchers in the fields of AI and drug discovery (see Durrani, 2022) as well as among control authorities and decision-makers in the security policy realm. Moreover, forgoing this experiment or its publication would not rule out that comparable activities could be carried out by other actors in a less responsible manner or covertly with malicious intentions (Urbina et al., 2022).
“Key questions for the conclusive assessment and consultation by the KEFs”
The Joint Committee of the DFG and Leopoldina finally provides five key questions that a KEF might employ in its final assessment of a given case. The first question asks whether the research in question could pose direct and immediate risks of misuse. The second inquires whether the project should “be reassessed by the KEF at a more advanced stage when the security-relevant risks can be judged more easily.” The third considers the compatibility of the project with constitutional principles and the guidelines of the institution in which the research is carried out, while the fourth addresses possible risk mitigation strategies, such as imposing conditions on the planned research or adopting an adequate publication strategy. Finally, the Joint Committee raises the question of how researchers involved in the project can “be made aware of the ethical aspects of security-relevant research.”
In classifying the experiment under scrutiny here as security-relevant, the researchers themselves reflected on its dual use potential and the possibilities of misuse (see Durrani, 2022). The application of the Joint Committee’s key questions points in a similar direction. Since the potentially dangerous compounds were not synthesized, and since the researchers did not disclose details of their research results, the risk of misuse might not be immediate, but it would still be high if the synthesis of some of the novel highly toxic compounds were feasible with commonly available equipment and expertise (Durrani, 2022).
The capability to produce such substances would more likely reside with a state actor, especially one that already possesses the highly specialized know-how and infrastructure necessary for the production and dissemination of chemical warfare agents. For non-state actors, the hurdles to using such substances for terrorist or criminal purposes would likely be higher. Precursor materials for known chemical warfare agents are controlled for OPCW member states and by national export control regulations and would thus be hard to obtain. If novel highly toxic compounds identified by the AI algorithm could be synthesized from unregulated precursors, acquisition of these substances might be easier. And if such substances were actually synthesized, weaponized and used in an attack, the resulting damage could be significant. Urbina et al. point out that they used fairly standard and widely available equipment, such as a standard computer, open-source AI models, and freely accessible toxicity databases, to train the AI for the experiment (see Calma, 2022). It would be difficult to assess how easily the experiment could be reproduced and its results applied for malign purposes, but a thorough risk assessment would have to take this possibility into account.
Compatibility of the experiment with the institution’s principles and guidelines would have to be assessed on a case-by-case basis. In carrying out their experiment, the researchers did not consult any ethical guidelines (Urbina et al., 2023b, p. 692). In their reflections, however, they emphasize that ethical guidelines are available for more general aspects of chemistry and AI, and they suggest developing more specific recommendations “tailored to the AI in drug discovery community” (Urbina et al., 2023b, p. 692). Such guidelines could also be consulted by a KEF when formulating its final assessment.
Regarding risk mitigation strategies, the researchers decided against withholding their findings entirely and opted for a limited publication of the results, leaving out information that would be crucial for reproducing this exact experiment and exploiting its results for malign purposes. In addition, they embedded their report on the experiment in a discussion of its security implications and devised 10 recommendations regarding the future handling of such research (Urbina et al., 2023b). 14 In their own risk-benefit analysis, they concluded that these measures sufficiently reduced the security risks and that the positive effect of raising awareness of the dual use aspects of AI in drug discovery outweighed the benefit of suppressing the research results entirely (Urbina et al., 2023a, 2023b; see also Calma, 2022).
As regards raising awareness among researchers of the dual use aspects of their work, the researchers of this particular experiment were already sensitized to this problem by the framework conditions of their project. Generally, as the researchers emphasize in their own reflections, awareness of the dual use potential of AI in drug discovery is low, and the topic should be included in curricula not only in the field of chemistry but also in computer science (Urbina et al., 2023b).
Conclusion
The experiment described in Urbina et al. (2022, 2023a, 2023b) shows that a KEF would have to examine any specific case in a detailed and well-reflected way. The hypothetical consideration of this particular experiment shows that the questions provided by the Joint Committee can guide and support this process effectively, even if individual points of the evaluation likely remain open to interpretation. It also illuminates two areas in the approach promoted by the DFG and Leopoldina that might merit further attention. First, in a bottom-up approach emphasizing the responsibility of the researchers, the onus of identifying potentially security-relevant research and of involving a KEF or ethics committee is on the individual researchers. This requires sufficient awareness of the security-relevant aspects of the given research field as well as of risk mitigation strategies, such as an adaptation of the publication strategy or the consultation of external experts to discuss potential security implications of the research. It also presupposes a willingness to pay heed to potential security risks and to accept associated delays or roadblocks in the research process. In short, it requires an academic culture in which the consideration and responsible handling of potential security risks is ingrained. To foster such a culture of responsibility, as also envisaged by the DFG and Leopoldina in their recommendations, intense and ongoing awareness-raising efforts are necessary.
Second, as the analysis in this article shows, it can be important to consult experts from within and possibly outside the institution to support the researchers and the KEF in their assessment and to arrive at an informed and realistic evaluation of the risks and benefits of potentially security-relevant projects. However, the involvement of others in the scrutiny of an experiment may raise issues regarding confidentiality, data protection (which is particularly salient in Germany and other EU member states, given their rather strict data protection regulations), and the unintentional dissemination of sensitive information. The discussion of a potentially security-relevant case by the local KEF will likely be unproblematic in this regard, since the institution’s regulations regarding confidentiality and data protection would probably apply to all KEF members. However, additional steps might be advisable to ensure the same level of confidentiality and data protection when external experts are involved. The host institution could, for example, conclude ad hoc confidentiality and data protection agreements with the external experts. In addition, care should be taken that the selection of experts is in line with the objective of the DFG-Leopoldina recommendations to minimize risks, including those pertaining to cooperation with others.
Similarly, some ethical considerations may be necessary if cases of security-relevant research are to be discussed with other external actors, such as members of other KEFs or ethics committees. Such discussions would require careful balancing of the benefits of sharing experiences and best practices in the handling of security-relevant research – particularly useful given the strong need for increased awareness identified above – against the risks of contributing to the dissemination of sensitive, security-relevant information. To mitigate these risks, researchers might apply similar caution and restrictions to those recommended for the publication of security-relevant research results.
Taking a broader view of research ethics, the discussion of the case of AI-powered drug discovery can be generalized to similar research. Whenever research itself is aimed at highlighting a previously unknown or unnoticed threat, we find ourselves in the dilemma we are dealing with here: raising awareness of a potential risk of misuse can not only serve to safeguard against threats, but can also provide actors with malicious intentions with guidance on appropriate strategies to turn these intentions into action. Weydner-Volkmann and Cassing (2023) drew attention to similar problems caused by “researchers in the attack role” with regard to IT security research: in “cybersecurity ethics,” there is already an awareness of the ambivalence of security research, which could be addressed more strongly in research ethics with a view to dual use. In addition, the question should be asked as to what the balance of resources should be between investing in the detection or even development of new threats, on the one hand, and research to protect against existing threats, on the other. 15 This question arises analogously, for example, in the areas of gain-of-function research on pathogens and cybersecurity. 16 The trade-off is also difficult because the two goods at stake (security vs progress in knowledge and sensitization) are ultimately incommensurable: the good intention, that is, the purpose of raising awareness, must be weighed against the possible result, that is, damage caused by a highly toxic substance created in this way, at least in theory. From an ethical perspective, the approach taken in Germany ultimately places this difficult balancing act in the hands of the researchers and the KEFs as organs of their ethical self-regulation. A comparative look at other countries would be enlightening to assess how things are done there and to discuss the advantages and disadvantages of different approaches in more detail. We hope that this article will stimulate such further comparative discussions in research ethics with regard to dual use.
Acknowledgements
The authors are members of the Joint Committee on the Handling of Security-Relevant Research of the German Research Foundation (DFG) and the German National Academy of Sciences Leopoldina. They have written this article in their personal capacity and are expressing their personal opinions only. The authors wish to acknowledge the helpful feedback by Dr. Johannes Fritsch on an earlier version of this text, the constructive comments provided by the anonymous reviewers, and the valuable research assistance by Louise Lüdke. Parts of the section on key questions for KEFs and of the conclusion were translated from an earlier version with the support of Deepl.com.
Correction (June 2024):
The article has been updated; for further details, please see the “Declaration of conflicting interest” section at the end of the article.
Declaration of conflicting interest
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Thomas Lengauer is a shareholder in the company BioSolveIT GmbH, Sankt Augustin, Germany, which develops and licenses software for the early phases of drug development.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Ethical approval
The authors declare that research ethics approval was not required for this study.
