Abstract
This article addresses the question of how the responsibilities for addressing the risks of dual use research ought to be divided. We begin by presenting the maximalist claim: since scientists are well placed to judge the potential for misuse of their studies, they alone are responsible for addressing these risks. Before assessing this position, we consider a claim that rejects it, namely that scientists need not consider the possibility that their studies might be misused because the goods of science are so important that they should not spend time on anything but generating valuable knowledge. This claim, we argue, fails, as these goods do not always outweigh the risks of misuse. Given this conclusion, we turn to assess two versions of the maximalist claim. The first suggests that when a possibility of misuse arises, scientists ought to adopt the precautionary principle (PP) to discharge their moral responsibilities. We argue that PP is problematic, as it does not give much guidance on what scientists should do. An alternative way of meeting scientists’ moral responsibilities is through applying a risk-benefit analysis; however, due to epistemic biases and limitations, scientists are prone to make mistakes in their analysis. We thus suggest an alternative approach, in which responsibilities are divided between scientists and agents that can conduct an analysis more likely to generate unbiased and comprehensive conclusions on how the risks of dual use research should be addressed.
Dual use research (DUR) has been at the center of attention of policymakers, politicians, scientists, editors, funders, and the general public for some time now. Recently the World Health Organization (WHO) published two reports (World Health Organization, 2021, 2022) that discuss two aspects of DUR: (1) what areas of research are likely to raise DUR-related concerns in upcoming years; and (2) what principles and values should guide scientists, editors, funders, and policymakers when addressing DUR. Governments such as those of the U.S. and the Netherlands have also been developing policies to address DUR (Bureau Biosecurity, 2023; Science and Technology Policy Office, 2015). All these activities underscore the importance given to the risks such research embodies. Yet, it is important to stand back and ask who should bear the responsibilities for addressing the biosecurity risks of DUR. Answering this question could help us gain moral clarity and also design improved decision-making procedures.
The question of the distribution of moral responsibilities has received a variety of answers, among them that scientists themselves should bear such responsibility (Douglas, 2003; Kuhlau et al., 2008), that the state should be the one responsible, or perhaps that responsibility should be shared between scientists, research institutions, and the state (Evans et al., 2022; Selgelid, 2007). In this article, we examine various answers that have been put forward for dealing with the issue, and argue that the most plausible position is one in which responsibilities are divided. Focusing on the moral responsibilities of scientists within the suggested division of responsibility, we propose that scientists should have a narrow, yet important, set of duties: to be on the lookout for DUR-related risks, to submit their risky studies for review by those who are better placed to evaluate the risks, and to ultimately abide by the latter’s recommended measures, whether these are to continue as planned, to increase biosafety measures, to postpone publication, or something else.
Let us clarify the scope of the article. We address the question of moral responsibilities pertaining to DUR in life sciences research only, and focus specifically on biosecurity risks. We do not address how such responsibilities should be dealt with in the context of the social sciences, for instance. One reason for focusing on the life sciences is their centrality within the contemporary DUR debate, possibly explained by incidents like the H5N1 studies (described in Section “Dual use research”). 1 A second reason pertains to a seeming difference between the risks life sciences DUR embodies and the risks that might emanate from DUR in the social sciences. The life sciences biosecurity risks stem from the possibility that malicious third parties would misuse the research to cause harm. As the case of the H5N1 virus shows, these risks are very different from, and often much graver than, those arising in the social sciences. Moreover, in the case of the life sciences, there is a more pronounced gap between the kind of knowledge and expertise held by the relevant scientists and the risks that DUR poses, which involve agents of a kind the relevant scientists do not study (malevolent human agents). Because of the nature of the risks—serious and not transparent—the question of the division of responsibilities in the case of life sciences DUR becomes urgent and complicated, and may raise additional challenges. This is not to suggest that it is not a complicated matter in the social sciences as well. Indeed, analyzing the question of moral responsibilities in the life sciences, we hope, will shed light on how such responsibilities should be assigned in other fields of research.
The article unfolds as follows. Section “Dual use research” elucidates the notion of DUR and the risks it embodies. Section “The common argument” presents a common argument for the maximalist claim that scientists have a responsibility to take steps to make sure that their work would not be misapplied: since scientists are well placed to judge the potential for misuse of their studies, they should take steps to address such risks. We then (Section “Ignoring the risks of misuse”) consider an objection to the maximalist claim, namely that scientists need not consider the possibility that their studies might be misused by others. After dismissing this objection, we consider two versions of the maximalist claim. The first suggests that when a possibility of misuse arises, scientists ought to adopt the precautionary principle (PP) to discharge their moral responsibilities (Section “The precautionary principle”). We argue that PP as such is problematic, as it does not give much guidance on what should be done. An alternative to meeting scientists’ moral responsibilities is through applying a risk-benefit analysis (RBA) to their studies: we suggest (Section “Risk-benefit analysis”) that due to epistemic biases and limitations, scientists are prone to make mistakes in their analysis.
We thus suggest an alternative approach (Section “Sharing responsibilities”). We propose that scientists should submit their study for an RBA to be performed by a different agent, one that is not prone to the same biases, and is less prone to these epistemic limitations. The state could use existing ethics committees or create new ones that include scientists, security experts and others, thereby conducting an analysis that is more likely to generate unbiased and comprehensive conclusions. We end by discussing some complications that this suggestion might raise.
It is important to note that accounts of who should bear the responsibilities for addressing DUR risks have been proposed in the literature, as laid out below. This article assesses the most important such accounts and finds each of them wanting. In considering each position, the article develops an account that has been lacking in the literature. Indeed, it is quite surprising that, given the extensive discussion of dual use research, such an account has not been offered. Thus, the article takes into consideration some of the major views proposed so far, scrutinizes them systematically, and ultimately offers a solution to the question of how responsibilities ought to be divided, and thus to the question of how such research ought to be assessed.
It should be noted that because our focus here is on normative accounts of the responsibilities for addressing DUR-related risks, of the ample literature that deals with such risks we engage only with work that directly discusses this normative question. And because of limitations of space, even within that literature we must be selective, engaging only with what we deem the most important extant accounts of the division of responsibilities for addressing DUR risks. This is, of course, an unavoidable limitation of this kind of study.
Dual use research
To start, we must first clarify the notion of dual use research. Generally speaking, DUR refers to well-intended research that generates beneficial knowledge that could also be misused to cause harm (Selgelid, 2009: 720). DUR could arise in many fields of research including behavioral studies, physics, computer science, neurology, virology, and more (World Health Organization, 2021). Yet in the last 20 years the term has been used most often to refer to life-science research, where various studies have raised biosecurity concerns (in contrast to behavioral studies that could raise social concerns like discrimination; Cello et al., 2002; Jackson et al., 2001; Tumpey et al., 2005). Such biosecurity concerns could be illustrated by the H5N1 influenza studies controversy that erupted in 2011/2012 (Herfst et al., 2012; Imai et al., 2012). These studies used various methods to generate an H5N1 influenza virus that could be transmitted via the air by respiratory droplets. The papers described the methods and the genetic mutations that enabled the virus, which otherwise lacked the capacity to be transmitted via the air between mammals, to gain this new capacity (Imperiale et al., 2018; Resnik, 2013). These studies, also labeled “gain of function studies,” were approved and funded by the US National Institute of Allergy and Infectious Diseases (NIAID); moreover, the WHO called for conducting such studies (Fauci and Collins, 2012). They were undertaken because the scientific and health community considered the virus a serious threat to human health, since its mortality rate is extremely high. Thus, these studies were deemed extremely important and have generated important knowledge (Imperiale et al., 2018).
The controversy around these studies arose because the information they generated, it was argued, could enable nefarious agents to engineer this highly pathogenic virus and release it, thereby unleashing a deadly pandemic. Since the studies delineated the mutations that enabled the virus to be transmitted via the air between mammals, and described how to produce such a virus, they were deemed dangerous.
The H5N1 case helps to delineate the biosecurity risks of life-sciences DUR:
Informational risks: malevolent agents could use published information, or gain unauthorized access to such information, to cause serious harm. For example, nefarious agents could use the H5N1 mutational information to engineer a deadly virus and release it.
Material risks: malevolent actors could gain access to products such as dangerous pathogens and use them to cause harm. For instance, rogue agents could gain access to the engineered H5N1 virus and use it to cause harm.
Methods risks: beneficial research could generate valuable technologies and methods, and bad actors with access to such technologies could employ them to cause harm (Lev and Rager-Zisman, 2014). The H5N1 studies described methods to create the airborne H5N1 virus.
The common argument
Failures of scientists, or of other human agents, to meet their moral responsibilities can be divided into three main categories of wrongdoing: intentional wrongdoing, recklessness, and negligence (Douglas, 2003; Tannenbaum, 2018). Since DUR is defined as well-intended research, the first category, by definition, does not apply to DUR. The question is, therefore, under what conditions DUR-related actions constitute reckless or negligent wrongdoing. Reckless actions are those in which one knowingly but unintentionally puts others at risk in unjustified ways. Negligent actions are those in which an agent does not know that their actions unjustifiably put others at risk of harm, but should know that they do (Douglas, 2003: 161; Tannenbaum, 2018: 127). Both categories might potentially apply to well-intended scientific research, and hence to DUR.
It might be argued that some DUR, for instance the H5N1 studies, is reckless. While the scientists involved intended well, they knowingly created a “blueprint” of a dangerous virus that could enable a bad actor to cause serious harm. They exposed others to unreasonable risks not just because the risks were extensive, but more importantly, because they were unnecessary (Lipsitch and Inglesby, 2014). While there is controversy about how serious the danger from the engineered virus was, or about what the scientists knew about the risk (Imperiale et al., 2018), the important point is that recklessness, if established, would render the scientists blameworthy. To avoid acting recklessly one must refrain from unreasonable actions that one knows could cause unjustified harm to others.
Characterizing negligence requires a standard against which we can evaluate whether the agent should have known that an action could be harmful. The common standard is reasonable foreseeability; that is, a standard set by what is expected of a reasonable person, one who treats others with proper regard and spends sufficient time and energy evaluating the possible consequences of their actions (Tannenbaum, 2018: 127). Thus, it might be suggested that we can judge whether scientists have moral responsibilities regarding DUR by determining whether they can reasonably foresee possible misuses of their studies. Douglas (2003) argues that scientists are well placed to assess the benefits and risks of their studies. They know best what their studies could lead to in terms of benefits and, more crucially, of risks, including those of misuse. She thus proposes that scientists have a responsibility to conduct an RBA and act accordingly (Douglas, 2003: 62, 66). Kuhlau and colleagues (Kuhlau et al., 2008: 481–482) also argue that scientists have moral responsibilities regarding their research because they can reasonably foresee potential misuses and can assess the risks and benefits of their studies. They suggest that while scientists cannot prevent biosecurity risks from materializing, they can minimize their magnitude and likelihood (Kuhlau et al., 2008: 483). They thus claim that scientists have a duty to consider the negative implications of their research, and “to consider whether to refrain from publishing or sharing sensitive information” which could be misused (Kuhlau et al., 2008: 484). Note that this duty is supposedly placed on the scientists themselves, who must make their own decisions about publication or sharing information. While Kuhlau and colleagues are not entirely clear on how scientists ought to decide, they nonetheless point at scientists as responsible for making the decisions (Kuhlau et al., 2008). Douglas might remove some of this unclarity by suggesting that scientists should assess the risks and benefits of publication and act accordingly.
Thus, Douglas and Kuhlau et al. both argue that scientists bear responsibilities with regard to their research, specifically with regard to its potential to be misused by bad actors. While there are some differences between them, the important point is that both suggest that most, if not all, of the responsibilities for decisions regarding the research, including its publication, rest with scientists themselves. They alone should address the risks of misuse.
Is their position plausible? We start by examining a position that rejects this claim, and instead suggests that scientists’ moral responsibility is to generate valuable knowledge, and that they are therefore permitted, indeed ought, to ignore the biosecurity risks of their research.
Ignoring the risks of misuse
Several authors have considered a position put forward by scientists throughout the years, namely, that scientists should focus on generating valuable knowledge and can permissibly ignore any risks of misuse of their studies. A stronger version suggests that they ought to ignore the risks of misuse (Douglas, 2003, 2014). If one of these claims is true, then the common argument is of course unsound. Whichever version of this claim one endorses, there are three ways to construe the argument for it (Sieghart et al., 1973).
One way is to suggest that because science generates knowledge that is intrinsically and/or instrumentally enormously valuable—knowledge that helps to improve people’s well-being—there is an overriding obligation to generate such knowledge regardless of potential misuses. This version of the argument focuses on the overriding value of the knowledge produced by science. A second argument, couched in a role-morality framework, focuses on the fact that it is scientists’ role to produce valuable knowledge. According to this argument, because it is scientists’ role to produce this (incredibly valuable) good, it is not their role to consider possible misuses of their science (Sieghart et al., 1973). To bolster these arguments, one could add that spending time considering possible misuses will hinder the production of the valuable knowledge that science produces. Scientists should therefore focus only on producing knowledge (Douglas, 2003; Douglas, 2014). A third argument is an argument from overdetermination: because scientists cannot prevent other scientists from pursuing lines of research, even if a particular scientist decides not to pursue a line of research because of its potential misuses, someone else is likely to pursue it. Or so it is claimed (Douglas, 2014; Sieghart et al., 1973). If that is so, we are faced with a case of overdetermination: regardless of what one scientist chooses to do, the risky research would be done anyway. Accordingly, no one can be blamed for such an outcome. While this argument, on its own, only establishes that there is no point in refraining from a line of research because of the risk of misuse, when coupled with the claim about the value of scientific knowledge, it can support the claim that scientists may, indeed should, ignore risks of misuse of their research.
While we do not deny that scientific knowledge is both intrinsically and instrumentally valuable, we argue that these three arguments fail. It is true that scientific discoveries have helped save millions of lives, solved decades-old puzzles, and significantly enhanced our understanding of the universe. Nonetheless, it does not follow that scientists are permitted to overlook the risks of misuse of their research. The argument from the incredible value of scientific knowledge fails because the risks from science can also be very high, indeed so significant that the value of the knowledge produced is outweighed by the dangers. Moreover, if it were true that the value of scientific knowledge was so great that it outweighed all possible risks of misuse, then its value should also outweigh all, or at least most, other values; but then surely the way all known societies allocate resources to science would be problematic, since they do not devote enough resources to science, and instead allocate resources to promote goals whose value is overridden by that of science. This implication is highly implausible, which suggests that scientific knowledge is one good among many, and that some other goods are perhaps even more important (Sieghart et al., 1973).
The argument from role-morality is also implausible. Indeed, in virtue of their role as pursuers of scientific knowledge, it is sometimes primarily the role of scientists, and of no one other than scientists, to consider possible risks emerging from their research. For as Douglas (2003: 64–65) claims, there are historical examples from the study of nuclear physics and recombinant DNA that show that sometimes it is only scientists who can identify potential risks emerging from such research. In such cases, scientists have a duty to consider those risks, and at the very least alert others to them (which scientists have indeed done in these cases).
The argument from overdetermination is also weak. The claim that a policy of avoiding certain lines of research to prevent possible risks will never be effective, because there will always be other scientists who pursue similar lines of research, is an empirical claim that is not supported by the historical record (Sieghart et al., 1973). If a line of inquiry is widely recognized as problematic, scientists might collectively decide to refrain from pursuing it. Whether some scientists would nonetheless always pursue the widely avoided line of research, when some of the most significant sources of motivation for research, such as prestige, are unavailable, is highly questionable. Moreover, whether such scientific renegades would always succeed in their research without community support is no less questionable. Again, the study of recombinant DNA presents a case in point: scientists successfully imposed a moratorium on themselves until the risks of this kind of research were assessed and addressed.
Thus, arguments suggesting that scientists may ignore the risks of their research are rather weak. If this is so, then scientists should consider the risks of misuse of their research, as the common argument suggests. But how should scientists discharge their duty to address risks of misuse? In what follows we assess two suggested ways of doing so.
The precautionary principle
We shall first assess the claim that when scientists can reasonably foresee that their research could be misused to cause harm, they should adopt the precautionary principle to guide their actions (Kuhlau et al., 2011; Resnik, 2013). To assess this claim we should clarify the precautionary principle (PP) itself. While there are many formulations of the principle, for our purposes we consider formulations articulated specifically in the context of discussions of DUR (Resnik, 2013). The following are two such formulations: When and where serious and credible concern exists that legitimately intended biological material, technology or knowledge in the life sciences pose threats of harm to human health and security, the scientific community is obliged to develop, implement and adhere to precautious measures to meet the concern. (Kuhlau et al., 2011: 8)
And: [T]he basic idea of the precautionary principle is that we should take reasonable measures to avoid, minimize, or mitigate harms that are plausible and serious. (Resnik, 2013: 28)
How should we understand these statements? There are a few questions we need to ask in order to make this principle operational, such that it could give guidance to scientists. Gardiner (2006) has listed some of these questions: How extensive should the harm be to trigger PP? Under what level of uncertainty would it be reasonable to use PP? What are the measures that can be regarded as precautionary?
Resnik and Kuhlau et al. seem to suggest an answer to the first question, regarding the magnitude of the harm: harms need to be serious. However, what counts as serious harm is debatable: is it a risk of death or injury to a few people? To hundreds of people? To many more? If we use the H5N1 example, because the virus had a high mortality rate in its wild type, the fear was that if it gained the capacity to be transmissible by air between mammals, it could cause a mass pandemic. The New York Times even called it “A Doomsday Virus” (New York Times, 2012; Resnik, 2013). It would seem that the alleged risks of harm in the H5N1 case would meet the criterion of serious harm. 2 Even if this is so, it is still not entirely clear how much harm should trigger the precautionary principle; unless a clear threshold is determined, there are likely to be many cases in which such a determination will be highly controversial.
The second question concerns the level of uncertainty under which it would be reasonable to use PP. Should PP be triggered by what Resnik (2013: 27) calls “Chicken Little fantasies”? Both Resnik (2013: 27) and Kuhlau et al. (2011: 5) suggest a negative answer, referring to “credible” and “plausible” threats of harm. This formulation avoids the criticism that PP is problematic because it calls for precautionary measures against far-fetched risks supported by little evidence (Gardiner, 2006). But how do we determine that a risk of harm is credible and/or plausible? This will require gathering data to determine whether the probabilities of the risk are large enough. PP, under the understanding addressed here, calls for a serious assessment of the risks, and for determining how serious they are, before deciding which measures to take. It might be, as Resnik (2013) and Kuhlau et al. (2011) suggest, that for PP one could delineate the risks without complete evidence, whereas RBA requires objective probabilities, and that if those are available, PP may no longer be relevant. Notwithstanding, the question of how much evidence one should gather to trigger PP still stands: what is sufficient evidence? Such vagueness does not give individual scientists sufficient guidance on how to discharge their duties. Moreover, PP only requires assessing the risks, while ignoring the benefits of the suggested study (Gardiner, 2006), thereby weakening the plausibility of adopting PP in the first place.
Let us assume that these issues can be resolved; there remains a third question Gardiner raises, namely, what measures should be regarded as precautionary? Banning the study, doing the study without wide dissemination, publishing it in a redacted form, delaying its publication, etc., all seem possible candidates, and many other measures could be regarded as precautionary (Kuhlau et al., 2011; Resnik, 2013). The choice of precautionary measures will depend on the credibility and the seriousness of the threat and, as Resnik (2013: 27) suggests, also on an assessment of their effectiveness. But here we see why PP does not give sufficient guidance: since the magnitude and probabilities of the risks are not quantified in a precise way, the precautionary measures might be too weak if the threat is underestimated, or too severe if it is overestimated. It would be rather speculative to decide on one precautionary measure rather than another. On what would the decision be based? Even the suggestion that the decision about which precautionary measure to take should be based on an assessment of effectiveness seems problematic: how can effectiveness be determined if the risks and benefits are unclear?
A related concern pertains to determining whether a measure is indeed precautionary. The case of H5N1 illustrates this problem: there are risks associated with studying the virus, and risks from not studying it, namely that it might naturally evolve to gain the capacity to be transmitted via the air between mammals. PP might then be unhelpful, since it will not be clear what should be done when both studying the virus and not studying it involve risks. Is banning the study, redacting it, delaying it, etc., a precautionary measure, given that all these measures might prevent some risks but introduce others?
Some writers (John, 2010; Kuhlau et al., 2011) suggest that PP is more of an inspirational principle that directs attention to the risks associated with one’s actions: “The strength of the PP, in our view, is that it inspires to reflect upon science’s role in society and how scientific developments should be directed and justified by informed choices and decisions” (Kuhlau et al., 2011: 6). Granted, scientists should reflect on science’s role and effects on society. But the question is what form this reflection should take. PP does not seem to suggest a clear enough answer.
Moreover, even if PP can be formulated more precisely, if the purpose of requiring scientists to abide by it is to prevent serious, credible risks, we must ask whether life scientists are in a good position to assess whether the threat of misuse of DUR crosses the relevant thresholds. As we have argued, a plausible formulation of PP stipulates that the risks must be plausible and serious, and not far-fetched. While the threshold for triggering PP is not entirely clear, let us assume that some evidentiary threshold, and some level of seriousness, must be met for a threat to be regarded as both plausible and serious. Still, it is not clear how scientists are to assess whether the risks of misuse cross this threshold. The risks of misuse by malevolent actors will not be readily accessible to scientists. This kind of information is usually classified, and most often it is security personnel, rather than scientists, who have access to it (Selgelid, 2007, 2009). If this is so, how can scientists decide whether the risks of misuse are far-fetched or credible? Scientists are not likely to be able to make those assessments in a reasonable and accurate way. They can make educated guesses, but these would be too speculative to justify precautionary measures. If the actual risks are small but scientists overestimate them, they might employ precautionary measures that are much too severe for the situation, thereby both harming the accumulation of valuable knowledge and, in some cases, preventing the development of other precautionary measures which depend on such knowledge (as in the case of the H5N1 virus). On the other hand, if the risks are high but scientists underestimate them, they might use precautionary measures that do not sufficiently address the risks.
In sum, it seems that for a variety of reasons, scientists would not be able to effectively discharge their responsibilities to address the risks of misuse of their studies by adopting PP. PP is vague and does not single out concrete measures; perhaps most importantly, because scientists would often be unable to assess the risks of misuse in an accurate way, turning to PP would be ineffective.
Risk-benefit analysis
An alternative way for scientists to discharge their responsibilities to address potential misuse of their studies is by adopting an RBA; the outcome of the analysis determines what they ought to do (Resnik, 2013; Selgelid, 2007). An RBA seems, overall, a more reasonable way for scientists to discharge their duties. Unlike PP, RBAs can generate more specific guidance, and they seem more plausible because they also consider the benefits of actions, which PP ignores. Additionally, RBAs are clearer about the significance of evidence about risks and benefits and their magnitudes: possible magnitudes should be weighted by the probabilities of their materializing, where those probabilities depend on the evidence.
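To make the weighting idea concrete, a minimal expected-value sketch may help (this is our own illustration of a generic RBA, not a formula proposed by the authors cited above). An RBA of a course of action, such as conducting a study, publishing it in full, or publishing it in redacted form, can be thought of as comparing, for each option, the quantity
\[ V(\text{option}) = \sum_i p_i B_i - \sum_j q_j H_j, \]
where the \(B_i\) are the magnitudes of the possible benefits, the \(H_j\) the magnitudes of the possible harms (including harms from misuse), and \(p_i\), \(q_j\) the probabilities, given the available evidence, that each materializes; the option with the highest value is the one the analysis recommends. As argued below, scientists may be well placed to estimate the \(B_i\) and the \(H_j\), but the \(q_j\) attached to misuse typically lie beyond their epistemic reach.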
Nonetheless, an RBA is not without its own problems. Resnik (2013: 27) opposes using RBA in contexts of dual use research because decisions must sometimes be made when there is insufficient data about the magnitude and probabilities of the relevant risks and benefits. In such circumstances, he argues, an RBA is inappropriate. Gardiner (2006) also notes that decision makers cannot always accurately assess the risks and benefits of technologies that have an impact on the environment, given the complexity of environmental effects, and that many RBA-based environmental policies have led to disastrous effects. Moreover, he notes that delineating risks is difficult, both because of the poverty of reliable data available and because the identification of risks, he claims, is not “purely technical and scientific,” often involving various unsupported assumptions (Gardiner, 2006: 36). However, we shall not delve into these problems, assuming that some of them can be addressed, while others might be unavoidable on any approach to DUR.
But even if an RBA is the way to address possible risks of DUR, a different issue arises: are scientists in a good position to discharge their duties by conducting an RBA themselves? Scientists, as Douglas (2003: 62) suggests, work in close-knit communities, and are usually very knowledgeable about the mutual effects of their own work and that of others. They are also, overall, well situated to assess the potential benefits of their work and can often predict which findings are likely to be further developed to generate benefits for society. They might even have a rough timeline, based on previous cases, of when the benefits will accrue. They are also well situated to assess the magnitude of those benefits. Thus, assessing the magnitude and probability of the benefits of research can be left to scientists.
But what about the risks of misuse? As suggested earlier, scientists’ ability to assess risks of misuse is rather limited. As Evans et al. suggest (2022: 73): “some of these risks relate to national security issues that fall outside the scientist’s sphere of expertise, and accurate risk assessment sometimes requires access to classified information that is unavailable to ordinary scientists.” To clarify, scientists could potentially assess the magnitude of the risks of misuse; they could draw on data that estimates the extent of the harms of a particular pathogen, say an enhanced H5N1. This would not be an easy task, but scientists seem well placed to make such an assessment. Yet, scientists are not well placed to assess the probabilities of misuse. Who is likely to misuse their findings? This is a question to which life scientists are not likely to have an answer. They are also not likely to have accurate knowledge of who possesses the essential capacities to misuse the findings. More importantly, they would not be able to know whether a malevolent agent has any serious intentions to abuse the findings. Such information is not usually publicly available and is likely to be classified (Evans et al., 2022). But an RBA requires an assessment not only of magnitudes of possible risks, but also of the probabilities of such possibilities.
Scientists might use publicly available sources about rogue agents to evaluate the relevant probabilities, but this will not allow them to form accurate assessments of the likelihood of the potential harms. Accordingly, scientists would not be able to effectively discharge their duties. Under these conditions, pursuing an RBA is likely to generate deeply flawed assessments. Without the ability to generate a rigorous RBA, scientists are likely to produce assessments that either underestimate or overestimate the risk probabilities. If they underestimate them, they will go forward with studies without the more secure conditions those studies in fact require. If they overestimate them, they will resort to self-censorship, delay, or other overly restrictive measures that harm the accumulation of valuable knowledge.
We thus conclude that scientists should not undertake an RBA by themselves to discharge their responsibilities. Doing so would not help them achieve the goal of RBAs: identifying courses of action that minimize expected risks and maximize expected benefits. Indeed, if they conduct an RBA on their own, there is a danger that the risks of misuse would not be addressed sufficiently. In the next section, we suggest that given scientists’ inability to assess the risks of misuse, the responsibility for such an assessment should rest with a different agent, one that is in a better position to accurately assess them.
Sharing responsibilities
Our discussion suggests that life scientists, while they are well-placed to foresee what kinds of risks and benefits might emerge from their DUR, are not well-placed to weigh these two kinds of effects against each other, because they are unable to assess the probabilities of possible misuse. Thus, while they ought to consider both kinds of possible effects of DUR, they are not in a position to effectively do so on their own. Rather, they should discharge their duties by participating in, contributing to, and abiding by, a process of decision-making based on a shared analysis of risks and benefits, which might sometimes involve submitting their studies, and their assessment of possible risks and benefits, to further review by an institutional agent that can more effectively assess probabilities of misuse (Kolstoe, 2021).
To be sure, there are existing, indeed quite complicated, arrangements that assess, oversee, regulate, and approve scientific research. These arrangements operate at the global, state, and institutional level. A useful way to think about them is through a distinction drawn by Kolstoe and Pugh (2023) between research integrity, research ethics, and research governance. Applying this distinction could help us see more clearly how responsibilities for addressing the risks of DUR should be shared.
Research integrity refers to moral and epistemic commitments of researchers, succinctly labeled “the virtuous researcher” (Kolstoe and Pugh, 2023: 5). Universities devote educational resources to instilling in researchers a commitment to rigor, honesty, and other virtues and responsibilities, including those that relate to research that has the potential to be misused. Scientific societies such as the American Society for Microbiology (2021) have included in their codes of conduct a statement about the commitment of their members to warn of potential misuse of scientific findings. The scientific community has been devoting much attention to creating a “culture of responsibility” regarding the risks of DUR and the responsibilities of scientists to reflect on the potential that their work could be abused to cause harm (National Science Advisory Board for Biosecurity, 2011). Importantly, our argument suggests that these efforts are justified but need to be limited: educational and awareness-raising programs ought to advise scientists to assess whether their work has dual use potential and to alert others if it does, but scientists should not decide on their own how to proceed, since they lack crucial information about the likelihood of misuse. The virtuous scientist, then, is one who reflects on her work and, if risks of misuse are apparent, submits it for further assessment to an ethics committee that has the resources required to generate a comprehensive assessment.
As for research governance and research ethics, though they could at times overlap, Kolstoe and Pugh rightly distinguish between them. Research ethics, they suggest, addresses the question “of which moral principles a researcher with integrity must have a commitment towards” (Kolstoe and Pugh, 2023: 6). Research ethics addresses normative questions about the permissibility of a particular study or of a general field, as is commonly done in research institutions, hospitals, and pharmaceutical companies, and also at the level of global organizations such as the WHO (Kolstoe and Pugh, 2023: 7; World Health Organization, 2022).
Research governance, in contrast, is an extensive set of formal arrangements that oversee and regulate scientific research. It is composed of laws, policies, protocols, enforcement and assurance mechanisms, and more (Kolstoe and Pugh, 2023: 9–10). In other words, research ethics involves assessments, usually made in ethics committees, that are not necessarily constrained by an existing legal framework (in the DUR case, such a framework did not exist for many years in many countries), while research governance pertains to the formal mechanisms under which a scientific community operates. Applying these concepts to the question at hand brings out gaps that ought to be addressed.
First, regarding research ethics: ethics committees in many institutions participate in reviewing research, including DUR. Their discussions are important for two reasons: (1) they inform policy development at the institutional level; and (2) they provide crucial information about the permissibility of a particular study as well as guidance on how it should be conducted. These committees are an important part of an overall institutional design aimed at assuring that the research done is ethical. However, they suffer from an important limitation: given their current composition, ethics committees are likely to suffer from the same epistemic limitations from which scientists suffer. In particular, they often do not have access to information about the likelihood of misuse of DUR. It is an open question whether existing ethics committees can be restructured in a way that addresses this problem, or whether a new structure is required. If they are to continue to serve their important role of providing ethics guidance, their membership must include experts who have access to information about the probabilities of misuse. Nonetheless, their participation in elucidating the risks of particular studies or fields of study is important, and can be especially valuable where regulatory mechanisms are lacking.
Research governance of DUR has been developing at least since 2012. Following the H5N1 controversy, countries and institutions around the world have been developing review mechanisms and policies that aim to address the risks of misuse of openly conducted science (Bureau Biosecurity, 2023; Science and Technology Policy Office, 2015). However, these policies suffer from the same shortcoming described above: they do not include provisions that would enable the sharing of classified information about the likelihood of misuse, information needed for generating a rigorous RBA and for implementing measures that would minimize the risks of misuse. For example, in the case of the US, under the most recent policy, “Institutional Review Entities” (IREs) were established to assess the risks of misuse (Science and Technology Policy Office, 2015). 3 However, even IREs suffer from the epistemic limitations noted above, because their composition lacks personnel with access to information about the probabilities of misuse. Thus, in both research ethics and research governance an important gap exists, a gap that must be addressed.
Note that the need to consider risks of misuse of DUR, and the need to involve agents with appropriate knowledge in the discussion, applies at different stages of the research pipeline: from submitting a research proposal to a funding body, through submission to an ethics committee within a research institution to determine whether and how the work should be undertaken, to submission to a journal for publication. We do not suggest that the same kind of oversight is appropriate at all these different stages.
To address the epistemic limitations of ethics committees and of formal review agents such as IREs, Selgelid (2009) and Evans et al. (2022) argue that the state, and specifically its security experts, could and should share the responsibilities for determining how to address biosecurity risks of misuse. The reason for involving security experts is that they are more likely to have access to the classified information needed for a rigorous RBA, which can then be used to instruct scientists how to proceed. 4 Accordingly, whether existing ethics committees or IREs are employed, if they are to generate reliable assessments of the risks of misuse, they need to include those who have access to information about the likelihood of misuse.
Although this proposal seems to address the problem of leaving scientists to assess their own studies, it is not without problems. As Selgelid argues (2009: 722), involving security experts might lead to decisions that overestimate the biosecurity risks, as these experts are likely to be biased given their expertise. Moreover, he suggests that security experts are not likely to be well placed to appreciate the scientific importance of the studies. Finally, he suggests that involving security experts might raise fears among scientists that government officials’ involvement in assessing academic studies would lead to increased restrictions on academic freedom.
Each of these concerns could potentially be mitigated to some extent. For instance, if the assessment is made by an ethics committee in which scientists and security experts work together, the biases of both sides could offset each other. A committee that involves both kinds of experts, and perhaps additional expertise such as risk communication, could produce an assessment that takes more comprehensive account of the risks and benefits of DUR.
Yet a further worry arises, namely the basis on which members of such a committee should accept the assessment offered by the security experts. As suggested, scientists do not have access to information about probabilities of misuse, which certain security experts do have. Yet security experts cannot usually share the information on the basis of which they arrived at their assessment of the probabilities of misuse. Under such conditions, the committee would have no way to validate that the measures it is asked to recommend are justified. One could suggest that the committee members who represent the scientific community should trust the security experts and abide by their recommendations. Yet why should they trust security experts’ advice, given the fears raised by Selgelid (2009)?
One way to resolve this concern is by establishing a mechanism in which academic leaders or other trusted parties have access to the information used for the assessments, and they in turn convey their agreement or disagreement to the committee. In other words, if the government wishes to maintain scientists’ trust in its recommendations, it ought to allow for more transparency with selected academic representatives who can serve as trusted parties connecting the various experts. If such a mechanism can be worked out, the worry that scientists will not trust the security experts’ analysis can be addressed to some extent.
Conclusion
Dual use research presents an ethical challenge to the view that scientists should pursue studies based on their scientific importance while disregarding any risks that might arise from their research. We have argued here that, notwithstanding scientific importance, scientists cannot ignore the risks of misuse if they can reasonably foresee them. However, the fact that they cannot ignore these risks does not settle the question of what they ought to do about them. We have suggested that mechanisms should be put in place whereby scientists submit their studies for a risk-benefit assessment undertaken by a committee composed of at least two kinds of experts: (1) scientists who are well positioned to appreciate the benefits of the research and the kinds of risks the research might involve; and (2) security experts who can provide an assessment of the probability of misuse. These two types of expertise are required for a rigorous risk assessment. We have also suggested that, because the risk assessment relies on classified information, a mechanism should be created by which scientists can trust that the security experts’ assessment is unbiased. Finally, scientists ought to implement the measures that the committee recommends, which could range from pursuing the research as planned to banning it. By implementing the recommended measures, scientists would meet their moral responsibilities.
Footnotes
Acknowledgements
We would like to thank the participants in talks given at the BMBF-Summer School on Dual Use and Misuse of Research Results, Sapir College Conference, Bar Ilan University Department of Philosophy seminar and Science in the Risk Society International Conference, Jerusalem for their helpful comments.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Ethical approval
Not applicable.
