Abstract
Dual-use research of concern (DURC) is research that can benefit humanity but also poses substantial potential risk. There is scholarly agreement that each stakeholder has ethical responsibilities vis-à-vis DURC, from researchers to the broader international community, and that ethical DURC requires that each stakeholder fulfill those responsibilities. In this paper, we argue that few if any stakeholders are presently fulfilling their ethical responsibilities; therefore, present-day DURC may be unethical. This conclusion does not rest on the assumption that any particular stakeholder is acting with malicious intent. Rather, we contend that the unprecedented pace of scientific and technological advancement has significantly outstripped the development of corresponding ethical oversight mechanisms, regulatory frameworks, and systems of public accountability. As a result, the institutional and normative infrastructure required to ensure that DURC is conducted in a responsible and ethically defensible manner has failed to keep pace.
Introduction
There is a great deal of research occurring today across multiple rapidly advancing fields, including artificial intelligence, synthetic biology, virology, gene-editing, and neurotechnology. Much of this research holds immense promise for improving human life, but it also carries the potential for significant harm to the public, which is why such research is called “dual use research of concern” or DURC (Bobier and Hurst, 2024; National Academies of Sciences, Engineering, and Medicine, 2017; World Health Organization, 2022). Examples include reverse engineering the smallpox or avian influenza viruses to study their pathogenesis (Rosengard et al., 2002; Sutton et al., 2014), using artificial intelligence to generate de novo toxins (Urbina et al., 2022), and creating autonomous weapons systems (Kim and Joe, 2025; Philippi, 2025). As these examples suggest, the research can benefit humanity (e.g. helping researchers design novel vaccines, anticipate de novo toxins, or minimize human casualties in war) but poses substantial potential risk: the research, if misapplied, can result in serious danger to researchers, their local community, nation-states, and the world writ large.
The ethical governance of DURC has been the subject of sustained scholarly attention for some time (Resnik, 2024). Within this discourse, there is broad consensus that ethical oversight of DURC requires a multi-tiered framework in which responsibilities are distributed across a range of actors or stakeholders (e.g. Bloomfield et al., 2024; Grinbaum and Adomaitis, 2024; Miller and Selgelid, 2007; Pannu et al., 2024; Selgelid, 2009; Smith and Sandbrink, 2022). As Miller observes, “The phenomenon of dual use research and technology calls for important ethical decision making by actors (with duties and responsibilities) at many levels” (Miller, 2018: 14). Similarly, Lev and Keren (2025) maintain that ethical DURC governance demands “responsibilities are divided” (p. 267), and Salloch (2018) emphasizes that the duty to mitigate risk is “spread between a variety of individual and collective actors” who hold influence over DURC (p. 2). Although debates persist regarding the specific content and scope of each actor’s duties, it is widely recognized that researchers, funders, employers, publishers, professional societies, national governments, and the international community all bear ethical responsibilities in relation to the conduct, dissemination, and regulation of DURC.
Despite growing literature on what counts as DURC and the ethical responsibilities of various actors, relatively little attention has been given to the ethical status of current DURC practices. This represents a notable gap in the discourse. While much has been written about what ethical DURC ought to entail or what changes must be made to render DURC’s risk profile acceptable, there is a striking absence of sustained analysis assessing whether present-day DURC meets these normative criteria. For example, Kuhlau et al. (2008) assert that researchers bear an ethical obligation to consider the potential for serious negative consequences arising from their work. They view this obligation as integral to ethical DURC, a duty that cannot be easily overridden or dismissed. Yet their discussion does not address whether, in practice, researchers fulfill this obligation, thereby leaving the ethical status of ongoing DURC unclear. Drew and Mueller-Doblies (2017: 5992) argue that a “structured risk assessment process is needed” but presently lacking, which suggests that DURC is being conducted with improper risk mitigation. Miller (2018) notes that scientists are very often untrained in biosecurity, lack expertise for assessing DURC, and face systematic barriers to information integral to assessing risk; yet no mention is made of whether scientists conducting DURC are doing so unethically. Resnik (2010) argues that proposed DURC should be evaluated by ethics review committees with specialized expertise in dual-use issues; however, he acknowledges that such institutional mechanisms are presently lacking and advocates for their establishment, implicitly suggesting that much of today’s DURC may be proceeding without adequate ethical oversight.
This paper addresses this lacuna by arguing that present-day DURC, absent mechanisms and frameworks to mitigate bad effects, may be unethical. The argument does not depend on whether any particular DURC is, in its current form, ethically untenable; on this we remain neutral for the sake of the argument. Further, this conclusion does not rest on the assumption that any particular stakeholder is acting with malicious intent. Rather, we contend that the unprecedented pace of scientific and technological advancement has significantly outstripped the development of corresponding ethical oversight mechanisms, regulatory frameworks, and systems of public accountability, all of which are integral to ethical DURC. As a result, the institutional and normative infrastructure required to ensure that DURC is conducted in a responsible and ethically defensible manner has failed to keep pace.
Ethical DURC
Consider a scientist attempting, in their garage, to develop a simpler method for splitting atoms or to synthesize a novel neurotoxin. The scenario may be fanciful, but it would be deeply troubling if it were to happen, unlike, say, a garage researcher studying the cause of cancer in amphibians or non-dangerous chemical reactions. The concern arises from the fact that, even with the best of intentions, researchers are fallible and accidents happen. In this case, the individual would be engaging in research with the potential to seriously harm or kill large numbers of people. That a single person could engage in research so dangerous to others shows that not all scientific research is ethically benign: research that carries significant risks to others demands heightened ethical scrutiny regarding whether it should proceed and, if so, under what conditions. This suggests not that DURC is intrinsically morally impermissible, but rather that it must be conducted with particular responsibility and oversight.
What is minimally required for DURC to be ethical? This is far from a trivial question, as evidenced by the extensive and expanding body of literature devoted to the subject. Ethical DURC is not merely a matter of avoiding or minimizing harm; rather, it entails doing so in a manner that preserves the epistemic and societal value of scientific inquiry. Achieving this delicate balance between scientific advancement and the mitigation of serious risk requires the implementation of robust safeguards commensurate with the potential harms involved (e.g. Bloomfield et al., 2024; Dubov, 2014; Miller and Selgelid, 2007; Pannu et al., 2024; Resnik, 2021). Given the fallibility of individual researchers and the inherently collective nature of scientific research, ethical responsibility must be distributed across a range of actors: “Contemporary research environments put the different forms of responsibility for preparing and conducting research projects onto many shoulders” (Heinrichs and Ergin Aslan, 2025: 250). While researchers undoubtedly bear a duty to anticipate and mitigate foreseeable risks, so too do employers (e.g. universities, industry), funders, professional associations, governmental bodies, and the international community. That ethical obligations extend across this spectrum of stakeholders is grounded in the severity of the potential harms at stake and the practical limitations and fallibility of each stakeholder. Research on a novel toxin, for example, may carry the risk of catalyzing a global pandemic with catastrophic consequences; risks of this magnitude demand careful scrutiny and accountability from all parties involved in the research.
To help us imagine what ethical DURC might look like in practice, consider the following hypothetical scenario.
Dr. Ramirez is a conscientious virologist-immunologist with a sustained commitment to responsible research practices. For the past three years, they have completed annual training, mandated by their employer, specifically designed to familiarize researchers with current DURC guidelines and ethical considerations. Interested in reverse-engineering the H5N1 (avian influenza) virus to enhance understanding of vaccination pathways, Dr. Ramirez begins by consulting senior colleagues with expertise in virology and bioethics to evaluate both the scientific feasibility and potential risks associated with their proposed research. Their colleagues affirm that, under specific conditions and safeguards, such research could meet ethical standards. With this preliminary peer validation, Dr. Ramirez develops a detailed research protocol and submits it to the Institutional Biosafety Committee (IBC). The IBC, which includes members with relevant scientific and ethical expertise, conducts a thorough evaluation of the proposal, assessing both its scientific merit and its risk profile. They invite an external subject matter expert to also review the protocol. Upon determining that the proposal satisfies institutional DURC criteria, the IBC grants approval. Dr. Ramirez then applies for funding from a federal government agency with a clearly articulated DURC policy. The agency subjects their proposal to an additional layer of review to ensure compliance with national policies governing DURC, including risk-benefit assessment and biosecurity provisions. Upon receiving the grant, Dr. Ramirez proceeds with the research, adhering rigorously to biosafety and biosecurity protocols established by their employer and professional society—protocols that, notably, exceed the minimum standards set by governmental and international guidelines. Upon completion of their study, Dr. Ramirez prepares a manuscript that carefully discusses both the scientific findings and the potential dual-use implications. They submit it to a leading peer-reviewed journal that enforces a stringent DURC review policy. As part of its editorial process, the journal initiates an additional review of the manuscript, which may involve consultation with relevant governmental agencies and applicable subject matter experts to determine whether publication poses any undue risk.
This vignette illustrates a so-called “Swiss cheese model” of DURC oversight: a comprehensive exercise of due diligence by all relevant stakeholders (Bloomfield et al., 2024; Pannu et al., 2024). Because no single layer of checks is likely to be substantial enough in itself, given the risk, adding layers of checks (additional slices of Swiss cheese) reduces the chance that a risk passes through the holes: ethical DURC requires, in the words of Pannu et al., “proactive processes that engage diverse — and independent — experts” (Pannu et al., 2024: 811). In our example, Dr. Ramirez demonstrates a high level of ethical awareness: they are well-intentioned, anticipate the potential dangers posed by their research, actively consult with colleagues, possess a thorough understanding of DURC policies, undergo regular DURC-specific training, and adhere to relevant guidelines at each stage of the research process. Their employer has established an Institutional Biosafety Committee and mandates compliance with institutional safety and oversight protocols specifically tailored to dual-use risks. The funding agency rigorously evaluates the proposal through a DURC-sensitive review process before awarding support. Their national government has implemented DURC policies that federally funded researchers are required to follow, ensuring alignment with national security and public safety priorities. Likewise, their professional society has issued clear DURC guidance and provides continuing education to its members on emerging dual-use challenges. The journal to which Dr. Ramirez submits their manuscript enforces additional layers of review for DURC-related submissions, including, when necessary, consultation with regulatory authorities and subject matter experts. Finally, the international community has articulated a set of DURC governance principles and frameworks that inform national and institutional policies, further supporting coordinated global oversight.
An important corollary to the view that ethical DURC requires a tiered approach is that ethical DURC requires that all actors uphold their individual responsibilities. The tiered model distributes ethical and regulatory obligations across multiple stakeholders for two reasons: first, no single entity has the capacity, authority, or expertise to identify, mitigate, and manage the potential serious risks to public health associated with DURC on its own; second, each stakeholder is fallible, subject to making cognitive and procedural errors, and each may be subject to certain pressures vis-à-vis DURC (e.g. researchers have strong incentives to publish novel results and secure funding; biotechnology companies have strong incentives to bring products to market; funders have incentives to delegate DURC oversight to others; and so on). However, distributing responsibilities does not dilute accountability. Rather, it creates interdependence: the system’s ethical adequacy is contingent on each actor fulfilling their role. If even one stakeholder fails to uphold their responsibilities (e.g. a funder does not screen for DURC risks, or a publisher fails to flag dangerous content), then significant risks may go unrecognized or unmanaged, undermining the entire framework. The ethical permissibility of DURC cannot be assured unless each tier operates responsibly and in coordination with the others.
To appreciate the necessity of interlocking responsibilities, consider the following hypothetical scenario: Dr. Lin, a virologist at a major research university, designs a study to enhance the transmissibility of a novel avian influenza strain in mammals (in part) by using a widely available generative artificial intelligence model. They are interested in understanding how researchers might anticipate and respond to potential influenza evolution. Aware of the dual-use implications, they submit their proposal to the university’s IBC, highlighting the potential risks and proposing containment measures. However, the IBC, facing time pressures and lacking DURC-specific training, approves the study without consulting any external subject matter experts. Dr. Lin takes the approval as evidence that their research may proceed, perhaps concluding that they had been overthinking the concern, and the findings are later submitted to a prominent journal, which publishes the work without flagging DURC concerns. Months later, the published methods are cited in an online forum, sparking widespread concern that the results may be used by malicious actors to generate a bioweapon: given the published data and the availability of the generative artificial intelligence model, the blueprint is there for all to see.
This vignette draws on two illustrative cases: a 2001 study demonstrating that “standard” and “quite simple” laboratory techniques could be used to bioengineer a far more virulent strain of mousepox (Jackson et al., 2001), and a 2022 study showing that a machine-learning model could be repurposed to design de novo molecules with both biological activity and high toxicity (Urbina et al., 2022). Other worrying publications include research describing methods for genetically modifying H5N1 to enable airborne transmission via respiratory droplets (Russell et al., 2012), the creation of a chimeric coronavirus (Becker et al., 2008), and the open dissemination of a model capable of producing Escherichia coli β-lactamase, potentially facilitating the generation of drug-resistant E. coli (Biswas et al., 2021). Importantly for present purposes, however, this vignette illustrates how a failure by the IBC can compromise the entire ethical framework, despite the researcher’s good intentions and procedural compliance. If the journal had flagged DURC concerns, it might have prevented publication of potentially dangerous findings, but it would not have prevented the creation of the knowledge or the risks associated with it: oversight came after the fact, when fewer options are available to mitigate harm.
In sum, ethical DURC is not merely about distributing responsibilities; it is about ensuring that those responsibilities are actively and effectively carried out by all stakeholders involved. Given DURC’s potential for serious risk to public health, acceptable DURC requires risk mitigation at every level; a single lapse at any level can compromise biosafety or biosecurity with potentially serious consequences. This is not an unreasonably high standard for research; it is the proper response to the potential risk.
DURC reality
The case of Dr. Ramirez is a useful fiction for thinking about ethical DURC. The problem, though, is that it is just that, a fiction: there is compelling reason to think that (i) not all of the actors with DURC responsibilities are fulfilling their responsibilities and (ii) consensus on what constitutes DURC is lacking.
First, we do not believe that all of the actors with DURC responsibilities are fulfilling them. Indeed, there is compelling reason to think most actors are falling short. This is an empirical claim, but the available evidence indicates that, in practice, few if any stakeholder groups are consistently meeting their ethical obligations.
Researchers: There is, in the words of Krauel and Frewer, a “lack of awareness for the topic of DURC within the medical research community” (Krauel and Frewer, 2025: 341). While life scientists are more likely to be aware of DURC, research shows that few are trained in biosafety or think about their own research in such terms (Engel-Glatter and Ienca, 2018; Oladimeji et al., 2019). Junior scientists, especially, remain “unaware of their responsibilities, or even of the subject itself” (Drew and Mueller-Doblies, 2017: 5993). The result is that most researchers fall short of biosafety expectations (Greene et al., 2025).
Journals: Research shows a dearth of DURC guidance across journals and publishers (Resnik et al., 2011), especially in the domain of artificial intelligence and medical research (Hurst and Bobier, 2025), as well as a lack of DURC training for editors of journals (Patrone et al., 2012). “Very few journals,” Casadevall et al. (2015: 2) observe, “have editors who are experienced with DURC-related issues.”
Universities: While universities have committees in place to evaluate, review, approve, and oversee research involving animals and human subjects, very few have committees in place to evaluate potential DURC, and existing committee structures are unsuited to the task (Casadevall et al., 2014; Resnik, 2010). The responsible conduct of research training that universities mandate does not include biosafety or biosecurity training (Resnik, 2024). The result is that few universities adequately train researchers on biosafety (National Academies of Sciences, Engineering, and Medicine, 2017).
Professional societies: There is no professional accrediting organization for laboratory biosafety and biosecurity (Resnik, 2024), and certain disciplines have yet to issue DURC guidance (Bobier et al., 2025). Self-regulation within certain disciplines such as artificial intelligence is non-existent (Grinbaum and Adomaitis, 2024).
Governments: Research shows that only a few governments have DURC policies or recommendations (Magdalena Guraiib et al., 2024); those that do tend to limit them to specific agents or toxins (Resnik, 2024). In the United States, DURC policy tends to apply only to government-funded DURC, not to DURC funded by private businesses (Sharma et al., 2025), and is limited in other ways (Epstein, 2025). Moreover, the Trump Administration has rescinded DURC policy put in place by the Biden Administration (Gillum, 2025). Many of the agents and toxins that fall under DURC classification are themselves highly classified, rendering it difficult, if not impossible, for researchers, universities, and journals to assess DURC (Casadevall et al., 2014). Koblentz and Casagrande conclude that “biorisk management policies in the United States, and elsewhere, are failing to keep up” (Koblentz and Casagrande, 2024: 208).
International Community: Risk mitigation at the international level is a complex collective action problem, and there are clear shortcomings in international governance (Millet, 2017). One example of the international community working together to mitigate the risks of biological weapons is the Biological Weapons Convention (BWC), which is underfunded, understaffed, not signed onto by all countries, and has been unsuccessful in preventing biological weapons development (Johnson, 2024). Another challenge is that many countries lack DURC policies altogether, and among countries that have them, there are clear differences in capacity (Caballero-Anthony et al., 2025).
The preceding survey of published research is limited because there is a dearth of research on the scope of the various stakeholders’ education, training, and preparation vis-à-vis DURC. More research is necessary, but what we have presented is sufficient grounds for believing that the various DURC actors are failing to meet their ethical responsibilities. Indeed, we are not aware of any research that offers a positive assessment of the current state of DURC regulation, oversight, education, and training.
In light of this, it is not surprising that we find pronounced disagreement regarding what constitutes DURC, which is our second point here. As Resnik (2024: 150) writes, “Because good evidence is lacking, experts often disagree significantly on the risks and benefits of dangerous biological research. For example, experts disagreed by several orders of magnitude concerning the biosafety risks of the controversial H5N1 experiments reviewed by NSABB in 2011.” This real-world uncertainty contrasts sharply with our fictional scenario involving Dr. Ramirez, in which there is complete agreement that their research constitutes DURC. In practice, such consensus can be elusive at every level, even though there is agreement that DURC is taking place (Moritz, 2022). This disagreement may stem from a lack of good evidence, as Resnik hypothesizes, or from other factors: insufficient education on DURC risks, researcher bias, the inability to predict malicious use, or the theoretical nature of some threats. For instance, the rise of in silico research, especially with advances in artificial intelligence and machine learning, presents new, hard-to-quantify biosecurity and public health risks, though a full analysis is outside the immediate scope of this paper (see Brenneis, 2025).
Nonetheless, if our analysis is correct that (i) not all of the actors with DURC responsibilities are fulfilling their responsibilities and (ii) consensus on what constitutes DURC is lacking, then DURC is proceeding without sufficient ethical oversight or protective measures. (To be clear, point (ii) is compatible with the widely shared recognition that DURC is taking place; scholars may disagree about whether a specific instance of research counts as DURC, not about whether DURC is occurring.) As a result, the risk of harm to the public may be unacceptably high, undermining the ethical justification for conducting such research.
Further consideration
We do not believe that the claim that present-day DURC may fail to meet ethical standards is especially controversial. On the contrary, it appears to be implicit in much of the existing literature, which frequently emphasizes the barriers to effective DURC risk mitigation (Morse, 2025; Wang and Zhai, 2024). Indeed, the prominence of proposals for what should be implemented in order for DURC to be acceptable suggests an implicit acknowledgment that current DURC practices fall short. We offer four recent examples:
The last line of Lev and Keren’s article states, “By implementing the suggested measures, scientists would thus meet their moral responsibilities” (Lev and Keren, 2025: 282). This clearly entails that scientists engaged in DURC today are not meeting their moral responsibilities until Lev and Keren’s suggested measures are implemented.
Koblentz and Casagrande (2024) criticize United States DURC-related policy, highlighting its limitations. They argue that dual use research oversight should be reformed and that the United States “should adopt a comprehensive, national oversight system for laboratory biosafety, field biosafety, laboratory biosecurity, and dual-use research” (Koblentz and Casagrande, 2024: 206). The lack of appropriate oversight suggests that ongoing DURC constitutes a substantial and unsatisfactorily mitigated risk.
Heinrichs and Ergin Aslan (2025: 262) argue for strengthening the obligation of researchers to be informed of “possible dangers of misuse and common means to mitigate them.” Yet they conclude by acknowledging that “there are significant hurdles in fulfilling such a duty,” which again underscores the point that current DURC practices fall short of ethical standards.
Gillum and colleagues write that the “current fragmented regulatory landscape needs to be refocused to address the . . . risks associated with accidental, inadvertent, and deliberate biological incidents,” and they propose the establishment of an independent government agency, one that does not currently exist but that, they contend, would mitigate risk to an acceptable level (Gillum et al., 2024: 1).
Other examples that discuss limitations of DURC oversight and the need for reform are readily available (e.g. Krauel and Frewer, 2025; Philippi, 2025), but our point is simply this: when scholars advocate for specific reforms to reduce DURC risks to an acceptable level, they implicitly acknowledge that current DURC practices pose an unacceptably high level of risk. But research that imposes potentially high risk on the public without adequate safeguards or justification fails to meet basic ethical standards. Therefore, present-day DURC, as it is commonly practiced, may not be ethically acceptable.
To be clear, we are not claiming that the failure to fulfill ethical responsibilities vis-à-vis DURC is necessarily intentional or malicious; rather, it reflects the reality that the pace of scientific and technological advancement has far outstripped the development of ethical oversight mechanisms, regulatory safeguards, and public accountability systems (Brenneis, 2025; Dubov, 2014; Krauel and Frewer, 2025). In other words, the ethical infrastructure necessary to ensure that DURC is conducted responsibly has failed to keep up. Moreover, we are not arguing that, should each stakeholder meet their ethical responsibilities, the result will be a failsafe system; to assume so would be an instance of the failsafe fallacy: the assumption that a system is inherently safe because it has multiple fail-safes. We do acknowledge that multiple checks create friction that helps prevent the bad effects of DURC, though no system is likely to yield entirely risk-free DURC. Finally, we are not arguing that DURC is inherently immoral or unethical. We agree that DURC can be ethically conducted; we are arguing that the conditions for this are not presently realized.
Significance
The conclusion is significant: DURC, as it is currently practiced, may be scientifically valid, yet it may not be ethically justified. It fails the conditions for ethical permissibility because its governance is fragmented and insufficiently responsive to its risks. To observe that scientific progress has far outstripped ethical and regulatory reflection is to tacitly admit that science has progressed with ethical blinders on. Our argument has been straightforward: what DURC actually looks like falls well short of the ongoing discussion of what ethical DURC should look like. This serves as a clarion call for the research community, professional organizations, publishers, governments, and the international community to ramp up efforts to ensure that all stakeholders are well equipped to fulfill their ethical responsibilities vis-à-vis DURC. This includes better education and training of researchers about DURC, the establishment of DURC-specific oversight with domain-specific expertise at research institutions and organizations, and more expansive and integrated regulations and guidelines at the national, professional, and international levels. We invite further practical action to advance the ethical infrastructure necessary for DURC, so that we can all benefit from it.
Acknowledgements
We would like to thank the reviewers for helpful feedback on this manuscript.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
