Abstract
In 2011, for the first time ever, two scientific journals were asked not to publish research papers in full detail. The research in question was on the H5N1 influenza virus (bird flu), and the concern was that the expected public health benefits of disseminating the findings did not outweigh the potential harm should the knowledge be misused for malicious purposes. This constraint raises important ethical concerns as it collides with scientific freedom and openness. In this article, we argue that constraining the dissemination of dual-use knowledge can in certain cases be justified because, for example: scientists have a responsibility for potentially harmful consequences of their research; the public need not always know of all scientific discoveries; uncertainty about the risks of harm may warrant precaution; and expected benefits do not always outweigh potential harm. However, the constraints in question are not absolute but can be both temporary and partial. We propose three core aspects for an ethics of dual-use dissemination: dual-use awareness, precaution, and acknowledgment of conflicting values. Additionally, to help scientists understand when constraints on dissemination may be justified we suggest three corresponding conditions that prompt scientists to recognize dual-use material or research, consider the potential impact of dual-use knowledge dissemination, and acknowledge and respond to external dissemination concerns.
Introduction
In late 2011, unprecedentedly, two scientific journals (Nature and Science) were asked by the US National Science Advisory Board for Biosecurity (NSABB) to delete details regarding both scientific methodology and specific viral mutations before publishing research articles on the H5N1 influenza virus (causing bird flu) (AAAS, 2011; Ledford, 2012). 1 The manuscript submitted to Science by a Dutch research group describes the transmission of the highly pathogenic virus via aerosols between ferrets 2 (Herfst et al., 2012a). The article caused controversy as the results suggested that human-to-human transmission of the H5N1 virus is likewise possible (Enserink, 2011a; Fouchier et al., 2012), and that such a development of the H5N1 virus could occur more easily than previously thought. The research aims to help the influenza oversight community recognize early when a virus may become a public health threat, thereby possibly preventing a pandemic (Erasmus MC, 2012). However, the research simultaneously raises serious concern that the knowledge might be misused nefariously and cause a high-mortality pandemic exceeding the impact of the 1918 “Spanish flu” (Berns et al., 2012). The scientists behind both manuscripts agreed to postpone publishing their findings in full. A heated debate followed concerning the permissibility of constraining dissemination of scientific knowledge (Enserink, 2011b). On 29–30 March 2012, the NSABB recommended in favor of publishing the revised articles, one in full in Nature 3 and the other after further scientific clarifications were made in the manuscript (NIH, 2012; NSABB, 2012). Despite the NSABB decision in favor of publication, the H5N1 research findings by the Dutch research group were additionally held back in the Netherlands (where the research was carried out) owing to European Union (EU) legislation that required export control permits for dual-use materials and information.
The scientists disagreed and maintained that export control laws were not applicable to their case (Butler, 2012). In late April 2012 the research group was given an export license, allowing them to send a revised manuscript to Science (Enserink, 2012). The article was finally published on 22 June 2012 (Herfst et al., 2012b).
The H5N1 influenza research provides an excellent example of a dual-use dilemma in the life sciences as it has the potential to be highly beneficial to public health but also risks being detrimental to it through deliberate misuse. The ethical question emerges whether the life sciences’ fundamental values of freedom and openness need to be challenged when results could potentially be too harmful to disseminate freely. Important values are thus at stake. However, despite the inherent ethical nature of this dual-use dilemma, it has traditionally been neglected in bioethics (Selgelid, 2010). Research ethics, which is the field within bioethics where this dilemma would belong, has focused more on how to conduct scientific research ethically than on the ethics of producing and/or disseminating scientific knowledge (Douglas and Savulescu, 2010). This article aims to help fill this gap by exploring some ethical dimensions associated with the dissemination 4 of dual-use knowledge, using the H5N1 influenza research to highlight some important considerations. Firstly, we will argue that it may be justified to constrain dissemination of dangerous knowledge. Such constraints are in this article considered in the context of scientific responsibility, i.e. not in terms of externally imposed constraints. Secondly, we propose three “constraining conditions” intended to express the dual-use responsibility embedded in an ethics of dissemination. These conditions are meant to assist scientists in anticipating when constraining the dissemination of scientific knowledge may be appropriate.
Arguments for constraining dissemination of dual-use knowledge
Attempts to constrain dissemination of scientific findings typically involve a compromise of important scientific values that are vital to the process of generating knowledge and making advancements in both health and security. These values include scientific freedom, openness, reproducibility, and independent verification – values that are instrumental to other ends, whereas knowledge itself may be considered to also have a final value. However, from a security perspective, some research findings are considered too dangerous to disseminate in the open literature, as this could alert terrorists, individuals, or states to new ways of producing biological weapons, as well as providing the instructions for how to do it (Selgelid, 2009). Or, as bioethicist Arthur Caplan has reportedly put it bluntly: “We have to get away from the ethos that knowledge is good, knowledge should be publicly available, that information will liberate us...Information will kill us in the techno-terrorist age…” (in Atlas, 2002).
Thus, without disputing the importance of the scientific values mentioned above, we propose that arguments can be made that might justify constraining the dissemination of dual-use knowledge in certain situations.
Compromising scientific values of knowledge, freedom, and openness
Values associated with the scientific tradition to freely disseminate knowledge compete with other societal values. Although in this article we do not consider the moral status of the knowledge itself, one thing should be mentioned about the value of knowledge in relation to its dissemination. Knowledge can be considered to have an intrinsic value, i.e. a value independent of its valuable uses. However, this does not entail that its potential usage can be ignored (Douglas and Savulescu, 2010). To consider knowledge only in intrinsic terms risks reducing scientists to mere producers of knowledge, without responsibility for its potential (mis)uses.
Few believe that scientists should be given complete autonomy in their work or that they are free from responsibility to consider the potential consequences of that work beyond the realm of science. The life sciences are, for example, subjected to research ethical constraints as illustrated by the numerous ethical guidelines concerning, for example, animal welfare, human subject research, recombinant DNA, and other professional norms (Kass, 2009; Zilinskas and Tucker, 2002). Whilst these constraints do not, per se, justify constraints on dissemination, they do illustrate that values other than academic freedom and openness have been considered important to protect and that constraints impinging upon the latter have been endorsed by the scientific community before. Under circumstances where the consequences of disseminating knowledge are potentially devastating, the values of human health and security may justify constraints. We suggest that the potential security implications of disseminating scientific knowledge are as important to consider as are values of academic freedom and openness. Health and security are basic human needs that must always be taken into careful consideration. Academic freedom and openness are fundamental requirements for the possibility of scientific progress, and are thus also of great importance. It is consequently paramount to find a sound balance between these values.
The reasoning “if I do not publish, someone else will” is not ethically tenable
An oft-discussed objection to constraints is that not publishing results would only delay the spread of scientific information, because the information can be rediscovered at any time by individuals not adhering to the regulatory regime (Zilinskas and Tucker, 2002). The information may subsequently be published elsewhere or be made public through other, more informal, channels. In other words: if I do not publish, someone else will disseminate the information. True or not, this perception rests on a moral stance that is incompatible with individual responsible research conduct; even if publication by me would not have a significant impact on when the public gets to know, I have a responsibility as a scientist not to be the one releasing the information. To do wrong could corrupt me even if no bad consequences followed from my publication. Accepting such an excuse as valid for anyone in the same position would certainly lead to a rapid spread of information in every case, which renders this action-guiding principle untenable.
The public need not always know
Suggested constraints implying that descriptions of materials and/or methods should be omitted from publications have been heavily criticized for compromising the essence of the scientific process as well as the value of knowledge (Atlas, 2003). One could argue that the possibility of omitting information may invite breaches of research ethical norms (such as not to cheat) and lead to abuses and research errors. The position is this: if research is to be published, then the information necessary to verify and reproduce the experiment should be intact, otherwise it should not be published at all. Although we agree with this position as a ground rule, we believe that there may be exceptional circumstances where publishing with omissions, excluding access by the general public, is a reasonable option. In these cases, information can still be conveyed that the research exists and can be retrieved on a need to know basis. As the case of the H5N1 influenza virus study reveals, arguably many scientists within the influenza community need to know the details of the H5N1 study. Demands have consequently been raised for a written plan to ensure that the omitted information will be provided to those scientists (AAAS, 2011). Such a plan is essential, in cases such as this, and must include strategies on who will decide who is an eligible recipient and by what standards, as well as on how the omitted knowledge is to be obtained securely. Cooperation between the security and scientific communities in this quest seems necessary.
Limiting access by constraining not only content, but also where and how one publishes may also be considered. Publishing research findings in specialized scientific journals can often be considered of lesser concern than publishing in a popular journal. For example, an article on the mousepox virus, considered of dual-use concern, was first published in Science, but then also in the New Scientist with the title: “Disaster in the making – an engineered mouse virus leaves us one step away from the ultimate bioweapon” (Nowak, 2001). One reason to publish was reportedly that the authors wanted “to warn the general population that this potentially dangerous technology is available” (Rappert et al., 2006: 18). This example invites consideration of whether the general public always needs to know (although the popular press article may not entail all the specifics enabling research replication). Access may be open and free, but drawing attention to the potential risks associated with this research in such an explicit manner may be unnecessary and even irresponsible.
Scientific research may be hampered and slowed down
A central controversy concerns fears that constraints on knowledge dissemination will stifle the scientific process and hamper beneficial advancements in health and security. This is a fine balance to strike, as scientists, on the one hand, may be deterred from pursuing beneficial research not only because of perceived risks of misuse but also because their results may not be published. On the other hand, where potential consequences are perceived as too dire to ignore, the uncertain outcome may demand precaution and restraint, supporting the conclusion that research sometimes should not be disseminated even though science may be hampered (Kuhlau et al., 2011).
A cautious approach was taken when deciding to postpone publishing the H5N1 study in full, but also in early 2012 when 39 leading influenza scientists agreed to a voluntary pause of 60 days on any research involving the viruses to “provide time for discussion” (Malakoff and Enserink, 2012). A forum for discussing the research was provided by the World Health Organization in mid-February 2012, when a meeting was convened 5 where, among others, the leading researchers of the two studies, scientific journals interested in publishing the research, funders of the research, and bioethicists were represented. A consensus was reached that delayed publication of the full research would have more public health benefit than publishing it in part and that the temporary moratorium on research with new laboratory-modified viruses should be extended (WHO, 2012a).
The H5N1 influenza research highlights the uncertainty concerning both the magnitude of the public health consequences of its dissemination (should it be misused) and the actual probability of its misuse. In situations where great uncertainty prevails about the outcome, but one of the possible outcomes may have devastating consequences, precaution may be warranted, which occasionally would justify constraining dissemination of research (Kuhlau et al., 2011). A principle of precaution urges reflection on the possibility of disaster beforehand and to proceed with caution in light of this awareness (Munthe, 2011). Precaution would be warranted as dissemination of scientific research results, despite good intentions of scientists to improve public health, may be misdirected and have the opposite effect of worsening it. To be precautious implies taking measures against such potential threats of harm.
Constraints need not, however, seriously or permanently damage scientific research. As the H5N1 influenza controversy shows, the moratorium constituted a precautionary measure and a constraint that temporarily slowed the scientific process down. This pause provided time for discussions between relevant security, science, public health, and ethics experts, to try to overcome and better understand prevailing uncertainties. Another outcome of the WHO meeting in 2012 was the recognition that focused communications are needed to reduce anxiety among the public, increase awareness of why the research is significant, provide reassurance that the work can be done safely and securely, and explain why the details need to be published (Herfst et al., 2012a). This illustrates the importance of communication on behalf of the scientific community as well as the need for dialogue with other, relevant parties. Ideally, decisions on whether, when or how to publish are based on complete and reliable information that has been carefully weighed. Such decisions can only be made when all parties engage in open communication where multiple arguments are allowed and stalemates owing to polarized positions can be avoided.
Potential harm may outweigh expected benefits
As our discussion suggests, compromises may be justified when scientific knowledge threatens other important values, such as the right to health and security. To protect these values the professional responsibility to do no harm may supersede the responsibility to do good. This implies that scientists occasionally have to acknowledge that harm prevention may be placed ahead of beneficial expectations and their own academic interests and careers.
The controversy surrounding the H5N1 studies illustrates two important points: that the harm-benefit equilibrium is difficult to assess and that it is generally uncertain who has the “burden of proof.” Two main arguments for publishing the findings are that development of new treatments or vaccines against similar strains will be facilitated, and that it will help public health scientists to monitor changes of H5N1 occurring in nature and take active control measures (Enserink, 2011c; Selgelid, 2011). Counterarguments hold that it is unclear whether vaccine protection against the engineered strain would correlate with vaccine protection against a future strain evolved in nature (Inglesby et al., 2012). Also, even if there is early warning that an H5N1 virus is emerging in nature, it may be of little use because containing new pandemic strains of influenza has in the past proved difficult (Selgelid, 2011). There is also general concern that novel biological weapons can be developed rapidly, whereas countermeasures could take several years and vast resources to develop (Zilinskas and Tucker, 2002). Providing the information for reproduction at an early stage may therefore be counterproductive. For example, widely publishing novel information about a virus against which we as yet have no vaccine exposes societal vulnerabilities, which may inspire a presumptive abuser.
In this context, the burden of proof is usually placed upon the security community to demonstrate the existence of a threat of harmful misuse and that constraints will effectively reduce that threat. However, if applying the same logic, the scientific community would likewise have to establish scientifically that their work will decisively not contribute to harmful misuse and demonstrate that constraints will unduly hamper research (Kuhlau et al., 2011). These demands seem equally unreasonable. Rather, because a clear causal link between the act (peacefully intended research) and the misapplication for harmful purposes is lacking, the burden of assessing probabilities should be shared. Owing to uncertainty about risks based on incomplete information, scientists cannot (and should not) always make decisions to disseminate knowledge independently. To govern risks under uncertainty thus requires deliberation between scientists and security experts (as well as other stakeholders).
A factor contributing to scientists’ difficulty in fully appreciating the risks at hand is that recommendations to the scientific community concerning publication of dual-use research may sometimes be based on classified information (Resnik, 2010; Selgelid, 2007). Although such lack of transparency certainly is problematic, it may be something we will have to accept under extreme circumstances. This may possibly be overcome by confidentiality agreements allowing key scientists to share the classified information. In our view, delaying publication in order to weigh all information seems reasonable in these cases.
An ethics of dual-use dissemination
Claims were made in the H5N1 controversy that the research should not have been conducted in the first place (Enserink, 2011a). Reviews of experiments of dual-use concern before they are funded and conducted are indeed imperative, and such mechanisms are also likely to reduce the need to constrain dissemination. However, already in 2004, in a widely cited report, criteria were suggested for judging whether research is to be considered of dual-use concern. The H5N1 studies qualify as research of concern by meeting two of the criteria – namely, by increasing the transmissibility of the virus and altering its host range (from bird to mammals) (NRC, 2004). Despite the fact that the H5N1 research may be deemed of dual-use concern, it was funded and conducted. Although one may argue that the reaction against the H5N1 research came too late in the process, we would argue that the possibility to constrain dissemination constitutes an important “final gatekeeper” in the research process.
An ethics of dual-use dissemination is important as life scientists may be considered to have a responsibility for what they disseminate in terms of potential harmful misuses, and scientists would need guidance to be able to take such responsibility. A responsibility not to disseminate potentially harmful research exists because scientists should not only adhere to the principle of beneficence (to do good) but also to that of non-maleficence (do no harm and do not impose risks of harm) (Kuhlau et al., 2008). The responsibility to occasionally constrain dissemination of research results can be considered both a responsibility of the individual scientist and a responsibility one shares as a member of a research group or the scientific community at large (Nordgren, 2001). Although a shared responsibility exists not to cause harm, within this the individual responsibility may vary depending on, for example, research domain and professional position.
Integrating aspects of dual-use dissemination would, in our view, constitute an important contribution to ethical research conduct in the life sciences. Drawing on our discussion, three aspects could be included in such ethics: (i) dual-use awareness, enabling identification of a dual-use dilemma; (ii) precaution, enabling reflection and cautious behavior in situations where dissemination of knowledge may pose serious risks of harmful outcomes; and (iii) acknowledging conflicting values, prompting a recognition that potential harm in certain research circumstances may outweigh expected benefits. From these aspects three constraining conditions may be construed:
if you are working on materials or a type of research that poses a significant risk of contributing to detrimental destruction in case of misuse, and
if this potentially dangerous knowledge would be disseminated for the first time or, if already published, its further dissemination could contribute to increasing the risk of harmful consequences, and
if the knowledge is considered by security authorities to pose a threat of harm that possibly outweighs the beneficial expectations,
then the dissemination of your findings should be considered as subject to constraints, e.g. in terms of whether, what, when, or how to disseminate.
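The conjunctive structure of these conditions can be read as a simple decision rule: all three must hold before constraints come into consideration. The following is a purely illustrative sketch of that logic (the predicate names are our own shorthand, not part of any formal framework or existing governance tool):

```python
from dataclasses import dataclass

@dataclass
class DisseminationCase:
    # Condition (i): the research poses a significant risk of
    # contributing to destruction in case of misuse.
    significant_misuse_risk: bool
    # Condition (ii): the knowledge would be disseminated for the
    # first time, or further dissemination would increase the risk
    # of harmful consequences.
    novel_or_risk_increasing: bool
    # Condition (iii): security authorities consider the threat of
    # harm possibly to outweigh the expected benefits.
    security_concern_raised: bool

def constraints_warranted(case: DisseminationCase) -> bool:
    """Constraints on dissemination are to be considered only when
    all three conditions hold jointly."""
    return (case.significant_misuse_risk
            and case.novel_or_risk_increasing
            and case.security_concern_raised)

# A case meeting only the first two conditions does not, on this
# reading, trigger consideration of constraints.
print(constraints_warranted(DisseminationCase(True, True, False)))  # False
```

The sketch makes explicit that the conditions are jointly necessary: dropping any one of them removes the presumption in favor of constraints, which is why all three aspects (awareness, precaution, and acknowledgment of conflicting values) must be in place.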
The first condition concerns the nature of one’s research and whether there are reasonable grounds to suspect that there is a risk for misuse. To even begin to recognize potentially dangerous knowledge, dual-use awareness is a prerequisite. Producing lists covering dual-use sensitive diseases and agents as well as research of concern constitutes one option for facilitating awareness. 6 There are, however, limitations in any agent-specific list. Suggestions have therefore been made to broaden the awareness of threats beyond the classical agents and to consider also the intrinsic properties rendering them a threat, and how these properties have been or could be manipulated by evolving technologies (NRC, 2006). Another indicator for when this condition may apply presents itself when working in novel and rapidly growing research fields where the time from discovery to application may be short and its application uncertain (Panel on Scientific Communication and National Security, National Academy of Sciences, National Academy of Engineering, Institute of Medicine, 1982). Working on the H5N1 influenza virus and exploring ways to increase its transmissibility would constitute such uncertain application. Other examples of such research in the life sciences would include some RNA interference and developments in synthetic biology. These examples illustrate possible dual-use indicators for our suggested condition, yet do not claim to be exhaustive.
The second condition focuses on the impact of the actual dissemination and concerns the novelty of the information when presented to the public. Constraints may be considered justified if potentially dangerous knowledge would be disseminated for the first time or, if already published, its further dissemination contributes to increasing the risk of misuse. If a version of the information is already publicly available elsewhere and the new piece of information is of lesser or equal concern, the probability that the risk of harmful consequences is increased by publication is lowered. For example, the claim was made, by a lead scientist in the H5N1 influenza study submitted to Nature, that redacting the articles would not eliminate the possibility of replication of the experiments because there is already enough information publicly available to allow someone to make the virus (Kawaoka, 2012). Similarly, concerning the study submitted to Science, scientists point out that the techniques they used to create the airborne H5N1 virus are not new and can be found in virology textbooks, and that the methods for creating similar viruses have already been published widely (Herfst et al., 2012a). Thus, according to this condition, the mere availability of information does not necessarily justify its further diffusion. The condition embodies a cautious approach where the great uncertainty of the risk of harmful consequences of dissemination is central.
The third condition, unlike the other two, implies that scientists are influenced by external claims and thus cannot respond to dual-use concerns entirely independently. Because of conflicting values, and the fact that these may not always be recognized and/or acknowledged by scientists, it is important that discussion and dialogue are continuously ongoing between scientists, public health authorities, ethicists, security experts, and policy-makers concerning the expected potential harms and benefits of disseminating certain dual-use knowledge. Scientists are not in the best position to make those decisions independently as they cannot (and, in our view, should not) make complete risk assessments. Constraints may therefore be expected to be permissible when security experts express serious concern about harmful consequences, should certain dual-use knowledge be disseminated. It lies in the interest of the scientific community to acknowledge the concerns and to meet and contemplate them open-mindedly by communicating with the security community and other relevant stakeholders.
Concluding remarks
The heated discussions surrounding the controversial publication of the H5N1 influenza virus research findings have accentuated the need for more bioethical analysis of the ethics of disseminating dual-use knowledge. In this article we have argued that constraining the dissemination of dual-use knowledge in certain circumstances is justified, e.g. when great uncertainty prevails about potential risks of serious harmful consequences, and when potential harm may outweigh the expected benefits of dissemination. We have also proposed that an ethics of dual-use dissemination should include dual-use awareness, precaution, and acknowledgment of conflicting values. These aspects can also be construed as three “constraining conditions” aimed to guide scientists in anticipating when constraints may be appropriate. These conditions outline a responsibility to recognize dual-use material and research, consider the potential impact of dual-use knowledge dissemination, and acknowledge and respond to dissemination concerns posed by the security community.
Considering the proceedings surrounding the H5N1 research controversy, some points may be highlighted in relation to our proposed ethics of dissemination. Arguably, a responsibility to adhere to our suggested conditions might have led the scientists involved in the study, and the scientific community, to act differently. Some might claim that such responsibility would have averted the H5N1 controversy altogether, as the dual-use potential of the research would have been assessed long before the final stage of publication. Nevertheless, this was not the case and may not always be the case in the future. On the level of the scientific community, if the community desires to govern dual-use research without external restrictions, structural opportunities to cultivate a dissemination responsibility need to be provided to scientists. This can be accomplished through dual-use education and awareness-raising as well as by facilitating deliberation among scientists. In deliberations, scientists would be invited to reflect on dual-use matters in terms of precaution and the conflicting values at stake. On the level of the individual scientist, assuming a dissemination responsibility might have affected whether, what, how, and when scientists communicated about the H5N1 research findings to the public. Arguably, the research would not have been discussed by Ron Fouchier (a leading scientist of the project) at a conference in Malta before publication, and he would not have communicated risks to the public in statements such as: [we have created] “probably one of the most dangerous viruses you can make” (Enserink, 2011a). Had communication on the H5N1 research been more carefully weighed, a heightened awareness of dual-use issues, acceptance of precaution, and acknowledgment of conflicting values would arguably have produced less controversy and fewer misconceptions.
By assuming a dissemination responsibility, dual-use deliberation might have been initiated at an earlier stage. The constraining conditions would have helped the scientists involved in the H5N1 studies to anticipate constraints on the dissemination of their manuscripts, and they could therefore also have engaged in deliberation with other stakeholders earlier. It is, however, noteworthy that deliberation on the H5N1 research did occur at the meeting convened by the WHO. Even more importantly, perhaps, future meetings are planned that are expected to include broader dual-use issues and other stakeholders (WHO, 2012a; 2012b). This anticipated dual-use communication demonstrates an effort to formalize dual-use conversation and to develop a global governance process where potential risks and expected benefits of certain research can be addressed by various stakeholders.
The possibility to constrain certain research, on our account, need not imply full censorship. Rather, constraints are more likely to constitute temporary and/or partial measures that set reasonable limits on the dissemination of dual-use knowledge. In this sense constraints may be perceived as measures that give rise to procedures allowing time for science and security deliberation, thereby increasing the likelihood of well-founded decisions on what is disseminated (or not).
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
