Abstract
Dual Use Research of Concern (DURC) has been well analyzed with regard to the life sciences. This article explores younger fields of medical research and their potential for misuse, especially in the military context. The areas of research considered are artificial intelligence, neurotechnology, and neuroenhancement. Each of these areas has brought forward highly promising new research. However, in light of the current armed conflicts in Europe and in the Middle East, there is a need to consider the potential harmful consequences of medical research. Using the example of war, this article demonstrates various instances of how current medical research could be—or is being—misused and discusses possible solutions to the dual use dilemma. The main finding is that a more concerted international effort is needed to prevent the misuse of research. Raising awareness of DURC in the general medical research community is one of the simplest steps that should be undertaken to ensure the non-maleficence of global research. Additionally, considering the potentially far-reaching consequences of DURC, it is time to consider the introduction of a new intergovernmental agency to monitor research and establish safeguards covering all fields of research.
Introduction
As the events in Ukraine and in Israel/Gaza show, war is, bleakly, an ever-present aspect of human life. Most people probably associate war with weapons and soldiers; medical research is not usually an aspect that comes to mind. However, the two topics are not as unrelated as one might think. Artificial intelligence (AI) is one of the more prominent new technologies to be introduced into the medical field. Becoming increasingly present in all our lives, AI is likely to revolutionize medicine. There have been several further breakthroughs in medical research that have made headlines in the past few years, including findings in neurotechnology and neuroenhancement, two of the much-discussed areas of neuroscience (Cornejo-Plaza and Saracini, 2023; Tubig and McCusker, 2020). As fascinating and wonderfully beneficial as these developments often are, they do sometimes, unfortunately, have the potential to be—or actually end up—used in warzones.
The aim of this article is to demonstrate the dual use potential of more recent medical research, particularly in fields outside of the life sciences that have so far received less attention in the context of dual use research. Using the example of the (mis-)use of medical research in war, this article showcases the need for a new, concerted international effort to systematically minimize the harm that can be done with the products of our well-intended research (see Gruszczak and Kaempf, 2024; Schmidt et al., 2020).
Starting with a general introduction to DURC, AI, and neurotechnology, the article brings these key developments in medical science and military research together to focus on a sensitive combination of powerful instruments and substantial risks.
Dual use research of concern
Commonly referred to as Dual Use Research of Concern (abbreviated DURC), this potential for misuse has been a topic of debate in various areas of research. The World Health Organization (WHO) defines DURC as “research that is intended to provide a clear benefit, but which could easily be misapplied to do harm” (WHO, 2020). In medicine, the focus of DURC has mostly been on the life sciences, while a similar concept can be found in institutions assessing the future impact of new technology. The ethical conflict posed by DURC is the question of how we can ensure the non-maleficence of our research without infringing on the right to freedom of research and without inhibiting necessary and beneficial research. There is already extensive research and work being done on the dual use potential of research in the life sciences, such as the danger of working with highly pathogenic agents, including guidelines by the World Health Organization (WHO, 2022). The following paragraphs demonstrate the dual use potential in other areas of medical research that have recently become more relevant, especially regarding the (mis-)use of medical research in war.
AI in medicine
AI refers to “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” (Copeland, 2023). It has now been part of medicine for some years, and most fields of research, diagnostics, and medical care have by now incorporated AI in a supporting function (Hamet and Tremblay, 2017; Lechner, 2023). AI applications in medicine can be divided into two main branches: the physical and the virtual. The physical branch refers to objects such as robots (e.g. care-bots or surgical robots) and medical devices. The virtual branch incorporates machine learning (ML) algorithms for tasks such as pattern finding, classification, and decision support (Hamet and Tremblay, 2017). Machine learning enables a computer to collect and analyze data in order to learn without explicit instruction. One type of ML is deep learning, a technique that imitates the human brain by using multiple layers of processing to enable decision-making (Ahmad et al., 2021). Some of the advantages of AI over humans that are relevant in medicine are the ability to learn and store vast amounts of information, the ability to work without breaks, its accuracy (e.g. the dexterity of surgical robots or the ability to see patterns invisible to the human eye), and the ability to consistently apply the most recent scientific findings. AI is making medical care both more efficient and more accurate in many ways (Pinto-Coelho, 2023). It could even be argued that as AI becomes ever more precise, it may be negligent not to use it.
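To illustrate what the “multiple layers” of deep learning mean in practice, the following minimal sketch passes a handful of invented, pre-normalized patient features through two stacked layers to produce a risk score. All names, weights, and features here are hypothetical stand-ins; a real clinical model would learn its weights from large amounts of training data rather than using random ones.

```python
import numpy as np

# Purely illustrative: a tiny two-layer ("deep") network mapping a vector of
# hypothetical patient features to a risk score between 0 and 1. The random
# weights are stand-ins; a real model would learn them from data.
rng = np.random.default_rng(seed=0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Layer 1: 4 input features -> 8 hidden units; Layer 2: 8 hidden units -> 1 output.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def predict(features):
    hidden = relu(W1 @ features + b1)   # first layer extracts intermediate patterns
    return sigmoid(W2 @ hidden + b2)    # second layer combines them into a score

# Example: four made-up, normalized patient features.
print(predict(np.array([0.2, -1.3, 0.7, 0.05])))
```

Stacking many such layers, each building on the patterns extracted by the previous one, is what allows deep learning systems to detect regularities invisible to the human eye.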
Dual use potential of AI in medicine
The idea that AI has dual use potential is not a new one. Militaries across the globe, with China perceived to be the front runner, have raced to modify AI technology to work as a war tool (Carrozza et al., 2022). Bleakly, this is now as relevant as ever in the context of the Ukraine war, where both sides are using AI technology for greater efficiency and accuracy, showing us that weapons such as specialized drones are becoming more and more “intelligent” (Fontes and Kamminga, 2023).
AI in general, as well as the potential risks that come with it, has been the subject of public debate for some time. The population appears to have mixed feelings about the application of AI in day-to-day life. In a study with 122 participants from Germany, AI was judged likely to perform unpleasant tasks for people and to promote innovation, both of which were seen as positive. However, AI was also judged likely to be hackable and to be “influenced by the elite”—both perceived as negative. Equally negatively perceived, but judged to be unlikely tasks of AI, were moral decision-making and the control of dying, both of which fall into the medical field (Brauner et al., 2023).
Regarding the issue of misuse, there have already been examples of medical AI being identified as having dual use potential. One such example was a US pharmaceutical company developing an AI-based virtual drug discovery program aimed at developing better drugs faster. The company was invited to an international security conference in September 2021 to give a presentation on how their research could be misused. The result of their thought experiment was that their software could quite easily be modified to create chemical and biological weapons, including previously unknown agents (Evans, 2022). Several chemical weapons experts described this as a possibility that had not been considered before, and the researchers received various requests for the exact molecules they had generated, all of which they declined (Urbina et al., 2022). The researchers themselves described it as naïve not to have thought of the possibility of misuse before (Evans, 2022). The report on the workshop described a “worrying lack of awareness about these issues in the communities that are pursuing these technologies, and little oversight, despite the rapidly growing number of companies that are active in AI” (The Swiss Federal Institute for NBC-Protection, 2019). The World Health Organization had also flagged this potential threat in a report in 2021, identifying it as an issue likely to occur 5–10 years in the future (WHO, 2021).
Another frequently cited concern regarding AI in medicine is the immense amount of patient data that needs to be collected and the associated worry about breaches of confidentiality. Medical data is especially vulnerable, as it is particularly in demand on the black market. In fact, more than three out of four (76.59%) data breaches recorded in the Privacy Rights Clearinghouse database from 2015 until 2019 were in the medical field (Seh et al., 2020). The reason for this especially high demand compared to other sectors such as finance or government is the large amount of personal information often stored in medical records, which can easily be used for identity theft and blackmail. In the current war in Ukraine, civilian hacker groups on both the Ukrainian and the Russian side have been targeting hospitals and pharmacies (Tidy, 2023). “KillNet,” a pro-Russian hacktivist group formed after the invasion of Ukraine, has been reported to have targeted various healthcare organizations in pro-Ukrainian countries and has in the past threatened to sell health data of US citizens and to shut down life-saving ventilation systems in British hospitals (Health Sector Cybersecurity Coordination Center, 2023). While neither threat materialized, both show that the idea of and the willingness to target healthcare data and hospital systems in the context of war do exist.
There are some core aspects of AI that make it vulnerable to misuse. One is that AI does not—yet—have what we would consider a “conscience.” AI does not consider things right or wrong unless instructed to do so by an algorithm provided by someone else. AI is also unquestioning of the information it is fed. For instance, a clinical decision-making tool could quite easily be biased with respect to age or gender, depending on the data it is provided with. This is already an issue, as medical AI has often been found to perform worse for minority groups due to the smaller amount of data it can draw from (Kaushal et al., 2020). While these concerns apply to any area in which AI is used, the medical field indisputably comes with its own unique challenges. The constant contact with drugs and pathogens makes potential misuse especially serious. Another important aspect to bear in mind is the vulnerability that patients invariably bring with them: the field of medicine deals with some of the most intimate aspects of people's lives, and this must be considered when debating the importance of avoiding misuse. Additionally, decisions made in medicine typically have far-reaching consequences and can be life-altering or even lethal when made without care. Combining the two fields of medicine and AI, both of which are vulnerable to dual use, makes the introduction of stronger regulations and safeguards desirable.
Current research and dual use concerns in neurotechnology
A special field of medical research that has provided us with enormous amounts of knowledge over the past years is neuroscience. As we gradually begin to understand more about how the brain and nervous system work, we are also starting to be able to influence their workings in various ways. One example is the field of neurotechnology, defined as “the assembly of methods and instruments that enable a direct connection of technical components with the nervous system” (Müller and Rotter, 2017). This rather broad field has expanded hugely in the past decades, enabling us to accomplish things we never dreamed of before. We are, for example, able to create prosthetic limbs that can be controlled by their wearers via neural implants, or to improve bladder control via spinal cord stimulation (Prochazka et al., 2001).
There is significant potential for this area of research to transform our lives immensely for the better, with the possibility of treatment for formerly severely debilitating conditions on the horizon. However, this research, once again, can be applied to areas other than those initially intended, and it is an area in which militaries have taken a particular interest. The Defense Advanced Research Projects Agency (DARPA) is an agency of the US Department of Defense responsible for funding and developing research intended for use by the US military (Gallo, 2021). In 2018, DARPA awarded funding to six different research teams to develop next-generation nonsurgical neurotechnology (DARPA, 2019). This research is partially aimed at improving service members' and veterans' quality of life, for example by potentially curing neuropsychiatric illnesses or restoring the sense of touch (DARPA, 2019).
On the other hand, the same sort of technology can actively be used in battle. A brain-computer interface (BCI) is a technology that enables a computer to essentially understand a human's intention and act accordingly (Jeong et al., 2020). With the help of BCI, it may soon be possible to enable swarm control of unmanned aerial vehicles such as drones (DARPA, 2019); in fact, prototypes already exist (Jeong et al., 2020). The same sort of technology is actively being researched by other governments, such as Germany, where the “Agentur für Innovation in der Cybersicherheit” (the “Cyberagentur”), the government's agency for innovation in cybersecurity, is researching BCI in the hope of not only treating speech impediments but also developing technologies for controlling drones without the need for a remote control (Vogt, 2021).
Interestingly, the public seems to be rather aware of, and uneasy about, the dual use of neurotechnology. In a study published in PLOS One, researchers asked the German general public for their opinion on the moral acceptability of neurotechnology used for treatment versus enhancement purposes (Sattler and Pietralla, 2022). Among their findings was that the public substantially preferred the use of neurotechnology to restore lost functions over its use to enhance human abilities (Sattler and Pietralla, 2022).
Neuroenhancement and its (dual) use in the military
Neuroenhancement refers to the improvement of mental capacities and abilities (Kipke et al., 2010). It can be considered part of neurotechnology (Cornejo-Plaza and Saracini, 2023) but is worth discussing by itself. The idea of enhancing medically healthy individuals in order to augment certain abilities has been explored extensively in the form of pharmacological neuroenhancement (pNE; Daubner et al., 2021), neuromodulation, and neurofeedback (Brunyé et al., 2022). Importantly, neuroenhancement refers to interventions that are neither medically indicated nor a sensible secondary prevention (Daubner et al., 2021). The targets of neuroenhancement are manifold; they include the increase of attention span and concentration and the improvement of mood and social skills (ibid.).
Since neuroenhancement uses medical knowledge for non-medical purposes, that is, for different purposes than originally intended, it is arguably a dual use technology in and of itself. The moral acceptability of neuroenhancement has been broadly discussed, due to various concerns such as how safe the methods of neuroenhancement are (Chatterjee, 2013), whether healthy people should be taking pharmaceuticals and whether we should be tampering with and enhancing our brains in the first place (Forlini and Hall, 2016).
An important part of the debate, however, is the acceptability of neuroenhancement in the military. The super-soldier is no longer simply a creation of science fiction but is becoming a more and more realistic option (Sattler and Jacobs, 2022). Enhancing concentration, reducing fatigue, and boosting morale in austere environments are, of course, things the military has an interest in. While the hope is that better—enhanced—soldiers would make for a cleaner war with less human error (Beard et al., 2016), this raises various ethical questions.
Within NATO, military research has—as far as we know—primarily been focused on neurofeedback as well as neuromodulation in the form of, amongst others, transcranial magnetic stimulation (TMS) and transcranial electrical stimulation (TES) (Brunyé et al., 2022). One of the leading principles of medical ethics is the principle of autonomy. Particularly in a military setting, where individual autonomy has been known to sometimes be of lower priority than in the civilian sector, the use of neuroenhancement is ethically highly debatable (Visser, 2003). This becomes even more relevant when neuroenhancement is used not on a military's own soldiers, such as the use of amphetamines by the US military in the Korean war (Lin et al., 2013), but on enemy combatants, such as the proposed use of oxytocin in interrogation settings in order to increase trust (Sattler and Jacobs, 2022).
Another relevant issue with the use of neuroenhancement in combat is the question of legal consequences and accountability: who is responsible when “enhanced soldiers” end up making mistakes or even committing war crimes (Beard et al., 2016)? Furthermore, the logical consequence of using neuroenhancement in a military context is a modification of the nature of armed conflict, further strengthening the determinative role of technological superiority over factors such as tactics, popular opinion, and soldier morale (Harper, 2023), and thereby weakening the forces that can act as a check on an unjust war.
To evaluate the acceptability of the use of enhancement methods for military purposes, a Hybrid Framework was first proposed by Lin et al. (2013). This framework contains nine principles that should be adhered to, including necessity, the soldier's consent, and the preservation of the soldier's dignity (Lin et al., 2013). In a recent study, British officers were surveyed regarding the extent to which they agreed with these principles (Sattler and Jacobs, 2022). Generally, the results show a comfortingly high approval rate by the officers regarding the principles; the necessity of the soldier's consent in particular was strongly agreed with (Sattler and Jacobs, 2022).
As research into the use of neuroenhancement for military purposes grows, it is becoming necessary for governments to formulate guidelines. This has already been done by Canada, which published an extensive report—“Identifying Ethical Issues of Human Enhancement Technologies in the Military”—in 2017. Within this report, an assessment framework—the Military Ethics Assessment Framework (MEAF)—was developed in order to identify which enhancement technologies might present ethical issues (Girling et al., 2017). The MEAF is made up of twelve categories, including Compliance with National Laws and Codes of Conduct, Compliance with Jus ad Bellum Principles, and Health and Safety. Of the 34 technologies put through the MEAF in the report, 33 showed potential issues regarding Reliability and Trust, 30 regarding Equality, 26 regarding Health and Safety, 20 regarding Privacy, Confidentiality, and Security, 19 regarding Consent, and 17 regarding Accountability and Liability (Girling et al., 2017), showing quite clearly that more work needs to be put into the ethical implementation of neuroenhancement in the military. So where does all this leave us?
Discussion
How do we minimize the misuse of our research? And does the (mis-)use of medical research in war matter? Regarding the use of medical research in the context of war, it is important to stress that it does not, of course, immediately follow that every military use of a technology is an immoral one. However, if research conducted for non-military purposes is weaponized, this by definition constitutes a dual use and must as such be carefully assessed as to its potential consequences. This article is not a call to stop all military research, but it is a call for a critical view of the military use of medical research. Keeping in mind the truly destructive technologies that the misuse of the above-mentioned medical research could produce, it is necessary to differentiate between technology that will make for wars with fewer human casualties and technology that amounts to the introduction of yet more weapons of mass destruction. The necessity of research into new destructive technology simply out of fear that someone else will otherwise do it has become a feeble argument with the existence of nuclear weapons. The world has enough deterrents; the creation of more must be declared ethically unacceptable.
As to the discussed areas of research, there is enormous potential for current findings to improve healthcare. There is no question that AI and findings in neuroscience will be used in the medical field; the advantages are too obvious. It is therefore a given that research in these fields will and must continue. However, considering the very real potential for misuse that this research brings with it, as demonstrated by the examples above, the question arises of what we can do to ensure that our research has nothing but a positive impact.
While the preservation of freedom of research is without a doubt an important goal, we must also consider that placing some constraints on certain knowledge is not necessarily censorship as much as it is an opportunity for security deliberation and well-founded decisions to prevent the dissemination of harmful information (Kuhlau et al., 2013). There are various solutions that have been proposed for the dilemma that DURC poses. One possible starting point is to enhance awareness of the risk amongst researchers. The hope is that if researchers are well trained in considering the possibility of misuse of the findings of their research, then they might design their research in a way that minimizes the risk of misuse without any outside force having to infringe upon their freedom of research.
There is, of course, also the question of the extent to which researchers are even responsible for the misuse of their research. In a G7 statement regarding the security and misuse of research, the parties responsible for ensuring the minimization of risk were identified as governments, research funders, research institutions, and individual researchers (SIGRE, 2023). While governments provide a legal framework, we should be able to expect researchers to take on the moral responsibility of considering the possible consequences of their research to the best of their abilities and of doing as much as they can to avoid misuse. In a survey of the general population in the US and Australia in 2020, 77% of respondents were completely unaware of the risk of DURC (MacIntyre et al., 2020). Another study, specifically analyzing postgraduate students of the life sciences in Pakistan, reported that a lower but nonetheless very substantial 58% had never heard of the term DURC, while 18.6% had heard of the term but were unsure of its meaning (Sarwar et al., 2019). These findings show the clear necessity of further work in raising awareness. Ideally, in order to minimize research with obvious dual use potential, the research community needs to be trained to consider DURC right from the start, i.e. at university. The security implications of our research cannot be an afterthought.
Considering the possibility of misuse of catastrophic proportions, raising awareness—as important a step as it is—will likely not be sufficient to ensure the beneficence of medical research. It may therefore be time to consider the introduction of an intergovernmental agency to monitor research with dual use potential and to establish safeguards ensuring the conscientious conduct of research, as has been done for the responsible use of nuclear energy with the International Atomic Energy Agency (IAEA). As the dual use of technologies such as AI and neurotechnology could have consequences that affect everyone, it is a reasonable conclusion that preventing it should be an international effort.
On a smaller scale, this has already been established by individual countries in various ways. Germany, for example, has introduced Commissions for the Ethics of Security-Relevant Research (“KEFs”, i.e. “Kommissionen zur Ethik sicherheitsrelevanter Forschung”). Most research sites have their own KEF or share one with another site. The commissions have a consulting function and are responsible for raising awareness and for monitoring research at their institution (Leopoldina and the German Research Foundation, 2016). Israel, on the other hand, has implemented a more tightly regulated system: a government institution approves institutions for research with dual use potential, and these institutions then implement further security measures, such as a committee that oversees the research and approves individual scientists. The committee can only approve research that meets certain security criteria, and without its approval, research cannot go ahead (Lev, 2019).
To make safeguards against DURC effective, it is necessary to combine these individual national efforts into an international approach. Possible functions of an intergovernmental agency might include the mediation and design of international agreements outlining commitments from governments to put in place measures to prevent dual use.
To minimize the risks of DURC, the problem needs to be approached from two directions, i.e. it needs to be addressed both before and after the research in question exists. Raising awareness and promoting conscientious research are important measures for minimizing risk before research of concern is created. As for already existing research with dual use potential, the first step must be to find an effective way of monitoring emerging research and to develop an efficient system for identifying research of concern and countering the risk.
Due to the immense amount of research continuously published worldwide, a possible approach could be the use of AI for the Assessment and Identification of DURC (“AI-AI”). For reference, ChatGPT was trained using 45 TB of data, which is most of the readable data on the internet (Lechner, 2023). This ability of AI to process and analyze vast amounts of data could be harnessed to create an efficient tool for assessing the potential threat in new research. If programmed well, such an AI could continuously monitor newly published research for dual use potential and flag it for review. These flagged publications would then be reviewed by the agency. If a publication is identified as research bearing substantial risk, individual solutions need to be found depending on the type of research.
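To make the proposal concrete, the sketch below shows, in deliberately simplified form, the kind of screening step such a monitoring pipeline might perform. A real system would use trained language models rather than weighted keywords; all terms, weights, thresholds, and example abstracts here are hypothetical stand-ins.

```python
# A deliberately simplified sketch of an automated DURC screening step.
# In practice a trained classifier would replace this toy keyword score;
# every term, weight, threshold, and abstract below is hypothetical.
from dataclasses import dataclass

RISK_TERMS = {"nerve agent": 3, "toxin synthesis": 3, "aerosolized": 2,
              "swarm control": 2, "brain-computer interface": 1}

@dataclass
class Publication:
    title: str
    abstract: str

def risk_score(pub: Publication) -> int:
    """Sum the weights of risk-associated terms found in the abstract."""
    text = pub.abstract.lower()
    return sum(weight for term, weight in RISK_TERMS.items() if term in text)

def screen(pubs: list[Publication], threshold: int = 3) -> list[Publication]:
    """Return publications whose score meets the threshold, for human review."""
    return [p for p in pubs if risk_score(p) >= threshold]

papers = [
    Publication("Generative models for drug discovery",
                "We optimize candidate molecules and discuss toxin synthesis risks."),
    Publication("Crop yield forecasting",
                "Satellite imagery and regression models."),
]
for p in screen(papers):
    print("Flagged for human review:", p.title)
```

Crucially, anything such an automated screen flags would, as proposed above, go to human reviewers at the agency rather than triggering automatic consequences.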
There are two main consequences that flagged research ought to have. One is a thorough review of the DURC assessment and prevention measures in the country and at the research site in question. The other is to incentivize, where possible, research countering the dual use potential of the research in question. To promote the acceptance of a regulatory institution and to take into account both the ethical value of freedom of research and the prevention of maleficence, it is important that the regulation of dual use research strike a careful balance between incentivizing responsibility and enforcing safeguards.
Conclusion
Regarding specifically the use of medical research in the context of war, the combination of the ethically highly sensitive fields of medicine and war requires an enormous amount of careful moral deliberation. Much of the research conducted by militaries has been admirable and has provided us with incredible achievements, such as prosthetics with the ability to feel touch (DARPA, 2019). This type of research must of course continue. But it should be a given that research in the field of medicine, a field of inherent vulnerability, should not be done with the aim of creating weapons. We need to differentiate between research that is done for the increased welfare of the individual soldier and research that is conducted with the sole aim of mission success (Sattler and Jacobs, 2022).
In regard to DURC in general, one of the simplest steps that must be taken is addressing the lack of awareness of DURC within the medical research community. We cannot rely on the responsible conduct of researchers if we have never taught them to consider the issue. This is especially true as the research debated is by definition research intended for good—it is understandable and natural that these researchers would not have misuse at the forefront of their minds. However, as demonstrated in the examples above, the risk that DURC poses has reached a level that calls for a much larger and international effort. The UN as well as the WHO are currently in the process of developing guidelines.
At present, the pace of research and development in the field of AI, as well as in the neurosciences, has far overtaken the development of guidelines and the deliberation of their safety, and this must be addressed. There must be as large an effort to make new technologies safe as there is to make them better, particularly in the medical field.
Finally, it is important to note that debating the risks of DURC in current research is not fearmongering, nor is it meant to demonize AI or the neurosciences; it is a necessary precaution so that we can maximize the benefits these new technologies can provide, for the good of all mankind.
Acknowledgements
The present work was performed in partial fulfillment of the requirements for obtaining the MD degree (‘Dr. med.’) at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany, under supervision of Prof. Dr. med. Andreas Frewer, M.A. We would like to thank the team of the Professorship for Ethics in Medicine (FAU) for the fruitful scientific exchange.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
All articles in Research Ethics are published as open access. There are no submission charges and no Article Processing Charges as these are fully funded by institutions through Knowledge Unlatched, resulting in no direct charge to authors.
Ethical approval
The authors declare that a research ethics approval was not required for this study.
