Abstract
The possibility that today’s drones could become tomorrow’s killer robots has attracted the attention of people around the world. Scientists and business leaders, from Stephen Hawking to Elon Musk, recently signed a letter urging the world to ban autonomous weapons. Part of the argument against these systems is that they violate the public conscience provision of the Martens Clause due to public opposition, making them illegal under international law. What, however, does the US public think of these systems? Existing research suggests widespread US public opposition, but it has measured support for autonomous weapons in a vacuum. This paper uses two survey experiments to test the conditions in which public opposition rises and falls. The results demonstrate that public opposition to autonomous weapons is contextual. Fear of other countries or non-state actors developing these weapons makes the public significantly more supportive of developing them. The public also becomes much more willing to actually use autonomous weapons when their use would protect US forces. Beyond contributing to ongoing academic debates about casualty aversion, the microfoundations of foreign policy, and weapon systems, these results suggest the need for modesty when making claims about how the public views new, unknown technologies such as autonomous weapons.
Introduction
One of the most recognizable symbols of the US military over the last decade is the unmanned aerial vehicle (UAV), or drone. Platforms such as the MQ-1 Predator and MQ-9 Reaper, flown by US military personnel, have enhanced the ability of the US military to target terrorists and insurgents thousands of miles away. From concerns that drones are lowering the barriers to using military force and thus risking dangerous adventurism, to concerns about civilian casualties from drone strikes, to questions of whether American drone strikes occur in ways that are consistent with international law, scholars have raised questions about the use of drones (Boyle, 2015; Zenko and Kreps, 2014a, 2014b).
Yet, on the horizon is another military technology that could prove even more controversial than drones – autonomous weapon systems (AWS). An AWS is not remotely piloted, as drones are. According to the US Department of Defense’s directive on autonomy, an AWS is defined as “[a] weapon system that, once activated, can select and engage targets without further intervention by a human operator” (Department of Defense, 2012: 13). AWS, in the extreme, raise the specter not just of drones piloted from thousands of miles away, but also of the robotic soldiers and systems featured in movies such as The Terminator and The Matrix. A recent statement signed by Elon Musk, Stephen Hawking, and thousands of others points specifically to AWS as a danger to humanity, suggesting the need for a ban (Future of Life Institute, 2015). While world-destroying robots are likely far from the horizon, despite some concerns (Garcia, 2014, 2015), advances in autonomous systems in general, such as Google Cars, could make AWS increasingly plausible over the next few decades (Roff, 2014; Singer, 2009).
Some simple AWS already exist. For example, the Phalanx air defense system used by the US Navy and over 20 other countries to protect ships from incoming missiles, boats, and airplanes has an automatic mode (Scharre and Horowitz, 2015). Fear of offensive and more sophisticated versions of these systems has led groups such as Human Rights Watch (2012) to launch an effort to ban these systems, which they call “Killer Robots” (Asaro, 2012; Sharkey, 2012).
The United Nations Convention on Certain Conventional Weapons meetings in spring 2014 and 2015 included extended discussions of the potential practical, political, and moral issues surrounding AWS. One piece of the debate involves public opinion. Some argue that there is widespread public opposition to autonomous weapons in the US because they eliminate human control from the use of force, meaning autonomous weapons would violate the public conscience provision of the Martens Clause of the Hague Convention and could be banned for violating international law (Ekelhof and Struyk, 2014).
Given potential public opposition to giving up human control over the use of force, this paper explores potential tradeoffs that might influence public opposition to reduced human control. Any finding that context could mitigate opposition to autonomous weapons is important, due to the strength of the existing claim that autonomous weapons offend the public conscience and thus the Martens Clause.
Thus, this paper builds on research by Press et al. (2013), who find that normative opposition to nuclear weapons is mitigated by military utility; as the military utility of nuclear weapons increases in a given operation, public opposition to using nuclear weapons weakens. Similarly, this paper theorizes that contextual factors making autonomous weapons more necessary for US forces should mitigate opposition. Two survey experiments described below vary the military utility of autonomous weapons both explicitly and implicitly, as well as whether autonomous weapons will be used to protect US forces or launch attacks. The results show that, rather than being widespread, public opposition to AWS is contextual, as with nuclear weapons. Fear of other countries or non-state actors developing these weapons makes the public significantly more supportive of developing them, as does a perception that they are necessary to protect US troops from attacks. Beyond contributing to ongoing academic debates about public opinion, the microfoundations of foreign policy, and weapon systems, these results illustrate the need for modesty when making claims about how the public views new, unknown technologies such as autonomous weapons.
What is an autonomous weapons system?
Robotics are “widely present in the modern battlefield” according to roboticist Ronald Arkin (2013: 1). Growth in robotics and autonomous systems is also a major trend in the civilian sector. However, more autonomy does not necessarily mean autonomous weapons. An AWS is a weapons system that has the ability to target and fire on its own, unlike today’s drones. A variety of groups, from the Department of Defense (2012), to Human Rights Watch (2012), to the United Nations Special Rapporteur Christof Heyns (2013), define an AWS in similar ways (see online Appendix A; Scharre and Horowitz, 2015).
More prosaic than the robotic warriors of the movies, an example of a current autonomous weapon is arguably Israel’s Harpy. It is a cruise missile that, once launched, can loiter for hours over a target area. When it detects a particular type of radar system, it accelerates towards the radar and explodes. The Harpy selects and engages the target on its own (Heyns, 2013: 9).
Why study public opinion and autonomous weapons?
Why should scholars care about public opinion concerning autonomous weapons since bureaucrats and elites make decisions about the acquisition and deployment of weapon systems? Generally, public opinion is a microfoundation that can influence elite preferences, even if the influence is indirect (Tomz and Weeks, 2013). Moreover, this study fits with new work on public opinion and weapon systems that explores how public preferences vary based on military considerations (Press et al., 2013). There are also at least three specific reasons to study what the public thinks about AWS.
First, the Martens Clause, as revised for the preamble to the 1907 Hague Convention IV, states: “Until a more complete code of the laws of war has been issued, the High Contracting Parties deem it expedient to declare that, in cases not included in the Regulations adopted by them, the inhabitants and the belligerents remain under the protection and the rule of the principles of the law of nations, as they result from the usages established among civilized peoples, from the laws of humanity, and the dictates of the public conscience.”
Some non-governmental organization (NGO) groups argue that the Martens Clause, and by extension, international law, therefore “prohibits weapons that run counter to the ‘dictates of the public conscience’” (Human Rights Watch, 2012: 24, 2014: 16–17; Reaching Critical Will, 2013). These groups argue that a key component of their Martens Clause judgment is public opposition to autonomous weapons. Public opinion therefore has significant international legal and policy consequences. There is a debate over the applicability of the Martens Clause more generally, though that debate is beyond the scope of this paper (e.g. Evans, 2012; Schmitt, 2013).
Second, given NGO interest in the topic for the purposes of the policy debate, the existing research becomes important. The most prominent survey research on autonomous weapons to date, by Carpenter (2013, 2014), shows that 55% of the American public opposes developing autonomous weapons and 53% supports banning killer robots. The reasoning is that the public does not trust robots and does not want to trade off human control over the use of weapons. If this is true, it means that critics of AWS are correct to highlight moral concern among the US population (McNeal, 2013). Research by the Open Roboethics Initiative (2015) reaches similar conclusions, though with a more global sample.
Third, the Campaign to Stop Killer Robots, an international coalition of NGOs formed in 2012, is already shaping industry decisions when it comes to investments, making understanding autonomous weapons in general a relevant topic for investigation. In 2014, for example, the Canadian company Clearpath Robotics (2014) announced its support for a “ban” on killer robots, as a way of spurring further discussion.
Moreover, autonomous weapons are an area where public knowledge is low and there are not clear elite preferences to cue the public. With the public lacking knowledge and having non-attitudes (Zaller, 1992), polling on autonomous weapons could reveal public preferences more cleanly than on other, more cluttered, issues.
Is there overwhelming public opposition to autonomous weapons?
Given the potential relevance of public opinion for the ongoing policy debate about autonomous weapons (for another application, see Walsh, 2015), this paper tests whether contextual factors influence public support for the development of autonomous weapons. If public opposition to AWS is strong due to the lack of human control in a way that violates the Martens Clause, then even in the face of policy conditions that highlight more favorable circumstances for developing AWS, we should see widespread public opposition. While there are many potential contextual factors one could test, this paper tests three arguments derived from existing research.
Longstanding political science research on public opinion and war suggests that a key contextual factor influencing public opinion is the risk to US military forces. Concern with casualties can increase public opposition to US military action (Berinsky and Druckman, 2007; Mueller, 1973), because the public believes protecting US troops is an important value. While the public may fear a loss of human control over weapon systems, that opposition may decline when faced with the prospect of US military casualties, because the public values the lives of US soldiers above other principles. Thus, for members of the public concerned about US military casualties, AWS should seem relatively more attractive if autonomous weapons are designed to protect US forces. 1
Hypothesis 1: Support for autonomous weapons should increase if they protect US forces
Press et al. (2013) show that public opposition to using nuclear weapons weakens when the public is confronted with scenarios in which using nuclear weapons is necessary to preserve US security. This logic of consequences, they find, drives public opinion, because the public values US security over moral qualms about nuclear weapons. Does this tradeoff exist for autonomous weapons? If autonomous weapons are like nuclear weapons and other systems, as their theoretical military utility increases, public opposition to autonomous weapons should decline because the public values effective weapons – even when it has qualms about those weapons. This would also be consistent with a variant of Gelpi, Feaver and Reifler’s (2009) argument that the public is less concerned with military casualties than the prospect of victory in wars.
Hypothesis 2: Support for developing autonomous weapons should increase as their military operational necessity increases
Another aspect of the logic of consequences stems from the relationship between how the US behaves and what other countries do. Press et al. (2013: 191–193) show that opposition to nuclear weapons stems in part from fear that their use could set a precedent that influences others. Respondents opposed to nuclear weapons tended to oppose them less because of absolutist moral concerns about nuclear weapons and more because they worried that US use of nuclear weapons would set a precedent that could make other countries more likely to use nuclear weapons. If true in the autonomous weapons case, the development of autonomous weapons by other countries should reduce opposition to US development, since it would suggest that US development is not precedent-setting.
Hypothesis 3: Support for developing autonomous weapons should increase when other countries develop autonomous weapons
Research design
Prior research on public opinion and AWS asked respondents about their support for or opposition to the development and/or use of AWS in a vacuum (for example, see Carpenter, 2013). While this establishes a baseline for understanding public opinion and autonomous weapons, asking about support for autonomous weapons directly makes it hard to distinguish opposition to weapons in general from opposition to autonomous weapons in particular. Additionally, since true AWS do not really exist right now, attitudes are potentially driven by the only exposure most people have to autonomous weapons: the movies and television. Asked in a vacuum about autonomous weapons, people might imagine The Terminator, The Matrix, or other portrayals in the media. In contrast, using particular scenarios and contexts for usage and/or development can mitigate the resulting bias. This matters because another potential alternative, asking about multiple weapon systems or scenarios in the context of the same survey condition, could cause question spillover that biases the results (Transue et al., 2009).
Thus, asking about the development of AWS in the abstract biases against the hypotheses, since it sets people up to imagine negative media portrayals. In both experiments below, the dependent variable is the question: “[W]ould you approve or disapprove of the United States developing autonomous weapon systems?” The full text is available in the online appendix (pp. 8–12). Respondents answered on a 5-point scale where 1 indicates strong support and 5 indicates strong opposition.
To test the hypotheses, we turn to survey experiments conducted in summer and fall 2015 using Amazon’s Mechanical Turk (MTurk) service, a platform for recruiting respondents increasingly used in political science research (for example, see Huber et al., 2012). While MTurk samples are more liberal and younger than a representative sample of the US population, treatment effects are generally in line with those from nationally representative samples, and bias is only pronounced when a younger, more liberal sample would skew results in particular ways (Berinsky et al., 2012). Carpenter’s (2013) assessment of her nationally representative survey results also suggests that those sampling biases are unlikely to influence the results in this case, since opposition to AWS crosses age, gender, and party lines.
These inherent limits to MTurk do still limit the generalizability of any results. However, given the strength of the existing claim regarding public opposition and the fact that the bar to claiming that a weapon system violates the “public conscience” provision of the Martens Clause should necessarily be high, MTurk samples can still shed useful initial light on what conditions impact public opinion and autonomous weapons.
Results
Experiment #1
The first experiment has a 5x1 design. Despite the limitations noted above, it focuses on support for the US development of autonomous weapons (specific language referenced above) to track part of Carpenter’s (2013) research design and create comparable results. 2 The full text of the survey is available in the online appendix. Fielded in November 2015 on MTurk, the experiment had 1043 respondents (full questions and demographics in the online appendix). As with prior MTurk studies, the panel was skewed liberal (45% Democrat, 20% GOP, 35% Independent) and young. Respondents were placed in one of five conditions. In the baseline condition, respondents were asked in a vacuum about their support for or opposition to the development of autonomous weapons, following Carpenter (2013). The other respondents received the same question, with additional context about autonomous weapons varying across conditions:
To protect US troops from attack and more effective than alternatives;
To attack adversary targets and more effective than alternatives;
To protect US troops from attack and not more effective than alternatives;
To attack adversary targets and not more effective than alternatives.
Thus, this experiment varies both the extent to which respondents are primed to think US troops are at risk, and the relative military utility of autonomous weapons, providing an initial test of hypotheses 1 and 2. There are, of course, a multitude of potential variations on these questions, but these provide a baseline from which future research can depart.
Figure 1 below highlights the percentage of respondents who support, neither support nor oppose, or oppose the development of autonomous weapons across experimental conditions. For presentation purposes, strongly support/oppose and support/oppose are condensed.

Figure 1. Support for the development of autonomous weapons across experimental conditions.
The results show that the context surrounding US troops and military utility significantly mitigates opposition to autonomous weapons. Sixty-one percent of respondents support developing AWS to protect US forces when they will be more effective than alternatives, compared to 38% in the baseline condition. Interestingly, even when autonomous weapons are no better than present weapons, 48% support their development to protect US forces. This suggests that the public is willing to make tradeoffs and overcome its opposition to a weapon system when US troops are on the line, supporting hypothesis 1. Given that the only previous research highlights significant opposition to autonomous weapons, this finding is substantively interesting. Moreover, while the effects are not as strong, support for developing autonomous weapons increases from 38% in the baseline and 41% in the condition where AWS are not more effective to 50% when AWS would be used to attack adversaries and are more effective than alternatives. That only 50% of the public supports developing autonomous weapons for attack purposes even in this favorable circumstance suggests hesitation does exist concerning their development.
T-tests demonstrate that these relationships are significant, and ordinary least squares (OLS) regression analysis of support for developing AWS across the conditions further supports the findings. We regressed approval on a series of pre-treatment demographic categories (military service, partisanship, age, and gender). The regression results, available in the online appendix, show that, even controlling for potential covariates, public opposition to AWS declines when AWS are either designed to protect US troops or are more effective than alternatives.
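The t-tests reported here compare mean responses on the 5-point scale across experimental conditions. As a rough sketch only – the responses below are simulated stand-ins, not the survey data, and `welch_t` is an illustrative helper rather than the paper's replication code:

```python
import math
import random
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for the difference in means of two
    independent samples (unequal variances assumed)."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Hypothetical 5-point responses (1 = strongly support, 5 = strongly oppose);
# the "protect US troops" distribution is shifted toward support for illustration.
random.seed(0)
baseline = [random.choice([1, 2, 3, 3, 4, 4, 5, 5]) for _ in range(200)]
protect = [random.choice([1, 1, 2, 2, 3, 3, 4, 5]) for _ in range(200)]

t_stat = welch_t(baseline, protect)
# A positive t_stat means average opposition is higher in the baseline condition.
```

The regression analysis works analogously, with treatment indicators and the pre-treatment demographic covariates entered as regressors.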
It is also possible that the lack of knowledge about AWS drives these responses. Therefore, we asked respondents two follow-up questions – whether the US deploys autonomous weapons now (the correct answer is “no”) 3 and whether the drones the US uses today are autonomous weapons (the correct answer is “no”). Re-running the regression on just the most informed participants, i.e. those that got both questions right (520 of 1043, or almost 50%), produces similar results.
Figure 2 shows the predicted level of support for developing AWS pooled across the attack/protect and more/not more effective treatment conditions. It demonstrates that respondents are more willing to make the tradeoff of reduced human control either when AWS are designed to protect US forces or when they are more effective than other options, providing more support to hypotheses 1 and 2. Even when controlling for demographic factors and other beliefs, shifting from the Not More Effective/Attack condition to the More Effective/Protect condition increases support for developing AWS by 15%.

Figure 2. The impact of respondent knowledge on support for developing AWS.
These results hold when limiting to the subset of informed respondents, though more informed respondents on average were less supportive of AWS, especially when AWS were described as not more effective than existing weapons (regression results available in the online appendix). This suggests that more informed participants are relatively more sensitive to the effectiveness of AWS and relatively less sensitive to whether AWS are designed to attack adversaries or protect US forces.
Moreover, consistent with research using nationally representative samples, the GOP variable in the regression model itself is negative and significant, indicating that Republican respondents were, as expected, more supportive of developing AWS. The online appendix shows that the results are also consistent with:
Controlling for respondent hawkishness;
Controlling for respondent usage of robotics;
Controlling for respondent support of drone strikes;
Re-weighting the observations to account for age and partisanship bias in MTurk.
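The re-weighting check in the final item can be sketched as simple post-stratification: each respondent is weighted by the population share of their demographic cell divided by that cell's share of the sample. The cells and shares below are hypothetical illustrations, not the paper's actual weighting targets (those are in the online appendix):

```python
def poststrat_weights(sample_counts, population_shares):
    """Post-stratification weight per cell: population share / sample share."""
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (count / n)
            for cell, count in sample_counts.items()}

# Hypothetical (age bracket, party) cells for a 1043-person MTurk sample
# that over-represents young Democrats relative to the US population.
sample = {("18-34", "Dem"): 450, ("18-34", "Rep"): 120,
          ("35+", "Dem"): 220, ("35+", "Rep"): 253}
population = {("18-34", "Dem"): 0.15, ("18-34", "Rep"): 0.12,
              ("35+", "Dem"): 0.33, ("35+", "Rep"): 0.40}

weights = poststrat_weights(sample, population)
# Over-represented cells receive weights below 1; under-represented cells above 1.
```

Treatment effects are then re-estimated with each response multiplied by its respondent's weight.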
One interesting question for future research is the logic of respondents who oppose AWS in the most favorable condition – where AWS are designed to protect US forces and are more effective than alternatives. Results in the online appendix show that those respondents, on average, were significantly more opposed to existing US drone strikes and understood that AWS would differ from drones.
Experiment #2
Fielded in August 2015, the second experiment is a 4x1 design and had 802 respondents on MTurk. The outcome quantity of interest was again the question of “would you approve or disapprove of the United States developing autonomous weapon systems?” measured on a 5-point scale where 1 indicates strong support and 5 indicates strong opposition. Each respondent was randomly assigned to one of four survey conditions. In the baseline condition, respondents were asked about their support for the US development of AWS in a vacuum, with no additional information. In the “No Military Necessity” condition, respondents were asked if they would support the US developing AWS “if developing them was not necessary to ensure the American military remains as strong as it is today.” Respondents answered on the same scale as in the first experiment, with the complete text available in the online appendix. In the “Military Necessity” condition, respondents were asked about their support for developing AWS if it was “necessary to ensure the American military remains as strong as it is today.” This is a more general version of the proposition in experiment 1, which directly manipulated the effectiveness of the weapon systems. The vagueness of the description does place a limit on the results, but it extends the tests of hypothesis 2 about the relationship between support for AWS and military utility.
In the “Foreign Development” condition, respondents were asked about their support for the US development of autonomous weapons “if other countries and/or violent non-state actors were developing them.” This tests hypothesis 3, the precedent-setting logic described by Press et al. (2013), because opposition should decline if US development is not precedent setting.
As hypothesized, Figure 3 below shows that both the “necessity” and, especially, “foreign development” conditions led to significant increases (verified through t-tests and OLS) in support of the US development of AWS relative to the baseline condition. This provides additional support for hypothesis 2 and clear support for hypothesis 3. Support for US development increases by 69% in relative terms (from 29% to 49%) as we shift from the baseline condition to the condition in which other countries or non-state actors are seeking AWS. While reticence to develop AWS still exists even in the most “supportive” condition, opposition drops to 38%, and some percentage of the population would likely oppose the development of any new weapon system.

Figure 3. Support for the development of autonomous weapons across necessity and international development conditions.
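Note that the 69% figure is a relative increase, not a change in percentage points; a minimal sketch of the arithmetic:

```python
# Support rises 20 percentage points, from 29% to 49% of respondents.
baseline_support = 0.29       # baseline condition
foreign_dev_support = 0.49    # "Foreign Development" condition

absolute_change = foreign_dev_support - baseline_support   # 0.20 points
relative_change = absolute_change / baseline_support       # ~0.69
print(f"{relative_change:.0%}")  # prints 69%
```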
The results are robust even when controlling for relevant demographic and political factors, including age, partisanship, gender, military service, hawkishness, views of robotics, robotic usage, and support for drone strikes (see the online appendix). The marginal effects show a 26% boost in support for developing AWS when moving from a condition where AWS are not necessary to a situation where they are necessary and others are developing them. The results are also robust when re-weighting the sample to account for age and partisanship biases in MTurk samples.
Thus, even given the limitations of MTurk studies, both experiments shed new and relevant light on public opinion and autonomous weapons. The results show that rather than being staunchly opposed, public attitudes are strongly influenced by the context surrounding autonomous weapons. When the lives of US troops are on the line, when AWS may be more effective than alternatives, or when others are developing AWS, opposition declines. This is consistent with political science research on public opinion, weapons, and war, though contrary to more extreme claims about public opposition to AWS.
Conclusion
Are AWS a positive, logical culmination of human efforts to reduce the danger of warfare, a negative danger that will make the use of force more likely, or, at the extreme, a catastrophic possibility likely to usher in the real-world era of the Terminator? The answer is unknown, in large part because AWS have not yet been extensively developed and deployed.
Understanding public opinion concerning autonomous weapons is a vital task for international relations research. It is both the subject of an ongoing NGO campaign, the Campaign to Stop Killer Robots, led by Jody Williams and other advocates of the Land Mine and Cluster Munitions bans, and exemplifies the relationship between technology, politics, and international dialogue in the 21st century.
Some NGOs have argued for a ban on autonomous weapons in part due to widespread public opposition. But the bar for claiming to speak for humanity should be high. The evidence presented above demonstrates that support for the development and use of autonomous weapons varies based on the scenario and context. Drivers include the potential to save the lives of US soldiers by substituting robots for people and fears of foreign development of autonomous weapons.
These findings contribute to growing academic discussions of public opinion surrounding weapon systems (e.g. Press et al., 2013), as well as casualty sensitivity and the microfoundations of policy making. More directly, from a policy perspective, these results suggest that it is too early to argue that AWS violate the public conscience provision of the Martens Clause because of public opposition. There may be other reasons why AWS violate the Martens Clause, and there may be persuasive arguments for and against the development of AWS as a whole. However, more research is necessary to map out how context influences public support for and opposition to autonomous weapons.
Footnotes
Declaration of conflicting interest
None declared.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Carnegie Corporation of New York Grant
The open access article processing charge (APC) for this article was waived due to a grant awarded to Research & Politics from Carnegie Corporation of New York under its ‘Bridging the Gap’ initiative.
