Abstract
This article tests the proposition that new weapons technology requires Christian ethics to dispense with the just war tradition (JWT) and argues for its development rather than dissolution. Those working in the JWT should be under no illusions, however, that new weapons technologies could (or do already) represent threats to the doing of justice in the theatre of war. These threats include weapons systems that deliver indiscriminate, disproportionate or otherwise unjust outcomes, or that are operated within (quasi-)legal frameworks marked by accountability gaps. The temptation to abrogate (L. abrogare—repeal, evade) responsibility to the machine is also a moral threat to the doing of justice in the theatre of war.
Introduction
Robots, machine learning and digital technologies more generally are posing challenges for the weapons industry, the military, policy-makers and governments. 1 Levels of automation in weapons systems (will) vary, 2 but weapons systems with artificial intelligence (AI) capability, operated autonomously to find and attack humans, have probably already been deployed. 3 In March 2021, a United Nations report from the panel of experts on Libya indicated that an AI drone that attacked fighters may have acted on its own. To cite directly from this report: ‘The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability, i.e. that, once fired, is able to guide itself to its target’. 4 This report does not say more about the capability or whether there were civilian casualties or injuries, but it appears that the machine selected targets and decided to kill.
Existing International Humanitarian Law (IHL) holds state or other military actors responsible for whatever is done with any weapon but does not contain any explicit prohibition against lethal autonomous weapons or AI-enabled weapons systems. Article 36 of the Protocol Additional to the Geneva Conventions of 12 August 1949 requires a High Contracting Party to determine whether the employment of any new weapon would, in some or all circumstances, be prohibited by any applicable rule of international law. But, of course, existing IHL was not developed or tested for weapons technology whereby weapons ‘after initial activation, select and apply force to targets without human intervention, in the sense that they are triggered by their environment based on a “target profile”, which serves as a generalised approximation of a type of target’. 5 Yet future military success is likely to depend on operational advantage achieved through human-machine interaction and unmanned weapons.
Thirty countries are calling for a ban on fully autonomous weapons and for the creation of a new ban treaty to establish the principle of meaningful human control over the use of force.6,7 This list is notable, however, for those who have declined to join. 8 The United States, the United Kingdom, Israel, Australia and Russia are opposed to a pre-emptive ban. China's position is less clear; it has called for a ban on the use, though not the development, of these weapons. 9 The UK's stated position towards fully autonomous weapon systems is that ‘the operation of our weapon systems will always be under human control and no UK weapons systems will be capable of attacking targets without this’. 10
The UK Ministry of Defence's position is as follows: ‘It is our view that we should embrace and welcome technological advancements that can support compliance with International Humanitarian Law, as in doing so we can offer greater protection to persons who are not, or who are no longer, participating in hostilities. A legally binding instrument which hampers the legitimate development and use of such technologies would be counterproductive’. 11
A plausibly universal ban has not yet been achieved within the United Nations framework via the Convention on Certain Conventional Weapons (CCW). Hence the derision heaped upon the High Contracting Parties to the CCW by non-profit organizations such as the Arms Control Association and Article 36, and the substantial political pressure applied via other organizations. 12 In a significant move on 3 August 2021, however, the International Committee of the Red Cross (ICRC) recommended that states adopt new, legally binding rules to regulate autonomous weapon systems to ensure that sufficient human control and judgement is retained in the use of force: ‘It is the ICRC's view that this will require prohibiting certain types of autonomous weapon systems and strictly regulating all others’. 13 Religious voices, notably the Pax Christi peace movement, have campaigned for well over a decade alongside the ‘Stop Killer Robots’ campaign for a pre-emptive ban, for a range of ethical, legal and security reasons. 14 The Vatican warns that the delegation of powers to autonomous systems ‘puts us on the path of negation, oblivion, and contempt for the essential characteristics unique to the human persons and to soldierly virtues’. 15 In November 2018, Archbishop Ivan Jurkovic called for a ban on the development of robotic weapons which have ‘the capacity of altering irreversibly the nature of warfare, becoming more detached from human agency, putting in question the humanity of our societies’. 16 On the other side, counter-arguments have been voiced by those who emphasize that potential adversaries are far advanced in their development of these technologies, that there is no such thing as a fully autonomous system because a human is always in the loop somewhere, and that the necessary and appropriate ethical focus is on the human role in authorizing and controlling lethal force.
Debate focuses on whether an outright ban is feasible and whether new legal rules are needed, or whether the weapons in question can be operated under existing IHL requirements on the grounds that what matters is accountability, and that the weapons-control system is already the means by which human control is maintained. 17
Christian people, and global citizens more generally, have three possible routes ahead: (i) lending support to political pressure for an outright ban on the development and use of lethal autonomous weapons systems (LAWS), in the hope that all countries of the world, including the major military powers, become signatories; (ii) lending support to political pressure for a legally binding international treaty, agreed by many countries of the world, including the major military powers, that sets out agreed parameters within which autonomous weapons can be developed and used; (iii) accepting that political stalemate leads to ‘a non-binding political statement or to a position of permanent stasis’. 18 How closely the practices of Christian pacifism and just war reasoning map on to options (i) and (ii) is an open question, but both will surely keep company for a good part of the way in striving for an outright ban. 19 Option (iii) would, I suggest, be unacceptably neglectful of the expectation upon all Christian people to be peacemakers and to be proactive in matters pertaining to weapons control; persistence is required (Lk. 18:1-8). 20
Pope Francis wrote in Fratelli Tutti of his concerns about the development of nuclear, chemical and biological weapons, and the enormous and growing possibilities offered by new technologies such that, in the words of Pope John XXIII, ‘it no longer makes sense to maintain that war is a fit instrument with which to repair the violation of justice’. 21
In this article, I do not consider nuclear, chemical and biological weapons, the inherently indiscriminate nature of which makes limited war more-or-less impossible, but concentrate on LAWS. At issue is whether options (i) and (ii) above are available to Christian people, or only option (i). The following questions are addressed:
Do new weapons technologies that include some level of automation make it prohibitively difficult to maintain the just war tradition (JWT) as a proposal for doing justice in the theatre of war? What about this weaponry is peculiarly unethical? Can/should the JWT evolve as jus in silico? If so, how? What is the ministry of the Church and the peculiar responsibilities of Christian ethics amidst these challenges?
At its broadest, the position developed is that the JWT in Christian perspective remains a proposal for doing justice in the theatre of war and is needed for ethical consideration of the prohibition and regulation of non-prohibited LAWS. Those working in the JWT should be under no illusions, however, that new weapons technologies could (or do already) represent threats to the doing of justice in the theatre of war. These threats include weapons systems that deliver indiscriminate, disproportionate or otherwise unjust outcomes, or that are operated within (quasi-)legal frameworks marked by accountability gaps. The temptation to abrogate (L. abrogare—repeal, evade) responsibility to the machine is also a moral threat to the doing of justice in the theatre of war.
Key Ethical and Legal Challenges Posed by New Technologies
To address the question of whether new weapons technologies that include some level of automation make it prohibitively difficult to maintain the JWT as a proposal for doing justice in the theatre of war, it is necessary to consider key ethical and legal challenges posed by new technologies. These challenges may be broadly categorized into three groups. First are the legally framed operational issues of the kind that might be addressed by Parties to the CCW for the lawful constraint of weapons capability. 22 Second are the global political issues pertaining to how AI-enhanced weapons systems are likely to lower barriers to entering conflict in ways that undermine the international rule of law, especially in an era of ‘persistent competition’ below the threshold of war. 23 Third are personnel-focused issues, that is, those bearing upon the experiences of the serving military. This article is part of a larger project in which I attempt to consider all of the above. In the next section I briefly consider definitional issues wherein anthropomorphism tends towards the abrogation of responsibility (i.e., anthropocentric terminology applied to technology serves to reduce or even eliminate the spaces necessary for human agency, perhaps even to disguise that elimination), before turning to the detachment of war fighting from human agency and related concerns about accountability.
Definitions of Autonomy
All ethical concerns in this area begin to some extent with definition(s), but debate is hindered by the lack of intergovernmental agreement on the definition of autonomous weapons systems. This matters because ethical issues are entailed in definitions. In 2017, the UK Ministry of Defence paper ‘Unmanned Aircraft Systems’ distinguished between automated and autonomous systems:
Automated system: In the unmanned aircraft context, an automated or automatic system is one that, in response to inputs from one or more sensors, is programmed to logically follow a predefined set of rules in order to provide an outcome. Knowing the set of rules under which it is operating means that its output is predictable.

Autonomous system: An autonomous system is capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be. 24
This distinction was described by the House of Lords Artificial Intelligence Committee report (HL Paper 100), ‘AI in the UK: Ready, Willing and Able?’, as ‘a relatively unusual distinction’. 25 The report further listed definitions of autonomous lethal weapon systems used by Austria, France, the Holy See, Italy, the Netherlands, Norway, Switzerland and the USA, and observed: ‘none would appear to set the bar as high as the UK. All of these definitions focus on the level of human involvement in supervision and target setting, and do not require “higher level intent and direction”, which could be taken to mean at least some level of sentience’. 26 In other words, the House of Lords Select Committee was critical of the UK's unusual definition and recommended that the UK definition of autonomous weapons be realigned to be the same as, or similar to, that used by the rest of the world: ‘Without agreed definitions we could easily find ourselves stumbling through a semantic haze into dangerous territory’. 27
With the UK’s definition being situated so far in the technological ‘future’ (to the point of perhaps being unrealizable, by the MoD’s own admission), statements such as ‘we have no plans to develop or acquire such weapons’ could appear progressive without actually applying any constraint on the UK’s ability to develop weapons systems with greater and greater autonomy. 28 Indeed, brief comparison with the definition of an autonomous weapons system adopted by the Holy See further makes the point. In a working paper submitted to the United Nations office at Geneva in April 2016, the Holy See defined an autonomous weapon system as ‘a weapon system capable of identifying, selecting and triggering action on a target without human supervision’. 29 No reference is made to ‘higher-level intent and direction’ but it is different again from that used, for instance, by Austria: ‘Autonomous weapons systems (AWS) are weapons that, in contrast to traditional inert arms, are capable of functioning with a lesser degree of human manipulation and control, or none at all’. 30
In 2020, the Oslo Manual on Select Topics of the Law of Armed Conflict: Rules and Commentary, authored by Yoram Dinstein and Arne Willy Dahl, offered this definition: ‘For the purposes of this Manual, an “autonomous” weapon system is a weapon system that is programmed to apply human-like reasoning to determine whether an object or person is a target, whether it should be attacked, and if so, how and when’. 31
Note here the language of ‘human-like reasoning’, that is, the supposition that machines have internal states that correspond to human reason. The ethical dangers attaching to anthropomorphism in the context of decision-making are considerable. Anthropomorphism which supposes ‘higher-level intent’ or ‘human-like reasoning’ in an ‘autonomous’ weapon system must be challenged, not least because it potentially undermines the CCW principle that accountability cannot be transferred to machines. It could also influence individual and societal attitudes toward machines which surpass human ability in computational problems, data search capacity, surveillance, the transformation of information into usable knowledge, and more.
For present purposes, I adopt the following EU definitions: ‘AI system’ means a system that is either software-based or embedded in hardware devices, and that displays behaviour simulating intelligence by, inter alia, collecting and processing data, analysing and interpreting its environment, and by taking action, with some degree of autonomy, to achieve specific goals; ‘Autonomous’ means an AI system that operates by interpreting certain input, and by using a set of predetermined instructions, without being limited to such instructions, despite the system's behaviour being constrained by and targeted at fulfilling the goal it was given and other relevant design choices made by its developer. 32
These EU definitions avoid anthropomorphic terms for machine-learning systems in which similarities to human cognition are ‘vastly overstated and narrowly construed’. 33 A primary concern, however, remains the anthropomorphism that tends towards the abrogation of responsibility to the machine, thereby raising questions about the meaning of ‘human agency’ and ‘accountability’.
Anthropomorphism
It is commonplace, and almost unavoidable these days, to describe everyday AI in terms of human capabilities and traits. Alan Turing's 1950 ‘imitation game’ (now called the Turing Test) assessed a machine's ability to exhibit intelligent behaviour by whether that behaviour was equivalent to, or indistinguishable from, a human's. 34 Today, owners of smartphones put questions to cloud-based voice services and hear a natural-sounding voice telling them the weather forecast or the traffic conditions ahead, or might watch the Boston Dynamics robots perform dance moves and exclaim with surprise at how a robot moves its ‘hands’ and ‘feet’. 35 Casual anthropomorphism of this kind is probably unavoidable and relatively harmless. Of concern here is not the understandable tendency to describe how a machine picks up an item or moves in the language of ‘hands’ and ‘feet’, but interactions with, and inferences from, computational programs that could leave users vulnerable to being steered in particular directions when making decisions. 36 The deeper worry is that describing machine-learning, simulation and computing infrastructures as AI employing mathematical systems that ‘act’ and ‘think’ like humans implies that these machines have minds which do the things humans do, only better. ‘Understanding AI through the lens of human mental features risks reducing it to a sort of replica of the human mind and leads to a flawed and ultimately limited ethical analysis of the issues AI raises’. 37 Anthropomorphism can tend toward the placing of blind faith in the accuracy of systems whose decision-making processes are not available to scrutiny (see the ‘black box’ problem below); the anthropomorphic language used of weapons systems risks masking important limitations intrinsic to machine-learning and making it easier to convince users to trust their capabilities. 38
Identifying appropriate descriptors for machine-human interaction in war fighting is a larger topic than can be tackled here. For the moment, my plea is for descriptors that are accurate and that resist the implication that neural-network training or ‘deep learning’ in weapons systems is a form of ‘human-like’ reasoning. Getting the language right is one step toward not abrogating responsibility to the machine, not creating false expectations about the kind of e-trust relationship that is appropriate with a machine, and not reducing ethical reasoning to probability calculus and risk management. 39 This is especially important as machines are programmed to make choices recognized as morally good rather than morally bad, and anthropomorphic descriptions of robots as morally good are becoming familiar. 40 For instance, Duncan MacIntosh, in his discussion of ‘fire and forget’ weapons systems, writes: ‘For, given the choice between control by a morally bad human who would kill someone undeserving of being killed and a morally good robot who would kill only someone deserving of being killed, we would pick the good robot’. 41 His contentious point is that compliance with IHL can and should be encoded. Indeed, MacIntosh goes further to argue that it is not always morally better for a human decision to be proximal to the application of force.
In the larger project of which this article is part, the challenge of why and how to denounce the reduction of ethics to technical practice (what Pope Francis calls the ‘technocratic paradigm’ 42) is taken up as part of a discussion of why and how ethical reasoning is not simply a matter of applying rules and calculating possible outcomes, as if reducible to an exercise in logic or mathematics, but concerns relationships with God and neighbour. That AI already outperforms humans in various tasks of description and prediction, and can be programmed to discriminate between objects, and so on, is potentially of tremendous benefit. But the machine-learned ability to acquire and apply knowledge in decision-making is not (yet) the same as having a human conscience and developing the relationships and character traits required more broadly of human decision-makers; the structure of the act of judgement, which includes consideration inter alia of end, object, means and circumstance, is about more than the tasks for which machines can be programmed. 43
As a first step, it is important to be clear that the so-called ‘black box’ problem, where weapons systems are designed such that unpredictability and non-availability to scrutiny are built in, is not the same as the problems of human decision-making, for example, a split-second decision about whether or not to fire taken under extreme pressure in urban warfare; the descriptor ‘human-like reasoning’ used of machine-learning processes can obscure the differences between these decision-making processes. Algorithms allow software to learn from patterns or features in the data. So-called neural-network training or ‘deep learning’ in weapons systems will have ‘learned’ from training data how to distinguish between a spade and a rifle, a military tank and a hay cart, or suchlike. The computer might be given 1,000, 100,000 or millions of pictures of spades and rifles, military tanks and hay carts, until it can identify the correct object with a high level of reliability. That a human's decision-making processes might be biased (for good or ill), or clouded by fear, anger, revenge, and so on, is a relevant consideration. But how ‘human-like’ the machine's process is remains open to question. So too is the machine's supposed ‘human-like’ capacity to deal with unpredictability, novelty in the real-world situation, anomaly in new data, and more. At the least we must say that the inaccessibility of decision-making routes to subsequent analysis, namely the impossibility of knowing, either before or after the event/output, how the machine processed the data to reach its decision, does not equate to being ‘human-like’.
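The training process just described can be sketched in miniature. The following toy example is purely illustrative (all data, feature values and class names are hypothetical; real weapons systems use deep neural networks trained on images, not two-number feature vectors): a simple classifier ‘learns’ to separate two object classes from labelled examples, and a novel input unlike anything in the training data still receives a label, with no account of how or why.

```python
# Illustrative sketch only: a toy classifier "learning" to separate two
# hypothetical object classes from labelled examples, as a stand-in for the
# large-scale neural-network training described in the text.

import random

random.seed(0)

# Each example: (feature vector, label). The two numbers stand in for crude
# image-derived measurements; labels 0 and 1 stand in for "spade"/"rifle".
def make_data(n=200):
    data = []
    for _ in range(n):
        if random.random() < 0.5:                       # class 0
            x, y = [random.gauss(2.0, 0.5), random.gauss(1.0, 0.5)], 0
        else:                                           # class 1
            x, y = [random.gauss(4.0, 0.5), random.gauss(3.0, 0.5)], 1
        data.append((x, y))
    return data

def train_perceptron(data, epochs=20, lr=0.1):
    """Learn a linear decision rule by correcting mistakes on labelled data."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred                              # update only on error
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def accuracy(w, b, data):
    correct = sum(
        (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y for x, y in data
    )
    return correct / len(data)

train = make_data(200)
test = make_data(100)
w, b = train_perceptron(train)
print(f"test accuracy: {accuracy(w, b, test):.2f}")

# A novel input unlike anything seen in training still receives a label,
# with no record of a reasoning process that could be scrutinized afterwards.
novel = [10.0, -5.0]
print("novel input classified as:", 1 if w[0] * novel[0] + w[1] * novel[1] + b > 0 else 0)
```

The point of the sketch is the asymmetry the paragraph identifies: the learned rule performs reliably on inputs resembling its training data, yet outputs a classification for any input whatsoever, without anything resembling deliberation that could later be examined.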
Detachment of War Fighting from Human Agency
If we understand the problem of agency broadly as losing sight of connections between agent, act and consequence, then the problem is not exclusive to the military; linear connections from agent to act to consequence are increasingly untraceable not only in AI but in many features of our era. 44 In the context of new weapons technologies, the problem of agency has peculiar force because of the demand before God for accountability for the taking of human life (Gen. 9:5-6). 45 Has a moral line been crossed when the machine processes information and ‘decides’ to fire? Are machine-executed acts non-agent ‘acts’ (necessarily) like trees falling in forests, rocks tumbling down hills, computers ‘acting’ in inscrutable ways detached from human control? Is this moral line such that Christian ethics removes itself from the debate to concentrate only on ‘ban killer robots’ campaigns? Is the reality more complex such that concentrating only on ‘ban killer robots’ campaigns would be a different type of derogation of responsibilities?
This cluster of problems faces us with a choice. Two potentially defensible routes are available: (1) the just war tradition is not capable of addressing issues raised by LAWS because the technology entails inherently unethical decisions and actions (mala in se); 46 (2) the just war tradition is capable of addressing issues raised by LAWS because of decision-making power with respect to choosing the lesser of evils (minima de malis) and should do so by striving for guiding principles and international regulation to govern the development and use of this technology. (No useful purpose is served by discussing whether the JWT is capable of evolving without either advocating an outright ban on LAWS or striving for regulation; future wars and armed conflict are inconceivable without this technology.)
The Stop Killer Robots campaign is clear that ‘we are crossing a moral line’, that ‘no one would be safe’, and that ‘humans may fade out of the decision-making loop in certain military actions’. 47 Also clear is the ICRC's concern about weapons whose effects, in ‘their normal or expected circumstances of use, could not be sufficiently understood, predicted and explained’. 48 ‘The user of an autonomous weapon system does not choose the specific target, nor the precise time or place that force is applied. This process risks the loss of human control over the use of force and it is the source of the humanitarian, legal, and ethical concerns’. 49 On the other side, at the nation-state level, counter-arguments have been voiced most clearly by the United States of America, notably in the U.S. Statement at CCW GGE Meeting: Intervention on Appropriate Levels of Human Judgment over the Use of Force, where the key issues identified were the controllability of the weapon system and appropriate levels of human judgement over the use of force. 50 The United States focused on the human role in authorizing lethal force and controlling it: ‘States will not develop and field weapons that they cannot control’. 51 Similarly, the notion of ‘meaningful human control’ is problematized by those for whom machine autonomy using neural-network training and ‘deep learning’ is understood to be under adequate human control. For McFarland and Galliott, for instance, human control is applied by militaries prior to firing. 52 The problem of ‘meaningful human control’ is met, they imply, by the human control exercised in the training, pre-testing, exercise of the precautionary principle, definition of very restrictive and pre-approved attack targets, and so on; the moral imperative is to engage technology in enacting accountability. ‘States will not develop and field weapons that they cannot control’. 53
Unpredictability has too often been practised as a political strategy for much hope to be placed in its utter undesirability for militaries. 54
Indeed, as Oliver O’Donovan has observed, a pressing concern over autonomous weapons is that these systems could be, or perhaps could later become, another example of the strategy of putatively removing responsibility from the actor. 55
Whether predicted and desired or unpredicted and undesired, the categorical unacceptability of unpredictability is why the idea of a simple ban on making or possessing such weapons has a prima facie moral attractiveness. Consider again the ‘fire, forget and find’ incident reported by the United Nations panel of experts on Libya with which we started: ‘On 27 March 2020, the Prime Minister, Faiez Serraj, announced the commencement of Operation PEACE STORM … The enhanced operational intelligence capability … allowed for the development of an asymmetrical war of attrition designed to degrade HAF ground unit capability … The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability’.
Libya descended into political chaos and violence following the overthrow of President Muammar Gaddafi a decade ago. ‘The country became divided between two rival administrations, with the internationally recognized Government of National Accord (GNA) based in the West, while the self-styled Libyan National Army (LNA) controlled large areas in the East’. 56 The ‘fire, forget and find’ capability was exercised by forces backed by the government based in Tripoli against Libyan National Army units under the command of Field Marshal Khalifa Haftar. The panel of experts on Libya reports that a drone, described as ‘a lethal autonomous weapons system’ powered by artificial intelligence, hunted down and remotely engaged enemy militia fighters as they ran away from rocket attacks. Much remains unclear, notably, as Zachary Kallenborn observes, ‘whether the drone was allowed to select its target autonomously and whether the drone, while acting autonomously, harmed anyone’. 57 Whether or not this was the first such incident, it appears that the weapon operated on software-based algorithms ‘taught’ through large training datasets to classify objects, and that it fired on targets ‘without data connectivity between the operator and the munition’ at the point of firing. 58 How predictably it selected targets is difficult to establish.
Consider further an incident reported by Jennifer Gibson, who works for Reprieve, a UK legal-action non-governmental organization, 59 namely the case of Faisal bin ali Jaber, whose brother-in-law and nephew were killed in a drone attack on the basis of suspicious behaviour inferred from signals intelligence run through algorithms. Faisal's brother-in-law, Saleem, was known for speaking out forcefully in his sermons against Al Qaeda, and his nephew, Waleed, was a policeman. Shortly after one sermon, three men demanded to talk with Saleem. Suspicious that they might be Al Qaeda, Saleem took the meeting outside, where he thought it would be safest. As Gibson says: ‘Algorithms, at their best, merely tell us about relationships. They don't tell us whether Faisal's brother-in-law is meeting the men because he's planning an attack with them, or instead, if the meeting is to explain why Al Qaeda's ideology is wrong’. 60
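Gibson's point can be illustrated in miniature. In the following hypothetical sketch (all names and ‘contacts’ are invented for illustration), a simple link analysis of metadata ranks a person by the number of connections to flagged contacts; nothing in the data records why any meeting took place:

```python
# Illustrative sketch only: metadata-style link analysis of the kind the
# text describes. All names and "contacts" here are hypothetical; the point
# is that the graph records who communicated, never why.

from collections import defaultdict

# Hypothetical signals metadata: (party, party) contact pairs, no content.
contacts = [
    ("person_A", "suspect_1"),
    ("person_A", "suspect_2"),
    ("person_A", "suspect_3"),
    ("person_B", "suspect_1"),
]

# Build an undirected graph of who has been in contact with whom.
graph = defaultdict(set)
for a, b in contacts:
    graph[a].add(b)
    graph[b].add(a)

# Degree centrality: how many distinct contacts each node has.
scores = {node: len(neighbours) for node, neighbours in graph.items()}
flagged = max(scores, key=scores.get)
print("most connected node:", flagged, "with", scores[flagged], "links")

# person_A tops the ranking simply by having met three flagged men; the
# data cannot distinguish plotting an attack from preaching against one.
```

The graph faithfully reports the existence and number of relationships, which is all the algorithm can see; the moral significance of the meetings lies entirely in their content, which is absent from the data.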
Here, then, is the challenge: whether to lend support to political pressure only for an outright ban, or also to a legally binding international treaty for the regulation of weaponry that remains outside a ban. If the latter, questions about whether and/or how the JWT should evolve as jus in silico become pressing, and two broad categories of question are in play. The first are quasi-technical and relatedly ethical, for example: Was the problem due to the use of algorithms per se, or to the quality of the social network analysis, with its emphasis on the existence of connections rather than their content? Could these problems have been addressed ahead of time? Is the challenge now to adjust from thinking about weapons as singular instruments, for example a sword or pistol, to larger systems in which humans and machines interact? Hence the ICRC asks:
What is the level of human supervision, including the ability to intervene and deactivate, that would be required during the operation of a weapon that can autonomously select and attack targets? What is the level of predictability and reliability that would be required, also taking into account the weapon's tasks and the environment of use? What other operational constraints would be required, notably on the weapon system's tasks, its targets, the environment in which it operates (e.g. populated or unpopulated area), the duration of its operation, and the scope of its movement? 61
Such questions about degrees of autonomy and what might constitute the application of responsibility become critical if the JWT is to further investigate what an outright ban might mean, what weapons would be outlawed, and why, and what kind of regulatory regime might be viable. If increased adherence to the principles that underpin existing IHL (namely, humanity, necessity, distinction, proportionality) were potentially achievable using these weapon systems, their deployment might be deemed morally preferable to other systems. If, however, levels of unpredictability and unreliability were accepted or programmed-in, in effect, as a means of terrorizing the enemy, their use might be deemed abominable. Levels of predictability and reliability, and the degrees of autonomy and of detachment from selection of the target, are of utmost moral significance.
The related, second set of questions follows from the quasi-technical and is overtly theologico-ethical: Has the JWT reached the end of the road because the detachment of war fighting from human agency is too great and robot-inflicted death is intrinsically evil (malum in se)? Or can/should it be accepted that no fully autonomous weapons system exists because there are always human beings somewhere, and that evil is to be minimized (minima de malis) in the context of a justified war by ensuring that weapons systems are held to account? Should LAWS using neural-network training and deep learning be regarded as inherently indiscriminate, because using network training to select targets is unavoidably unpredictable, 62 or as under meaningful human control and subject to human agency by virtue of the programming, testing and subsequent accountability mechanisms? How is such a decision to be made?
‘Put your sword back into its place; for all who take the sword will perish by the sword’ (Matt. 26:52 NRSV). Jesus’ words were interpreted by Justin Martyr, Tertullian, and many others in the early Church, to preclude fighting against enemies and military service for Christians, and these words demand attention again. My (implicit) suggestion in this article is a rereading of these verses in the context of new weapons systems, where a critical issue for weapons control is putting/not putting/not attempting to put decision-making and control beyond the reach of divine address. ‘The serpent deceived me, and I ate’ (Gen. 3:13). ‘The machine took its own decision and fired’. Is neural-network training or ‘deep learning’, in effect, an attempt to put decision-making beyond the reach of divine address? The divine command ‘Put your sword back into its place’ is meaningful only when spoken to an agent capable of obeying. Are increasing levels of autonomy in weapons systems somehow an attempt to escape appropriate lines of accountability from agent to act and vice versa?
In addressing these questions, there is no denying that the distancing of human agency from the weapon at the point of lethality is the peculiar evil in question; loss of control obscures the line of accountability that leads back from the means to the agent. Nor is there any denying that autonomy is one of the emerging technologies identified by nation-states as a ‘key contributor to driving economic growth and delivering wide-ranging benefits for society’, and is being recognized by nation-states as having many potential applications within a defence context. 63 Automated drones are relatively cheap to purchase for potential adversaries and ‘[a]t root, autonomy is just a matter of programming the weapon to fire under given conditions, however simple or complex’. 64 The Christian pacifist and some JWT reasoners might choose to disengage because the distancing of human agency from lethality renders this weaponry categorically different from other weaponry. Others walking in the JWT (myself included) will locate the problem of agency squarely with human beings, within a human chain of accountability, and concentrate on giving that accountability meaningful legal force. From this position, to desire an outright ban and to prepare for a legally binding international treaty that sets out agreed parameters within which such weapons can be developed and used are not mutually exclusive. But those who recognize the need to prepare for the unavailability of an outright ban are called upon to develop the criteriological function of the tradition with respect to new weapons technologies because such will be required for any serious regulatory or, indeed, prohibitive attempt.
Future Work
All this matters, of course, because weapons control and control over weapons determine not only the way war is fought but also sometimes whether war is fought. 65 Armed conflict that is less and less visible to electorates and safer for military personnel is potentially more acceptable to governments. The ‘liberal conscience’ seems to require Western states to fight in ever more bloodless and ‘humane’ ways, with knock-on consequences for the erosion of jus ad bellum restraints and, indeed, for maintaining the ethical distinction between war and peace. 66 Until such time as de-weaponization is achieved, however, we must ask whether the JWT is ready to develop new models of responsibility and accountability. Much of today's law of armed conflict and IHL has religious roots stretching back to the great twelfth-century Jewish rabbinic figure Maimonides, early medieval church lawyers, comparable legal literature of Islam, and beyond. Yet the recently renewed visibility of religion in public life, nationally and internationally, is yet to impact these debates. What, then, are the prospects for jus in silico?
Answers turn on whether new weapons technologies catastrophically undermine the JWT as a proposal for doing justice in the theatre of war. This in turn depends upon whether responsibility for actions performed with weapons, and the capacity to disarm them, is abrogated to the machine, and whether clear lines of accountability are in place for control over weapons. Working models of responsibility in deployment, and of accountability running from procurement and design through political decision-makers to shareholders and electorates, are beyond our scope here. But it is important that we begin to see the issues and ask the necessary questions. Much work lies ahead. For the moment, we note that the Guiding Principles affirmed by the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapon Systems, adopted by the 2019 Meeting of the High Contracting Parties to the Convention on Conventional Weapons, state: (b) Human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines. This should be considered across the entire life cycle of the weapons system.
‘Human responsibility’ appears to be used expansively to mean that a person is morally and professionally answerable for actions performed and for knowing the purposes they seek to achieve. The implication appears to be that they are potentially subject to punishment for actions that are unlawful or otherwise contrary to the obligations attaching to their role. ‘Accountability’ pertains to the processes and norms that hold a person legally liable. 67
What this principle means in practice is the critical point at issue. In the ‘fire, forget and find’ incident in Libya mentioned above, who should be held to account for determining whether the act was unlawful? The Guiding Principles also state: (a) International humanitarian law continues to apply fully to all weapons systems, including the potential development and use of lethal autonomous weapons systems. 68
As given, however, this principle does not test the proposition that existing IHL might not be sufficient to address new issues arising from the potential use of such weapons systems. 69 For instance, if LAWS using neural network training and deep learning are deemed to be at least potentially under meaningful human control and subject to human agency by virtue of the programming, testing and subsequent accountability mechanisms, should norms and standards be set for predictability and reliability in achieving discrimination and proportionality? What peculiar challenges pertain to interpretation of the principles of humanity and military necessity under such conditions? Who should be held to account in the event that an act is found unlawful: the automated drone, the military commander, the politician who set the policy direction, the company or other body which sold the drone, the developer, the electorate (if appropriate)?
Clearly, the actions for which each agent in this list might be individually accountable (i.e., attributable as its efficient cause or author) are different, and judicial processes would have to reflect these differences. Not all persons in the accountability chain will have fired the weapon. Not all will have charged its battery, set its course and loaded the ammunition. Not all will have signed off the requisite funding. Not all will have designed the control system on the weapon, its human interface, the algorithm which selects a particular target type or supplied the training data. Not all will have voted for the government that funded the initial research or invested in the company that prepared initial designs. All might somehow need to be included, however, in a full picture of what accountability for the (un)lawful taking of human life looks like in this situation—to which end, a discussion of imputation (L. imputare ‘enter in the account’, from in- ‘in, towards’ + putare ‘reckon’) is perhaps a next step for an integrated model of accountability wherein chains of accountability hold. In the meantime, the peril of imprecision with respect to the meaning and practice of accountability is that judicial processes can be sidelined and easily ignored, and the very possibility of doing justice in the theatre of war undermined.
Footnotes
I am grateful to Joseph Capizzi and Oliver O’Donovan for comments on an earlier draft—the former in the discussion of agency especially and the latter as cited in the text. Any errors remain my own. The position advanced is not necessarily theirs. Sincere thanks also to Anicée van Engeland for sustained conversation over many months.
