Abstract
Autonomous vehicle moral dilemmas matter less for the particular outcomes of potential accidents than for their role in defining the values of the society we wish to live in. Different approaches have been suggested to determine the ethical settings with which autonomous vehicles should be programmed and to identify the legitimate agents for making such decisions. Most of these, however, fail on theoretical grounds, facing severe issues related to moral justification and compliance with the law, or on practical grounds, being insufficiently universal, action-guiding, or technically viable to be implemented. The analogy with the “trolley problem” has been extensively discussed. However, researchers have rarely tried to adapt this framework to autonomous vehicle cases or to investigate how it could be used to address these issues. In doing so, this paper aims to answer the two key problems of autonomous vehicle dilemmas. With regard to the decision-maker, it rejects choice-based models for autonomous vehicle users, showing the absurdity of both the switch of control and adaptive preferences and arguing for common legislator-determined ethical settings. With regard to the decisions themselves, it criticizes both utilitarian views and those based on individual criteria to suggest a deontological rights-based approach. This allows for the defence of a morally coherent, regulation-compliant, explainable, and easily implementable framework capable of addressing all autonomous vehicle moral dilemma scenarios present in the literature.
Introduction
According to their proponents, autonomous vehicles (AVs) carry the promise of revolutionizing transportation due to their ability to move people and goods in a more inclusive, fluid (Vinitsky et al., 2018), ecological (Bertoncello and Wee, 2015), economical (Sivak and Schoettle, 2018), and safe manner. They also represent a tremendous business opportunity for car manufacturers and technology companies to drive mobility consumption behaviors toward a model dominated by shared mobility solutions (Gruel and Stanford, 2016). Public authorities have also shown interest in their potential to support economic growth (EC, 2018), reduce public spending (NHTSA), and free up public spaces currently used for parking (Lang et al., 2017). These expected benefits have convinced manufacturers to invest massively in AVs (Kerry and Karsten, 2017) and governments to accommodate national regulations to allow self-driving testing, sometimes at the price of leaving an unprecedented legal gap (Claybrook and Kildare, 2018). Although they may contribute to reducing the number of accidents, which are often said to be caused by human error (Singh, 2015), AVs will not realistically prevent all accidents. Thus, situations may arise where AVs have to arbitrate between different scenarios involving people's death or injury, for example when a brake fails near a pedestrian crossing. This calls for an investigation to determine how AVs should behave in such situations and who should legitimately decide.
This paper aims to contribute to the literature by identifying the legitimate authority for making such decisions and defining the ethical settings. It argues in favor of common ethical settings defined by national legislators and introduces a practical role-based approach, as opposed to characteristic-based frameworks, to address the issue. This role-based approach could be easily implemented to address AVs' moral dilemmas by weighting the interests at stake according to various levels of responsibility while strictly respecting individuals' rights. The first section addresses the question of the legitimate decision-maker, criticizing both the emergency control takeover and adaptive preference options to defend the necessity of common ethical settings defined by the legislator. The second section shows the limitations of characteristic-based approaches, rejecting Awad et al.'s (2018) individual criteria, including the utilitarian one, by demonstrating that they are technologically unviable and morally unfounded. Adapting Philippa Foot and Judith Thomson's original approaches to AV dilemmas, it finally suggests, in the third section, a practical role-based approach that can address any given situation discussed in the literature.
AVs require common ethical settings defined by legislators
When assessing a moral dilemma, the relevant moral agent must first be identified: an agent both capable of and legitimately placed for making the moral decision, and responsible for its consequences. The literature has identified three potential decision-makers: manufacturers, consumers, and legislators. Either because they are not considered legitimate actors for making such decisions (Contissa et al., 2017; Sandberg and Bradshaw-Martin, 2013) or because bestowing such responsibility on them may discourage them from developing AVs (Hevelke and Nida-Rümelin, 2015), a consensus excludes manufacturers from determining their products' ethical settings. The resulting debate questions whether AVs' ethical settings should be entirely defined by the law or whether AV users should enjoy some degree of choice, either before or during the journey.
The practical dangers of a switch of control
The first option consists of implementing the possibility of a switch of control, enabling the vehicle to automatically return control to the “driver” when facing a complex situation. This is the solution currently implemented in all semi-autonomous vehicles (SAVs), and transferring it to fully autonomous vehicles evidently seems appealing to the proponents of the “highest moral imperative” (HMI), who wish to roll out AVs on roads as soon as possible (Etienne, 2020). It would allow industrial actors to deploy self-driving cars without having to address their ethical issues, transferring control and responsibility back to the “drivers” in dilemmatic situations and asking them to make decisions just as they would with a non-AV (NAV).
While such an approach does not change the theoretical configuration of the moral decision (drivers remain the decision-makers), it makes little sense from a practical standpoint and is sub-optimal in terms of risk management. This is because it forgoes two major benefits of AVs over NAVs in complex situations: first, AVs' ability to coordinate their crashes to minimize the amount of harm produced (Gogoll and Müller, 2017); second, their capacity to rapidly alert other parties about the vehicle's maneuver, permitting them to react accordingly. In contrast, passengers would have to make difficult decisions they might be held responsible for while having less contextual information than the AV, a lower capacity to process it, and a lesser ability to act on it correctly. Dufour's (2017) experiment reveals that subjects asked by a driver-assistance system to take back control of a vehicle moving at 110 km/h to avoid an obstacle require a reaction time of up to 2.5 s in addition to a control takeover time of 4.5 to 6.0 s to correct the trajectory and stabilize the driving. As a result, 30% of the subjects in his experiment failed to avoid the obstacle, with several of them engaging the vehicle in a dangerous situation and sometimes driving it in the opposite direction to the one it should have taken.
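To give an order of magnitude, consider the distance covered during such a takeover. The short calculation below is illustrative and not taken from Dufour's study; it simply combines the figures just cited:

```python
# Back-of-the-envelope estimate: distance covered at 110 km/h during the
# reaction and takeover times reported by Dufour (2017).
speed_m_per_s = 110 * 1000 / 3600   # ~30.6 m/s
reaction_time_s = 2.5               # reaction to the takeover request
takeover_time_s = 6.0               # upper bound of the 4.5-6.0 s window

distance_m = speed_m_per_s * (reaction_time_s + takeover_time_s)
print(f"{distance_m:.0f} m")        # ~260 m traveled before control is regained
```

In other words, the vehicle travels roughly 260 meters before the “driver” is back in control, which makes the switch of control a poor safeguard against sudden obstacles.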
An alternative approach, adopted by Tesla, consists in interpreting the activation of semi-autonomous features as an acknowledgment from the driver that road conditions are appropriate for the safe usage of such options (Ramsey, 2015). The delegation of control, however, does not extinguish the obligation of drivers to stay alert and remain in control of the vehicle, an imperative that the company reemphasized following the death of Walter Huang while driving a Tesla Model X with autopilot mode activated (Levin, 2018). However, Dufour also observes significant hypovigilance after only 10 minutes of autonomous driving, reaching 4 points out of 9 on the Karolinska Sleepiness Scale and coinciding with a significant increase in the spectral power of alpha waves measured by electroencephalography. Consequently, we cannot realistically expect all passengers to have the appropriate knowledge to determine when conditions are adequate to enable the autopilot, nor to remain attentive to the traffic when evidence shows their attention drops significantly after a few minutes, leading to an overreliance on semi-autonomous devices.
Therefore, the switch of control is not only a sub-optimal option for risk management compared to what AVs could achieve; because AV passengers over-rely on the autopilot and thus have lower reaction capacities than NAV drivers, it may also increase the number of accidents. Not only is the option to switch control dangerous for SAVs, it should be considered even less of an option for AVs because it cannot be implemented for many of the uses that AVs are expected to serve (e.g. public transportation and ride-sharing services, or AVs carrying no passengers for goods deliveries) and clearly undermines the benefits AVs claim to offer (the possibility of engaging in a distracting activity). Finally, allowing AV passengers to switch control may not extinguish the manufacturer's responsibility. Consistent with Tesla's defensive argument, manufacturers could be required to sell cars only to drivers capable of using them and then to keep those drivers under watch. This may call for compulsory training when purchasing a vehicle and the constant surveillance of the driver's vigilance via an internal camera.
The illegitimacy of adaptive preferences
From a moral standpoint, assuming that the moral pluralism affecting the responses to AV moral dilemmas cannot be overcome and that programmers cannot legitimately make such decisions, Sandberg and Bradshaw-Martin (2013) conclude that the choice should be left to consumers. They suggest that manufacturers allow them to choose between two versions of AVs: one deontological and one consequentialist. While the consequentialist version could swerve to sacrifice one person instead of letting several die, the deontological one would never swerve, since swerving would be considered a deliberate act of killing. There are at least two objections to this solution.
First, it is too simplistic, as it restricts the decision factors to only two criteria (i.e. the number of victims and the action of swerving) that are not even morally relevant, as argued in section two. Second, it results in a confused situation where there is no clearly identified moral responsibility. Sandberg and Bradshaw-Martin deny manufacturers the legitimacy to determine such settings. It is, however, impossible for manufacturers to fully transfer this choice, as they can be held equally responsible for granting users too much freedom in the selection of ethical settings (Lin, 2014a) and for leaving them with too narrow a set of options. Whereas consumers cannot be held fully responsible for choosing between a narrow set of options, manufacturers still assume some responsibility deriving from the selection of available options. Furthermore, the authors refute the legitimacy of manufacturers in making such decisions but do not question that of the consumers. One is legitimate when making decisions for oneself; one is not, however, legitimate when making decisions for others, and what AV ethical settings determine is precisely the fate of others, pedestrians included.
Contissa et al. (2017) aim to overcome these limitations by proposing an “ethical knob” that permits passengers to choose among three modes when starting the AV: altruistic (preference for third parties), egoistic (preference for passengers), and impartial (no preference). The authors' argument can be summarized as follows:
Ethical settings can be either fixed and pre-programmed by AV manufacturers or adjustable and personalized by users. If they were fixed, market pressure would incentivize manufacturers to pre-program settings prioritizing passengers' safety in order to encourage consumers to buy AVs, resulting in a higher risk to pedestrians' lives. Adjustable settings, in contrast, may avoid this situation by allowing AV users to opt for the neutral or the altruistic mode.
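For illustration, such a knob amounts to a user-selected configuration parameter. The sketch below is purely hypothetical: the three mode names follow Contissa et al., but the weighting function and its values are assumptions introduced here for clarity, not anything the authors specify:

```python
from enum import Enum

class EthicalKnob(Enum):
    """Contissa et al.'s (2017) three user-selectable modes."""
    ALTRUISTIC = "preference for third parties"
    IMPARTIAL = "no preference"
    EGOISTIC = "preference for passengers"

def passenger_weight(mode: EthicalKnob) -> float:
    # Hypothetical weight of passengers' safety relative to third parties'
    # (1.0 = impartial); the numbers are illustrative only.
    return {EthicalKnob.ALTRUISTIC: 0.5,
            EthicalKnob.IMPARTIAL: 1.0,
            EthicalKnob.EGOISTIC: 2.0}[mode]

mode = EthicalKnob.EGOISTIC    # the mode a user would select when starting the AV
print(passenger_weight(mode))  # -> 2.0
```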
While it is quite audacious to assume governments would let the market dictate such ethical settings, Bonnefon et al. (2015) provide empirical evidence that most people would choose the egoistic mode. The decision transfer from manufacturers to users may thus not change much about the risks that pedestrians are exposed to. According to Contissa et al. (2017: 369), it would nevertheless allow users to legally justify their sacrifice by invoking the state of necessity, considering that passengers do face a direct danger, in contrast to the manufacturer, which “intervenes to save one or more persons.”
From a legal perspective, this argument may free AV passengers from charges when sacrificing people as long as the vehicle does not swerve. It would, however, neither allow the vehicle to change lanes to kill one person instead of several (Harris, 2020) nor allow it to swerve and kill pedestrians to save passengers from hitting a concrete barrier, as presented in the Moral Machine's (MM) scenarios (Awad et al., 2018). From a moral viewpoint, the distinction between swerving and continuing straight becomes completely irrelevant from the moment that the decision is taken beforehand, as argued in section two.
We shall acknowledge that Sandberg and Bradshaw-Martin's and Contissa et al.'s approaches imply a two-level decision. On the one hand, despite recognizing that manufacturers cannot legitimately define these settings, the authors allow them to do so by framing the set of available options. On the other hand, AV consumers make a second-order decision when selecting the ethical settings. In general, the more numerous the options, the freer the choice, and the more decision power is transferred from manufacturers to users. Nevertheless, even when consumers are not given any choice over the car's settings, they still decide whether to use an AV or not. From the moment they start the vehicle, they implicitly endorse its ethical settings.
Adaptive preference approaches should then be rejected. They introduce a system of shared moral decisions between AV manufacturers and users, whereas neither is fully legitimate to make such decisions, and no one's responsibility can be clearly engaged. It seems that only legislators may be legitimate enough to define these settings, assessing the interests of AV passengers, NAV passengers, and pedestrians in light of everyone's rights. From a practical perspective, while the most probable scenario involves traffic dominated by shared-mobility solutions, adaptive settings cannot be implemented in shared robotaxis or public transportation. There are, in contrast, concrete reasons to opt for common ethical settings, including the possibility of coordinating AV crashes to minimize the resulting harm and of providing other parties with clear signals to best react to critical situations. Once one acknowledges that legislators represent the only legitimate option for such decision-making, another question, which we do not discuss here, is which agency should be in charge of doing so, especially considering that industry regulators such as the NHTSA may not be independent enough for the task (Bauman and Youngblood, 2017).
AV moral dilemmas call for a rights-based, not a characteristics-based, approach
Following Contissa et al.'s suggestion that adaptive preferences could include a “larger set of choices that can vary over time, depending on age and number of passengers, life expectation and other factors” (2017: 377), Awad et al. (2018) extended the options to nine criteria. The MM approach assumes that the value of individuals' lives varies with their characteristics and suggests that the selected factors are relevant when arbitrating over whose life to save in case of unavoidable fatalities. Although some have raised criticisms (Etienne, 2020; Harris, 2020), the relevance of these criteria has not been refuted in detail, nor has the characteristic-based approach underlying them, which leaves the door open to the consideration of additional characteristics. This section aims to invalidate these criteria together with the whole approach they rest on and to seek a better framework for AV dilemmas using Foot and Thomson's original viewpoint.
Criteria-based approaches: Illegitimate, inconsistent, inapplicable
The nine factors selected by the MM authors are (a) sparing humans versus pets, (b) staying on course versus swerving, (c) sparing passengers versus pedestrians, (d) sparing more lives versus fewer lives, (e) sparing men versus women, (f) sparing the young versus the elderly, (g) sparing pedestrians who cross legally versus jaywalking, (h) sparing the fit versus the less fit, and (i) sparing those with higher social status versus those with lower social status.
Factors (e), (f), (h), and (i) are openly discriminatory and violate the second article of the 1948 Universal Declaration of Human Rights.
Factor (f) is often considered more acceptable, justified by the claim that younger people have had less time to enjoy life and have a greater expected lifetime in which to do so. This argument, however, obliges its supporters to prioritize the sacrifice of people with severe incurable diseases associated with low life expectancies (such as a 12-year-old girl with progeria) or to systematically save the baby over its mother in cases of difficult childbirth. Interestingly, the objection also allows for a reversal of the initial claim, as one could argue that it is precisely because someone's life is expected to be shorter that it is worth prioritizing. It is then impossible to make any moral arbitration between those who claim that someone diagnosed with terminal cancer has less to lose because they will soon pass away and those who argue that every day this person has left is worth more to them than it is to anyone else precisely because they know their end is near. Additionally, older people may be considered to have more to lose because they have achieved more things and accumulated more knowledge. Sacrificing them also has a greater chance of impacting other people's lives, such as those of their children or dependent relatives. Finally, there is great uncertainty regarding people's actual remaining lifetime, and it is impossible to compare individuals' capacity to enjoy life, as this is far too subjective a matter (a rich 80-year-old man finding extraordinary bliss in distributing his fortune may better enjoy his last years than a poor 15-year-old orphan destined for a long succession of tragedies). Rejecting age as a relevant criterion for death arbitration, we shall then stick to the “incommensurability of life” principle (Santoni de Sio, 2017).
Understanding the risk implied by such distinctions, the German Ethics Commission of the Federal Ministry of Transport and Digital Infrastructure recalled that “in the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited” (FMTDI, 2017: 7). It nevertheless concedes that “general programming to reduce the number of personal injuries may be justifiable.” Factor (d) is popular among commentators and has been developed by Jeffrey Gurney (2016). The so-called utilitarian principle nevertheless faces a well-known problem, namely whether it should minimize the number of deaths or the amount of harm. Since MM scenarios purport to offer a decision-making framework for situations that may actually occur, they cannot be addressed as pure thought experiments that leave aside the uncertainty of the outcome. A coherent utilitarian approach then requires additional information to be collected to weigh the degree of injuries by the uncertainty rates and to reframe dilemmas as follows: “Should the AV drive over three people, with a 50% chance of breaking the first one's legs, 80% of killing the second, and 50% of plunging the third into a coma, or over three other people, with a 90% probability of making the first one quadriplegic, 40% of killing the second, and 70% of making the third one blind?” The moral assessment of this equation is not only practically impossible because of the lack of relevant information; it is also methodologically fallacious: probabilities are relevant when drawing general conclusions about large populations or recurring events, not when predicting a particular outcome in exceptional situations associated with rare events, which is precisely what AV dilemmas are expected to be.
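To see the informational burden such a reframing implies, consider what the computation would actually require. In the sketch below, every severity weight and probability is an assumption made up for illustration; it is precisely this kind of data that, as argued above, an AV cannot reliably obtain:

```python
# Illustrative only: severity weights (0 = unharmed, 1 = death) and the
# probabilities are assumptions of exactly the kind an AV cannot have.
SEVERITY = {"broken legs": 0.3, "blindness": 0.5, "coma": 0.8,
            "quadriplegia": 0.9, "death": 1.0}

option_1 = [(0.5, "broken legs"), (0.8, "death"), (0.5, "coma")]
option_2 = [(0.9, "quadriplegia"), (0.4, "death"), (0.7, "blindness")]

def expected_harm(outcomes):
    """Probability-weighted sum of severities for one trajectory."""
    return sum(p * SEVERITY[injury] for p, injury in outcomes)

print(expected_harm(option_1), expected_harm(option_2))  # 1.35 vs 1.56
```

The tidy numbers conceal rather than solve the problem: each severity weight smuggles in a contestable moral judgment, which the next objection makes explicit.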
Even if possible and coherent, these functions would remain useless for moral comparisons because individuals do not relate to injury in a universal and linear way. The subjective appreciation of injuries also refutes Rawlsian constructivist approaches (Leben, 2017), because the calculations people may produce under a veil of ignorance when trying to fairly allocate harm would be nothing other than projections of their own perception of injuries. Therefore, in addition to previous objections (Keeling, 2017), it should be noted that no unique distribution of the severity of harm can be associated with a fair consideration of agents' interests.
Finally, another critique of such a utilitarian approach relates to its narrow conception of the people affected, which counts only those present on the road and ignores all those, such as dependent relatives, whose lives the accident would also upend.
Lastly, factors (d), (e), (f), (h), and (i) often result in inconsistent combinations: those willing to spare people with a greater life expectancy may have to systematically spare women over men, while those willing to minimize the amount of harm may have to sacrifice young people, who are expected to have better chances of surviving or avoiding the shock. Furthermore, they are unviable from a technical standpoint: how could an algorithm identify pedestrians' age, gender, and social status when it cannot recognize a woman and her bike (Kiley, 2018)? In contrast, factors (a), (c), and (g) are associated with the agents' roles rather than their personal characteristics; they are both technically detectable and, as argued below, morally relevant.
A rights-based approach grounded in Foot and Thomson's foundational work
Foot (1967) outlines the original version of the trolley problem when investigating two close cases: that of a tram driver whose brakes have failed and who can only steer from a track where five workmen stand onto a track where one workman stands, and that of a judge who could have an innocent man executed to prevent rioters from killing five hostages. Most people accept that the driver may turn the tram but reject the judge's framing of the innocent. Foot explains this asymmetry through the distinction between negative duties (not to harm others) and positive duties (to assist them), the former being more stringent than the latter: the driver chooses between two infringements of negative duties, whereas the judge would infringe a negative duty to satisfy a positive one.
Building upon Foot's rights approach, Thomson introduces three distinctions relevant to our problem. First, she establishes variations in responsibility, and thus in legitimacy to act, between potential decision-makers: it is not because agents can act that they are equally allowed to do so. By Foot's principles, she says, the driver (Edward), who faces a conflict of negative duties (killing five or killing one), may turn the trolley, whereas a mere bystander, who would be trading his positive duty to save five against his negative duty not to kill one, would not enjoy the same permission.
Second, to Foot's variation of rights and duties in types, Thomson adds a distinction in degrees based on contextual factors determined before a dilemma has occurred. The situation does not arise in a vacuum: prior agreements, and the ways agents came to occupy their positions, may raise or lower their right to be spared.
Third, Thomson distinguishes between redirecting an existing threat and introducing a new one, prompting her to enunciate a principle of “distributive exemption”: one may permissibly deflect onto a few a pre-existing threat that would otherwise harm many, but one may not create a new threat against the few to save the many.
We now have a complete framework to address AV dilemmas, including an assessment of the agent's position (whose options may be impacted by their degree of responsibility), a distribution of rights by type and degree among all parties involved, and the distributive exemption principle. We still need to clarify which scenario AVs fall into. The MM authors explicitly root their scenarios in the trolley problem tradition.
In these scenarios, every agent is recognized as having a role defined by a position (passenger, driver, track 1, and track 2) and a degree of responsibility (defined as the share of causal influence that they have in the dilemma's occurrence). The possibility everyone has of ending up in any “position” of the dilemma justifies their right to take part in the collective definition of the rules governing such situations.
In Foot and Thomson's scenarios, the threat is produced by the trolley, whose brakes fail, and for which the driver is responsible: that is why, whatever he does, he infringes on people's negative rights. Were he able to turn onto a third track and sacrifice himself, both philosophers would certainly support this. Although not directly responsible for the breakdown, nor intending to produce this situation, he remains the trolley's captain and hence has a lesser right to be spared than the others. However, in doing so, he would spare the track workers at the expense of the passengers, who would die in the derailing. Toward whom does he have stronger duties? The driver could have made agreements beforehand with either the workers or the passengers to assure them of protection in such situations. With AVs, manufacturers cannot make such agreements with all potential pedestrians, but they can with all AV passengers. Whereas manufacturers and passengers would not have the legitimacy to decide to always protect passengers instead of pedestrians, they could, however, legitimately agree for passengers to lower their right to be spared in exchange for the AVs' benefits. Finally, jaywalkers should be considered to have lesser rights than all others. Just like the schoolboy, they are directly responsible for the dilemma's occurrence, and although they do not introduce the instrument of harm (the AV), they introduce the conditions under which this object unavoidably becomes harmful. This results in the following theoretical distribution of responsibility: jaywalkers (strong) > passengers (weak) > lawful pedestrians (none).
Addressing AV dilemmas using a role-based approach grounded in partial responsibilities
The role-based approach suggested here to address AV moral dilemmas is based on a unique principle: the “cascade of partial responsibilities.” It consists in distributing responsibilities according to the degree to which agents' informed decisions causally contributed to the dilemma's genesis, and in proportionally lowering the negative rights of the different groups of agents capable of receiving the harm. It can address all situations discussed in the present literature through the following three-step questioning, summarized in the sketch at the end of this section.
Is there one strongly responsible category of agents that can be sacrificed?
Among all the categories of agents, if one can be identified as directly responsible for the occurrence of the dilemma, and if it is possible to direct the harm against it to spare the others, then AVs should be operated to spare the other groups over this one. This is justified by the fact that these agents' strong responsibility in causing the conditions of the dilemma grants them lesser rights to be spared than the others. Although not intentionally responsible, they are causally responsible, and this cause can be associated with a free action resulting from an informed decision that agents make while fully aware of its potential risks, making them accountable for it. Jaywalkers should then not be spared by AVs at the expense of other agents, regardless of any other criteria. As the only category of agents that had the opportunity to make a decision (crossing illegally, knowing that it is forbidden and may cause dangerous situations), they should be primarily responsible for its consequences. It is important to understand that they are not “sacrificed” because they are “breaking the law” (which would be a disproportionate sanction that AVs cannot legitimately impose, and it is unquestionable that AVs should avoid them if possible) but because they have lesser rights to be spared than the others. In fact, several countries do not recognize jaywalking as illegal, but all acknowledge the danger of doing so. Therefore, the normative value of the law matters less here than its universality: since everyone knows that jaywalking is dangerous, these agents are considered to have made an informed decision. It is about assuming the consequences of a risk one agreed to take, not about being punished for a law one happened to break in a given country.
One may challenge this position based on traditional “marginal cases” arguments: children or visually impaired individuals would not have the same awareness of the danger of crossing illegally. In the first case, the lower responsibility of children is usually balanced by the enhanced responsibility of their parents and educators, who are in charge of keeping them safe and under appropriate supervision so that other actors do not have to risk their lives when children behave irresponsibly. In the second case, the computer vision algorithms developed for AVs could be used to build smart glasses that would let blind people know when they have stepped onto the road. The strength of marginal cases arguments is that one can always identify a small category of individuals whose situation does not fit the general scheme. What matters is not to rebuild the whole system from their viewpoint but to make sure that their interests are considered. The deployment of AVs should encourage us to rethink mobilities and increase awareness and responsibility, for instance by strengthening children's supervision in school areas and developing new tools for visually impaired people, rather than making exceptions for these cases. First, because it is impossible in practice: AVs cannot detect pedestrians' ages or distinguish between a child and a short person, or between a blind person and someone walking their dog. Second, because such an exception would open the door to other exceptions, such as drunk pedestrians. In diluting the agency of strongly responsible actors (in the sense of minimizing their capacity to make responsible decisions), we end up diluting others' right to protect their lives.
Furthermore, “strong responsibility” should prevail not only when allocating harm between AV and NAV passengers or pedestrians but also when AVs transport only animals or are empty. Animals' rights are traditionally approached through either direct or indirect moral obligations. Regarding the former, even Singer (1975) considers that the equal consideration of interests does not lead to an equal consideration of lives, prompting us to always spare humans' lives over animals'. This justifies sacrificing wild animals on roads even though they cannot be held responsible for ignoring the law. Regarding the latter, we shall acknowledge that sacrificing AV-transported animals to avoid harming a jaywalker hurts not only the animals' interests but also their owners'. In many countries, domestic animals are legally considered property, and the protection of both people's lives and their property is central in contractarian philosophers' thought, still conceived as a fundamental duty of democratic states toward their citizens. Our problem thus requires balancing jaywalkers' negative right not to have their lives endangered against AV owners' negative right not to have their property destroyed, thus answering the question: “to what extent may the fact that someone intentionally broke the law affect the incommensurability of the human life principle?”
Imagine an autonomous horse truck in good condition lawfully driving Green Monkey (an American Thoroughbred racehorse sold for $16 million in 2006) to a competition when a jaywalking woman suddenly appears on the road. The AV can either drive over her or perform an emergency avoidance maneuver that would capsize the truck and kill its occupant. What should it do? Based on the sole trade value of the racehorse (regardless of its affective value and of the impact of its death on the horseman's career), we may question how fair it would be for legislators to require AVs to always crash in such circumstances, whatever they carry, to avoid endangering someone deliberately breaking the law. Here again, Thomson's insights can help. In another thought experiment (1971), she imagines you wake up in a hospital bed plugged into a famous violinist. The doctor explains that this person suffers from a fatal kidney ailment and that his only chance of survival is to remain plugged for nine months into someone with a rare blood type, which only you have; this is why the members of the Society of Music Lovers decided to kidnap you and plug you into him. Dragged into this situation by force, you must decide between spending nine months in the hospital bed to save the violinist and leaving now and letting him die. While it would be extremely nice of you to stay, Thomson concludes that you have no moral obligation to do so. From this, if we have no moral obligation to sacrifice nine months of our life to save an innocent person when we bear no responsibility for their injury, it is not inconsistent to think we have no greater obligation to lose $16 million to save someone who deliberately put themselves in a dangerous situation. Furthermore, while many people may refuse to sacrifice nine months of their life to save the violinist, they would accept doing so if offered $16 million (around ¼ of my students would do the former and ¾ of them the latter). While the incommensurability of life's value should dismiss individual criteria and utilitarian principles when assessing the question of harm allocation, the fact of deliberately breaking the law may, however, suffice to extinguish it. Culpability transforms negative rights into positive rights (to be spared from the threat we caused), and no principle states that someone's positive right to protect their life should always prevail over someone else's negative right to protect their property.
If not, is there one weakly responsible category of agents that can be sacrificed?
When none of the categories of agents can be identified as directly responsible for the occurrence of the dilemma, but one is indirectly responsible for it and it is possible to direct the harm against it to spare the others, the AV should aim to spare all other groups over this one. This is justified by the fact that its members' weak responsibility in the conditions of the dilemma grants them lesser rights to be spared than the others. When an AV's broken brakes require choosing between driving over lawful pedestrians and swerving toward a concrete barrier, resulting in the passengers' death, the latter option should be preferred. AV passengers should be considered as having lesser rights to be spared than lawful pedestrians for at least three reasons.
First, they are the only category of agents with whom manufacturers can practically (when they buy or enter the car) and legitimately (they can trade their own rights but not pedestrians') make agreements. Prioritizing pedestrians over AV passengers appears to be the only practical way to establish an ethical framework of harm allocation that allows for the collection of consent and is based on people's responsible decisions. Climbing into an AV is then associated with a freely consented agreement not to be primarily spared in case of an unavoidable accident. Of course, no passenger agrees to die when entering the car, just as Thomson's workers did not sign up to die on the trolley track, but, unlike lawful pedestrians, they chose to take on the risk.
Second, when passengers agree to lower their rights, they do so in exchange for the AVs' advantages, which they manifestly value. It would certainly be unfair to allow a novel technology only to the advantage of a few people and the disadvantage of all others. Admittedly, those who choose to use AVs and enjoy their benefits should at least accept a greater share of their associated dangers. In addition, by supporting the deployment of AVs, the community of AV users shares a part of the trolley driver's responsibility, being in some way at the origin of the threat. It follows that what they agree to lower is not a negative right but a positive right to be rescued from a threat they are partially responsible for, as against the negative rights of all other parties.
Third, and more specifically when the occurrence of the dilemma derives from an issue with the AV itself (e.g. the brakes' failure), passengers and manufacturers share a certain responsibility in the failure. If anyone, who would be most responsible: the manufacturer, in charge of the car's conception and safety processes; the passenger, in charge of the car's maintenance or of selecting the rental company (which may vary in reputation); or the pedestrian, who just happens to be there? Of course, the responsibility should be shared between manufacturer and passengers, yet only the passengers can pay the cost of this shared responsibility in such situations.
In contrast, the only argument provided so far in favor of relieving AV passengers of the burden of sacrificial priority is that it would discourage manufacturers and buyers (Bonnefon et al., 2016; Hevelke and Nida-Rümelin, 2015; Shariff et al., 2017) and thus delay the adoption of AVs, which is associated with a common good. This argument has been extensively refuted on the grounds that it largely conflates business rationale with ethical principles (Etienne, 2021b; Lin, 2013).
What if no responsible category of agents can be sacrificed?
In this last situation, it is not possible to direct the harm toward a category of agents identified as either directly or indirectly responsible for the occurrence of the dilemma in order to spare the others. This is typically the case when the AV's brakes fail and the passengers cannot be sacrificed: the AV can either drive over pedestrians legally engaged on the crosswalk or swerve to hit those on the other side. Here, the AV should keep straight, regardless of whoever happens to be in front of it.
In contrast with Foot and Thomson's thought experiments, it has been argued that AV cases cannot avoid considering the uncertainty of the consequences and the degrees of injury, undermining the relevance of utilitarian approaches. The fairer alternative then consists of randomization. This possibility was discussed by Lin (2014b), who raises three objections against it. First, one of the main reasons for creating AVs is to make better decisions than humans, and a random decision mimics human driving insofar as it is associated with an absence of deliberate choice. Second, being premeditated, AVs' random decisions would not benefit from the same indulgence as human decisions. Third, randomization would evade responsibility and lead to thoughtless decisions. Writing in 2014, Lin's objective was to reject the cheap tricks that manufacturers might use to dismiss ethical dilemmas and avoid responsibility. Since then, the MM has radically changed the debate, and the danger has shifted from avoiding this discussion to allowing discriminatory criteria to be set up and replacing collective decision-making with consensus algorithms such as the Voting-Based System. Moreover, Lin does not say that randomization cannot be relevant in some scenarios but that it is a wrong general principle for addressing all scenarios. In my approach, randomization is only preferred in one specific case, and it is not motivated by the wish to avoid AVs' moral challenges; instead, it ensures the respect of individual rights, justified by the incommensurable value of human life.
When arbitrating between two groups of lawful pedestrians, the moral choice at the applied level is between hitting one group or the other. At the normative level, it is about whether to make a decision based on irrelevant information (which is different from imperfect information). The random choice is blind to individual characteristics, which are considered morally irrelevant. Unlike risk-based approaches (e.g. Geisslinger et al., 2021; Kauppinen, 2020), fundamentally grounded in the consequentialist assumption that the distribution of harm should be minimized (whatever this might mean), randomization respects individuals' rights, which are equally inviolable and ordinal (positive rights rank below negative rights, but two people's negative rights are not superior to one person's). People do not have lesser rights when they get older or when they happen to be on the less occupied side of the road. In sum, the random choice is anything but a “thoughtless” decision without justification or responsibility. It is not an absence of decision but the decision to refuse to make an immoral choice based on morally irrelevant criteria. It rejects the assumption that, in the absence of better information, poor proxies are better than nothing for making moral decisions, and an empirical experiment confirmed that most participants support this position. It replaces a desperate attempt to save a few people's lives in hypothetical situations, associated with events expected to be rare and at the cost of illegitimate discrimination, with a meaningful decision and a powerful signal to strengthen individuals' rights and to equally distribute the chances of not being hit by the AV.
While the vehicle infringes one of the two groups' negative rights anyway, random choice also avoids infringing their negative right not to be unfairly targeted by legislators. States may be recognized as having a positive duty to prevent citizens from dying whenever possible. This, however, is surely less stringent than their negative duty to refrain from trading citizens' rights based on illegitimate criteria. It would also be wrong to believe that discrimination remains at a potential level until it is actualized, so that nobody's rights would be infringed before the first accident. Consider Lea, an old woman who may one day be driven over by a broken AV to save a young boy because the law decided her life was worth less than his: her right to equal treatment is infringed from the moment such a rule is enacted, whether or not the accident ever occurs.
Finally, despite being the only morally valuable principle for addressing this type of scenario, random choice is subject to a strong practical limitation. As a vector of uncertainty, the possibility of swerving does not allow agents to make the best decisions to protect themselves. Seeing the AV driving in her direction, Lea may try to avoid it by jumping to the other side of the crosswalk. It would then be ridiculously tragic if the vehicle suddenly swerved, as the result of the random choice, and hit her on the other side. Random choice then conflicts with the manufacturers' very first duty, which is to implement all the relevant signals alerting agents of the danger when the brake failure is detected and to allow them to react as well as possible by minimizing the uncertainty surrounding the car's trajectory. Additional practical reasons supporting the preference for keeping straight include avoiding the risk of loss of control, increasing pedestrians' chances of survival by hitting them frontally instead of on the side, and encouraging the improvement of emergency braking systems to reduce the impact speed (Davnall, 2020). A “keep straight” rule then seems to make the most sense for practical reasons, but would it not be opposed to our moral argument on the merits of randomization? It would not. While decreasing the uncertainty surrounding the car's trajectory, the “keep straight” rule loses none of the random choice's benefits, infringing no rights at the normative level. The only difference is that the group to be hit is designated by its position on the road rather than by a random draw, and this position is just as arbitrary with respect to any morally relevant criterion as the outcome of a lottery.
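Bringing the three steps together, the cascade of partial responsibilities can be summarized as a simple decision procedure. The sketch below is schematic rather than an implementation: it assumes the AV can reliably classify each group's role, which is itself a strong assumption, and all names and types are illustrative:

```python
import random
from dataclasses import dataclass

@dataclass
class Group:
    """A category of agents toward whom the harm could be directed."""
    name: str
    responsibility: str    # "strong" (e.g. jaywalkers), "weak" (e.g. AV passengers), or "none"
    in_path: bool = False  # whether the group lies on the AV's current trajectory

def choose_harm_target(groups: list[Group]) -> Group:
    # Step 1: a strongly responsible group bears the harm first;
    # Step 2: failing that, a weakly responsible group comes next.
    for level in ("strong", "weak"):
        candidates = [g for g in groups if g.responsibility == level]
        if candidates:
            return candidates[0]
    # Step 3: nobody is responsible. Normatively this is a random choice among
    # groups with equal rights; in practice the "keep straight" rule selects
    # whichever group happens to lie ahead, an equally arbitrary criterion.
    ahead = [g for g in groups if g.in_path]
    return ahead[0] if ahead else random.choice(groups)

# Example: broken brakes, jaywalkers on one side, lawful pedestrians ahead.
target = choose_harm_target([Group("jaywalkers", "strong"),
                             Group("lawful pedestrians", "none", in_path=True)])
print(target.name)  # -> jaywalkers
```

The point of the sketch is the ordering, not the code: personal characteristics never appear among the inputs, which is precisely what distinguishes the role-based approach from characteristic-based ones.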
Conclusion
Debates on AV dilemmas matter less for the particular outcomes that potential accidents may produce than for defining the values of the society we wish to live in. This paper hopes to contribute to this discussion in three ways: by demonstrating the moral illegitimacy and practical absurdity of the switch of control and of adaptive preferences, thus supporting common ethical settings determined by legislators; by presenting a comprehensive critique of criteria-based approaches and calling for a change of perspective, which can be found by adapting Foot and Thomson's original works; and by proposing a three-step deontological framework allocating harm proportionally to agents' degrees of responsibility. This framework claims three main advantages over other approaches. First, it is compliant with existing regulations and forecloses the possibility of illegitimate discrimination. On the contrary, it reaffirms the highest respect for individuals' dignity by securing their rights. Second, its directives are clear and simple, allowing anyone to anticipate the vehicle's trajectory; they are easily implementable in any AV without requiring additional innovation and can satisfy all the scenarios discussed in the literature. Third, by giving priority to pedestrians, it incentivizes manufacturers to improve AVs' safety to convince consumers rather than to develop rickety moral arguments to justify saving passengers for business reasons. HMI proponents wrongly believe that putting the risk on the passengers will delay AVs' adoption. Adoption may more assuredly be delayed by the absence of a coherent moral framework to address these dilemmas, one which seems possible only by treating passengers' choice to be driven by an AV as consent to the risks it entails.
Finally, AVs represent a paradigmatic shift in the way we conceive of the circulation of people and goods. Acknowledging this calls for more than just deploying AVs in a “not-too-dangerous way.” We should seize this opportunity to rethink mobilities in such a way as to make AVs not only more efficient but also safer and more inclusive, if only for visually impaired pedestrians. We have emphasized here agents' rights and responsibilities, associating jaywalking with “strong responsibility” as the best proxy we have for deliberate decisions in such thought experiments. We shall, however, acknowledge that the notion does not have the same meaning in every country and that some urban infrastructures prioritize automobile traffic to the detriment of residents, often reflecting socioeconomic injustices and sometimes even leaving pedestrians no choice but to jaywalk to navigate their environment. In addition, I have argued elsewhere that the only way to “solve” a moral dilemma is to prevent it from happening (Etienne, 2021a). As a result, if we want to live in a society with AVs, their deployment should be seen as an opportunity to change our environments so as to minimize the possibility that such dilemmas occur and to set up new rules that are fairer and more widely accepted, so that it is easier for everybody to make informed decisions and be held responsible for them. When deploying new technologies such as AVs, it matters that we change our environment not only to accommodate AVs efficiently but also to create solutions to the problems they will bring about.
