Abstract
Technological developments enable modern cars to drive autonomously. The EU has embraced this phenomenon in the hope that such technology can alleviate mobility and environmental problems and has therefore engaged in tailoring technical solutions to driving automation in Europe. But driving automation, like other uses of AI, raises novel legal issues, including in criminal law – for instance when such vehicles malfunction and cause serious harm. By only pushing for a technological standard for self-driving cars, are EU lawmakers missing necessary regulatory aspects? In this article, we argue that criminal law ought to be reflected in EU strategy and offer a proposal to fill the current gap, suggesting an approach to allocate criminal liability when humans put AI systems in the driver’s seat.
Introduction
Over the last few decades, we have witnessed a digital transformation that has brought many benefits accompanied by multiple risks. 1 Driving automation provides a very good example. 2 More and more motor vehicles with driver-assistance systems or self-driving capabilities travel our public roads. In the past, autonomous vehicles were the stuff of science fiction; now, self-driving cars are a reality. 3 Self-driving cars offer new opportunities to facilitate the mobility of disadvantaged persons (e.g., the disabled, the elderly, and persons too young to drive) and could help address the lack of public transportation in rural areas. Furthermore, interest in driving automation is propelled by the hope that assisted- and self-driving cars will make our streets safer in the long run. After all, AI systems’ attention spans are not as limited as ours, they have no appetite for wine and beer, and they are expected to make fewer mistakes (at least in standard situations). 4 Although cars equipped with driver-assistance systems and self-driving cars can do much good and greatly increase transport safety and societal welfare, 5 one cannot disregard the possibility that those vehicles may cause not only property damage but also serious injury, including loss of human life. 6
Against this background, the EU has developed a digital strategy to make digital transformation benefit the people living in the EU; it contains a number of actions and initiatives, including some touching upon driving automation, such as new type-approval requirements for advanced driver-assistance systems and automated vehicles, 7 a proposal to regulate AI systems, 8 and two proposals 9 to adapt civil liability rules to the digital age. 10 Criminal liability, however, does not yet play a role in that line-up of initiatives. To date, EU policy documents suggest that the EU’s appetite for creating criminal prohibitions is rather limited. 11
The EU’s cautious approach to imposing criminal liability related to AI systems comes as no surprise. On the one hand, criminal law is seen as a last resort in both the Member States and the EU, and on the other hand, the EU can only use criminal enforcement mechanisms under specific circumstances. Indeed, since the 19th century, European liberal criminal-law thinking has been based on the principle of individual guilt, which necessarily revolves around the actions of humans. The potential criminal liability of non-human actors such as robots, as well as the potential criminal liability of those humans who created and/or used AI that has caused damage or injury or otherwise ‘committed’ a criminal offence, has only recently become part of the criminal justice debate in Europe. 12 Anchored in the principle of individual guilt, several countries (such as Germany) still reject the idea that corporations can be held criminally liable. Recently, however, we have seen inroads into the principle of individual guilt: some jurisdictions now entertain the concept of robot culpability, 13 while others have adopted legislation assigning criminal liability in human-robot interactions. 14
One of the key reflections in this context is whether the risks related to the increasing deployment of AI-enabled driving systems can be resolved by a technological approach (i.e., ‘legality by design’) or if they instead require a regulatory approach that would include new criminal laws. If we opt for a regulatory approach, the next step will be to decide if such regulation should be adopted at national or EU level. More and more vehicles operate on European roads with varying degrees of automation; to address that fact, the EU has already embarked on the path toward legality by design via harmonised technical standards for driving automation. In this article, we argue that the EU should complement those technical standards with rules addressing criminal liability. To this end, Sections II and III set the scene for our main analysis; we present the existing EU-level rules, both technical standards and the current liability regime, applicable to driving automation. Section IV points to the current gap in criminal liability and focuses on those cases where traditional criminal law doctrine does not offer adequate answers so that the intervention of the lawmaker is necessary. Section V will then examine the desirability of a harmonised EU approach to criminal liability connected to driving automation and the respective competence of the EU to establish minimum rules in this area. Section VI closes with our outlook on future developments if the EU Member States go down the road suggested in Section V.
Regulating driving automation in the EU: State of the art
Driving automation entails a number of technologies ranging from existing conventional (i.e., human-operated) cars equipped with advanced driver-assistance systems to fully automated driverless vehicles, which remain, for the moment, an aspiration. According to the classification system – generally regarded as the industry standard – developed by the Society of Automotive Engineers (SAE), there are six levels of automation a car may offer. 15
At Level 0, the human driver remains in complete control of the motor vehicle and is in charge of operating all of its driving functions (no driving automation technology at all). Level 1, i.e., the lowest level of automation, offers some driver assistance. The motor vehicle provides a single driver-support system that offers steering or braking/acceleration support (only one task at a time), such as adaptive cruise control, lane-centering assistance, or lane-following assistance. Level 2 provides a more advanced driver-assistance system that can take over both steering and acceleration/braking in specific circumstances (partial driving automation). Neither Level 1 nor Level 2 systems replace the human driver; the human driver must remain alert and is obliged to actively supervise the automated support features at all times and intervene immediately if the environment or the system (warnings) demand it. We refer to Level 1 and Level 2 of automation as ‘Assisted Driving Systems’.
At Level 3, vehicles are able to drive themselves in certain circumstances (such as traffic jams) in the sense that longitudinal (braking and acceleration) and lateral (steering) dynamic driving tasks are automated; the human driver does not need to supervise the technology. 16 The driver must nevertheless be ready to take control when the vehicle notifies the driver to do so, particularly in the event of an emergency due to system failure (conditional driving automation). Level 4, often referred to as ‘high driving automation’, does not rely on any human interaction, as the vehicle is able to carry out all driving tasks and is programmed to stop itself in the event of system failure. Level 4 systems only work in limited circumstances (e.g., within certain geographic boundaries, or in certain weather conditions) and cannot operate unless all necessary conditions are met. This advanced technology can apply, for instance, in driverless taxis and public transportation services. Finally, Level 5 entails the highest level of automation: vehicles are able to operate by themselves in all conditions, without any need for human intervention once the vehicle is in operation and has been given its destination. We refer to Level 3, Level 4, and Level 5 as ‘Automated Driving Systems’. 17
EU harmonised rules on technical requirements
Vehicles equipped with Assisted Driving Systems have been sharing European roads for quite a while – as a matter of fact, many driver-support features are now standard on most new cars. 18 Because the car industry continues its rapid advances in vehicle autonomy, some Member States already require testing – based on Member States’ national law – for Level 3 and Level 4 Automated Driving Systems. In 2019, in light of this, the EU adopted Regulation 2019/2144, 19 which amended the EU’s Vehicle General Safety Regulation 20 to include a legal framework for different types of driving automation. 21 The New Vehicle General Safety Regulation has developed rules for a harmonised type-approval procedure that sets out uniform, obligatory standards for Assisted Driving Systems 22 and Automated Driving Systems of Level 3 23 and Level 4 such as urban shuttles or robotaxis. 24 At the time of writing, the EU has not yet put in place a regulatory framework for Automated Driving Systems of Level 5. Although the EU seems to follow the six-level classification introduced by the SAE, it employs the term ‘autonomous’ to refer to Automated Driving Systems. More specifically, Regulation 2019/2144 refers to Automated Driving Systems of Level 3 and Level 4 – as defined above – as ‘automated vehicles’ and ‘fully automated vehicles’ respectively. 25 Therefore, for the purpose of this article the term ‘automated vehicles’ refers to Level 3 Automated Driving Systems, while ‘fully automated vehicles’ refers to Level 4 Automated Driving Systems.
Interestingly, the EU’s legal framework – established through the New Vehicle General Safety Regulation – addresses two distinct elements with respect to driving automation (without formally separating them in the relevant documents): one of the two elements sets out the obligatory technical requirements that driving automation systems must meet, while the other sets out the roles and obligations of the various actors involved in designing, developing, and deploying such systems in the common market (e.g., of the manufacturer, 26 the importer, the distributor).
At the time of writing, the EU still requires some degree of cooperation between the vehicle’s human driver and its driving automation systems, with the exact amount of such cooperation depending on the vehicle’s specific AI systems. In such circumstances, attributing criminal liability for damage or injury caused while the vehicle is operating has been complex. At Level 3, for example, automated-driving functions can only be activated under specific conditions (i.e., only on motorways on which pedestrians and bicyclists are prohibited, only at speeds up to 130 km/h); the human driver can override the system if he or she deems it necessary, and the system can alert the driver to immediately retake control of the vehicle at any moment. 27 Therefore, the human driver must always remain sufficiently vigilant to respond to a transition demand, the vehicle’s warnings, and mechanical failure 28 and take up the fallback position. 29 With respect to Level 4 automation, the current EU framework only permits a limited number of fully automated vehicles (1,500 vehicles per model per year 30 ) in certain use situations (e.g., shuttles on designated roads). Although fully automated vehicles are able to cope with any situations within their operating parameters 31 – including taking up the fallback position without human intervention – the system must request human takeover if its operating parameters have reached their outer limits (e.g., if it exits the predefined area in which it is specifically designed to function).
The current EU legal framework directly related to driving-automation systems is, however, likely to change once the Commission’s Proposal for a Regulation laying down harmonised rules on artificial intelligence (‘Draft AI Act’) is adopted. 32 At present the Draft AI Act is still being negotiated; on 6 December 2022, the Council adopted its common position on the draft, amending the Commission’s proposal (‘Council’s General Approach’), 33 and on 14 June 2023, the European Parliament adopted its negotiating position. 34 What is already clear at this stage, however, is that driving-automation systems qualify as high-risk systems 35 for the purposes of the Draft AI Act (i.e., AI systems that pose significant risks to the health and safety or fundamental rights of persons 36 ). In relation to high-risk systems, the Draft AI Act contemplates, inter alia, creating a set of horizontal mandatory requirements that such AI systems must fulfil, as well as clarifying the allocation of responsibilities and roles to the various actors. Although the Draft AI Act is not specifically tailored to driving automation, such mandatory requirements could affect the legal framework on driving automation.
As drafted, Article 6 of the Draft AI Act captures driving automation systems as they qualify as high-risk systems; 37 they should therefore theoretically fall under the scope of the Draft AI Act. However, because the Commission decided to adopt a sectoral approach in its proposal in order to avoid overlaps and duplications, it excludes driving automation systems from its scope of application, 38 but simultaneously requires the EU legislature to take into account requirements set out for high-risk AI systems when establishing the prerequisites for type approval for Assisted and Automated Driving Systems of Levels 3 and 4. 39 That being said, if the Draft AI Act is adopted, further measures are expected 40 in order to align applicable sectoral legislation with the AI Act to the extent such sectoral legislation is inconsistent with such requirements. 41 Among those requirements, an obligation to incorporate ‘human oversight’ is of particular significance for Automated Driving Systems. Article 14 of the Draft AI Act states that ‘High-risk AI systems shall be designed and developed in such a way, including appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use’, which is apparently intended to prevent harmful AI outcomes by inserting a human operator ‘in the loop’ to monitor the AI system’s operation and intervene if necessary. How that concept of human oversight can be implemented in the field of driving automation and how it can be reconciled with the increased level of autonomy of fully automated vehicles remains to be seen.
Fragmented EU rules on driving-automation-related liability
While the EU has harmonised rules and technical requirements for driving automation based on AI systems with the New Vehicle General Safety Regulation – with further rules expected to come if the Draft AI Act is adopted – rules on liability related to driving automation are rather scarce and fragmented. Indeed, driving-automation-related liability is addressed through various existing instruments at EU level, while the Commission has tabled two proposals to adjust the liability regime to AI systems.
To the extent the EU legal framework addresses who bears ultimate responsibility for the safety of an Assisted or Automated Driving System, it focuses mainly on the system’s manufacturer. 42 By and large, manufacturer obligations tend to arise either before or after the system is marketed to the public. Before a manufacturer can legitimately put an Assisted or Automated Driving System on the market, it must first obtain type approval, which obliges the manufacturer inter alia to demonstrate that its system meets current scientific and technical standards and that the system’s safety for road use has been adequately tested. 43 The responsibility of the manufacturer does not cease when the system is put on the market; the manufacturer remains responsible for the ultimate safety of the system and its continued compliance with the technical standards over its lifetime. To this end, the manufacturer is obliged, among other things, (a) to collect vehicle data to monitor and analyse safety-relevant incidents/accidents caused by its driving-automation systems and subsequently report them to the competent authorities; (b) to manage potential safety-related gaps and update affected vehicles if necessary to remedy such gaps; and (c), if an Assisted or Automated Driving System already on the market does not conform to the EU framework, or was granted its type approval on the basis of incorrect data, or presents a serious risk to the health or safety of persons or to other aspects of the protection of public interests covered by the relevant EU framework, to immediately take necessary corrective measures (e.g., bring the system into conformity with then-current standards, withdraw it from the marketplace, issue warnings, recall defective systems for repair, or take other appropriate action).
When it comes to liability, the New Vehicle General Safety Regulation includes the well-established EU law-enforcement clause which requires Member States to provide effective, proportionate, and dissuasive penalties for infringements by economic operators 44 and technical services 45 of the said regulation. 46 The EU therefore currently leaves it to the Member States to decide whether infringement of the rules stipulated in the New Vehicle General Safety Regulation is sanctioned via administrative or criminal law.
While criminal liability rules and common EU provisions on criminal liability for offences connected to driving automation are outside the frame of the New Vehicle General Safety Regulation, EU law contains common provisions on civil liability. Under the Product Liability Directive (PLD), any natural person who suffers damage caused by a defective product should be entitled to compensation. 47 However, as the Commission itself has admitted, the applicability of the PLD to driving-automation products is questionable and, therefore, it is unlikely that a victim of an accident caused by, for example, an automated vehicle could take legal action against the vehicle’s manufacturer under the current regime if there is a defect in, or malfunction of, a driving-automation system. 48 To remove any uncertainty, the Commission tabled a proposal to revise the PLD 49 with a view to confirming that AI systems and AI-enabled goods fall within the PLD’s scope. 50 To complete the civil liability aspects of regulating AI-enabled systems, the Commission simultaneously tabled a second proposal for a Directive on adapting non-contractual civil-liability rules to artificial intelligence. 51 Unlike the PLD, which only covers a producer’s no-fault liability for defective products, the second proposed Directive is intended to harmonise certain rules for cases in which damage claims arise out of wrongful behaviour. Nevertheless, it is uncertain whether said proposed directive, if adopted, would apply to driving-automation systems, as Article 1(3) of the Commission’s Proposal provides that ‘This Directive shall not affect rules of Union law regulating conditions of liability in the field of transport’ (emphasis added).
In sum, EU policy in the area of driving automation is built on laying down technical standards that Assisted and Automated Driving Systems should comply with as well as establishing rules of civil liability, while the area of criminal liability remains – almost – untouched. The Draft AI Act, if adopted, will not change this situation as it does not contain rules on liability.
The silence of the EU legislature on criminal liability is somewhat surprising, given that Assisted and Automated Driving Systems, like all AI systems, are neither error- nor fool-proof, nor are the individuals who develop and deploy them in the marketplace, nor are the human drivers interacting with them. Accidents resulting in property damage, injuries, or even the loss of human life have happened, and will continue to happen. To address this lacuna, we must first ask whether criminal liability should form any part of the regulatory framework addressing the use of driving-automation systems and, if so, how that might be shaped. Once we consider those questions, we can discuss whether the resolution to those questions requires a harmonised EU approach.
Criminal liability gap in the context of driving automation
Criminal law is geared to humans, not to AI systems or human-AI interactions. In the logic of traditional criminal law, AI systems cannot be subject to criminal prosecution or punishment. The reasons are manifold: While a driving assistant may be able to learn, to read a traffic sign, and to make decisions about how to react when it encounters one, it is unlikely that it can meaningfully decide to comply with or violate the law 52 since AI systems cannot grasp the concept of having rights and obligations as a participant in public traffic. 53 We cannot hold them ‘personally’ responsible for any harm they may cause and we have no means to inflict pain on them. Thus, when further developments in driving automation allow human drivers to hand over more and more driving responsibilities to vehicles, chances are that victims of traffic accidents will less often have a criminal case to bring. Phrased differently: A responsibility gap naturally opens when driving assistants operate vehicles – with the risk of them causing harm to humans. 54 Leaving criminal liability unregulated in the context of the creation and operation of AI systems means that the new situation (shift of liability from human to AI; grey area of human-AI interaction) is left largely to the courts, which must resolve these new challenges within criminal law.
Before moving forward, the question arises as to the ambit of a future initiative for regulating criminal liability related to driving automation and the automation levels it should cover. To put it differently: in which cases does a liability gap arise that would, in turn, make intervention by lawmakers necessary? In this context, human-AI interaction and the ability, or rather necessity, of a human driver to retake control of the vehicle are key features. 55 The greater the control assumed by the driving system, the more difficult the attribution of criminal liability to the human driver will be. However, as described earlier, the boundaries in allocating the tasks between the driving systems and the human driver as well as the moment when control is handed back to the human driver might not always be that clear. When it comes to Assisted Driving Systems, the situation seems to be quite clear: the human sitting in the driver’s seat in the vehicle should supervise the driving tasks executed by the system, monitor the environment, and intervene immediately when necessary – if the environment or the system through warnings so require – and thus remain in constant active control of the vehicle.
On the other end of the spectrum, at Level 4 it seems that the automated system has full control of the vehicle as it takes all decisions, including to shut itself down in the event of system failure. By contrast, at Level 3 the coexistence of human driver and automaton is not that straightforward. 56 Although the driver may need to retake control of the vehicle, this would happen only if the system demands it (transition demand). That said, one can easily anticipate that cases will occur where the attribution of guilt will be far from clear. If the system – due to a system malfunction – failed to alert the driver to take control of the vehicle, thus causing an accident, who is to blame for such an incident? On another note, who is responsible for an accident if the system does notify the driver of the need to take control of the car manually but the driver does not manage to respond in time? It follows that the attribution of liability can be complicated in the context of Automated Driving Systems, and it will become all the more complex in light of the imminent adoption of the Draft AI Act, which will require human oversight, thus reinforcing the role of human drivers even in cases of Level 3 and Level 4 Automated Driving Systems. In these cases, it seems lawmakers will need to intervene to fill a gap that will not easily be bridged using traditional criminal-law doctrine.
In principle, there are two opposing positions on addressing the emerging gap in criminal liability that mark a wide spectrum of options: On the one hand, one can adopt the view that a certain amount of risk-taking is socially acceptable, for instance when – as in the case of driving automation – it is accompanied by the hope of a long-term benefit of greater traffic safety when AI and not humans drive a car. On the other hand, one could aim to fill the emerging liability gap with criminal provisions targeting those humans or corporations that create, use, or oversee AI systems. While the first approach cuts back criminal law as a regulatory scheme for public traffic, the second path, of increasing accountability, seems to fit better in the overall EU strategy of managing risk by a clear attribution of liability.
When regulating driving-automation-related criminal liability, the lawmaker must balance, on the one hand, the interest of citizens who could become victims of an automated-driving system’s malfunction in receiving redress for the harm they have suffered and, on the other hand, the reasonable claim of society at large for technological progress including the benefits that AI systems, and in particular driving automation, can provide. From a criminal justice viewpoint, the key question for the lawmaker is to decide who is to be held criminally liable for a harmful result where many actors have been involved and undertook measures to prevent the risks emerging from Automated Driving Systems.
The wide array of problems in regulating criminal liability includes aspects related to the demarcation of responsibility, the possible attribution of guilt (based on causality) and its proof, as well as a clear definition of duties of care for humans handing over the steering wheel to vehicles.
Criminal liability for intentional wrongdoing
Criminal liability for intentional wrongdoing when employing or designing Automated Driving Systems seems to be a rather clear case for criminal prosecution: If anyone designs or uses a self-driving car as a weapon or to commit crimes, that person must be held liable. If a human intentionally or knowingly programmes a robot so that it causes harm to a person, the programmer’s criminal liability can easily be established on the basis of traditional concepts of attribution and mens rea: The programmer commits the criminal act by using the robot – irrespective of its artificial intelligence – as a tool for carrying out the programmer’s intention, and does so with the requisite intent or knowledge. 57
Criminal liability for negligence
A liability gap emerges notably in the area of negligence, as criminal liability for crimes connected to Automated Driving Systems committed with negligence, even recklessness, is difficult to capture based on traditional criminal law doctrine. Indeed, domestic approaches differ considerably when it comes to excluding an attribution of guilt in a situation where, for a particular reason, a chain of events leading to a harm no longer appears to be the ‘work’ of the original actor, be it because a third actor intervenes (‘novus actus interveniens’) or because the harm at issue is the result of a ‘normal’ risk of daily life; or when it comes to limiting criminal liability by reducing the duty of care.
Novus actus interveniens
Criminal law theory typically does not consider an actor who causes a harmful result responsible if the careless conduct and the resulting harm are not linked in a way that sustains an attribution of responsibility. One reason for excluding liability in this situation could be the primary attribution of the harmful result to the autonomous act of another – be it a person (such as the car manufacturer, an individual engineer, or even the victim), an AI system as a novel actor, or ‘chance’ (which is another word for the normal risks inherent in living in our dangerous world).
This is a central issue when different actors cooperate or when a third person autonomously interferes with a causal chain of events and affects the causal link in such a way that a harmful result no longer appears to be the ‘work’ of one (original) actor. The idea applies in a situation where an assigned driver hands over the steering wheel to another or where the injured victim of a traffic accident – while in the ambulance on the way to the hospital – is ultimately killed by someone else who does not give way to the ambulance. Can that death still be attributed to the driver who caused the first accident? In the context of driving automation, one could argue that no causal link can be established for fault in designing or training driving-automation systems if the human driver subsequently is also at fault when harm is inflicted, for instance if, upon a request from the system, the driver did not take control of the vehicle but instead was engaged in secondary activities (e.g., talking on the phone, watching videos) despite an obligation to be vigilant to system alerts. What matters here is the actual driver’s role when using AI systems designed for Level 3 and Level 4 Automated Driving Systems. 58 Countries that have adopted relevant laws appear to retain the default of driver responsibility. 59 This is the reason experts see the risk of human drivers becoming a ‘legal crumple zone’ 60 for vehicles driving in automated mode, i.e., assigning overall responsibility for risks that materialise and thus misattributing harm to a human actor who had limited control over the vehicle’s behaviour. Also, the limits of attribution in cases where a third person intervenes are far from clear. 61 According to German doctrine, for instance, there is no absolute protection of a merely negligent first actor 62 from attribution of guilt, even if the injury was actually brought about by a third person. 63
For a lawmaker laying out a strategy for driving automation, a clear understanding of who is responsible for what is important. If citizens observe a car with no human in the driver’s seat suddenly swerve onto a sidewalk where an elderly person stands, they will see this as the action of the car. The question thus might subsequently arise whether there is a need to have specific legislation in place for those who created or used such a vehicle, and whether these persons ought to be included in the chain of attribution or not. As long as vehicles cannot be held criminally responsible, the victim (and society) may face a responsibility gap. Such a responsibility vacuum might cause a precipitous drop in support for robotic inventions. These considerations counsel against generally absolving persons who create or employ AI systems of responsibility for harm caused by the robot.
Socially accepted risk
Another idea feeding into the theory of attribution which could impact criminal liability in the context of driving automation is ‘socially accepted risk’. 64 According to the idea of a socially accepted risk, a person is not criminally responsible if the harm at issue is the result of a ‘normal’ risk of daily life; in such situations the victim is expected to bear the harm without redress. This idea has been developed for various situations in our risk-taking society. 65
When Automated Driving Systems become part of everyday travel and thus the ‘normal’ risks of life, as other AI systems have already (e.g., internet search engines), the creators who comply with the state of the art 66 and the users who comply with relevant rules will not be criminally liable for generally foreseeable malfunctioning but only for harm incurred due to preventable construction, programming, or operating errors. 67
Establishing duties of care
Even before this time arrives, however, one might choose to limit criminal liability of operators by defining their duty of care, as is already the case in criminal law in the area of product liability. 68
But now many new issues arise: 69 Do we need to depart from the conventional tools of product liability, because a new concept of foreseeability and preventability is required? Could we argue that manufacturers take a substantial and unjustifiable risk of causing an injury when they decide to put on public roads a car that makes its own decisions? 70
In many civil law jurisdictions, such as German law, an actor is liable for criminal negligence if causation of a relevant harm (e.g., someone’s death or injury) can be attributed to the actor, if the actor could have foreseen the harm, and if the actor failed to exercise the due care necessary to avert the foreseeable harm. 71 The general idea behind responsibility for negligence is the actor’s failure to pay sufficient attention to the harm he or she may cause to others. Liability is imposed if the actor could reasonably have prevented the harm in question. Where even a diligent person could not have anticipated the harm, there is no wrongdoing and hence no punishability.
But are the standards of due attention and due care applicable to humans who interact with AI systems which make decisions their creators and operators cannot foresee in detail? A self-driving car, for example, must interact with an unpredictable and only partly ‘smart’ environment. Depending on its built-in learning mode, the AI system will use all stored and incoming information to establish working predictions regarding its dynamic environment. Operators know that AI systems will independently analyse the information they acquire and that they will act autonomously in response to the results of their analyses.
This means that the operator cannot reduce to zero the possibility that AI systems may cause harm to others. This fact suggests two mutually exclusive conclusions as to the operator’s liability for negligence. It could be argued that the operator cannot be held responsible because the machine is acting ‘on its own’; alternatively, it could be claimed that any and all harm that AI systems might cause is foreseeable and the operator therefore should face de facto strict liability for the results of the acts of AI systems.
The first line of argument is unconvincing. The fact that AI systems are generally unpredictable cannot relieve their operators of liability because it is their very unpredictability that gives rise to duties of care. Likewise, if the manager of a zoo releases a tiger from its cage and the tiger kills people on the street, the zoo manager could not successfully argue that tigers are wild animals and therefore cannot be controlled. Since we have seen that robots cannot be held criminally liable, generally exempting their operators from liability would mean, in effect, that no one would be held criminally responsible for the death of a random victim of an errant driverless car. 72 Therefore, people who can foresee that their actions might harm interests protected by criminal law (such as the life and health of other persons) are obliged to refrain from those actions. 73 If the zoo manager can foresee that the tiger, if set free, will harm human beings, he must therefore refrain from releasing the tiger from its cage. The same applies to potentially dangerous products: If the producer of a car could, with appropriate diligence, know that the vehicle’s brakes are unreliable in bad weather, the producer violates the duty of care by marketing the car nevertheless.
New linchpins for potential criminal liability of the various actors involved in the chain of driving automation
In the area of driving automation, the EU has already provided a body of law that sufficiently outlines the obligations of actors involved in designing, developing, and deploying Automated Driving Systems in the common market. On the one hand, the EU has recognised a number of ex-ante and ex-post responsibilities of the manufacturer of an Automated Driving System for maintaining its safety during the lifetime of the system as outlined in Section II above. On the other hand, the role of the human operator in monitoring and overseeing the operation of automated vehicles is expected to be amplified once the Draft AI Act is adopted. New standards of due attention and due care connected to the creation or use of Automated Driving Systems could be built upon these obligations and could feed into standards establishing future criminal liability.
Obligations of car manufacturers
According to the EU framework elaborated in Section II, manufacturers must test their Automated Driving Systems extensively before putting them on the market. This obligation can be relevant for criminal liability: If manufacturers fail to adhere to the necessary standards, they may be criminally liable for any harm caused by the product and may be convicted of intentional (if aware of the risk) or negligent bodily injury or homicide.
After putting the automated vehicle on the market, manufacturers bear a number of obligations. Among them is the essential rule that they must closely observe and monitor Automated Driving Systems and react immediately to reports of harmful conduct. If, for example, a newly introduced automated vehicle for unknown reasons has incidents of malfunctioning, the manufacturer will have to examine possible causes. If these incidents cannot be explained by improper handling or interference by third parties, and if the problem cannot be resolved by re-programming the car, the manufacturer will have to take the car off the market. These duties can lead to criminal prosecution if a product causes harm: If the manufacturer fails to comply with the obligation to monitor its vehicles and these harm humans, it can be prosecuted for negligent or even intentional bodily injury or homicide committed by omission. 74
The reason for criminal product liability is not the unlawful creation of a risk but the mere fact that the manufacturer, in pursuing economic interests, lawfully creates a risk for the general public by releasing an AI system whose reactions cannot be safely predicted and controlled. The unique feature of this criminal liability concept is the fact that a perfectly legal act – the marketing of an automated or fully automated vehicle in accordance with the current state of knowledge and technology – may trigger criminal liability for omissions. It may be difficult for the car industry to accept this broad ambit of liability. But victims of accidents caused by malfunctioning Automated Driving Systems would find it equally difficult to accept a situation in which, in the absence of a driver, no one is held responsible for the damage caused.
One should indeed beware of giving profit-seeking operators carte blanche for taking inappropriate risks to the life and health of other persons. Yet there may be good reasons for limiting the manufacturer’s liability with regard to Level 3 and Level 4 Automated Driving Systems. To the extent that their introduction and promotion is beneficial to society, the risk inherent in marketing a car that cannot be completely controlled needs to be set off against its benefits to society. Whereas it is true that manufacturers of Automated Driving Systems of Level 3 and Level 4 create risks to the life and health of others, one should not forget that the same is true for the manufacturing and sale of traditional, person-driven cars.
Rather, manufacturers who comply with the strict standards envisioned under the EU framework could be deemed to have fulfilled their duty of care, even though they (along with everyone else) know that certain risks remain. The EU legislature knows it, even as it welcomes driving automation, trusting that the use of Level 3 and Level 4 Automated Driving Systems will lead to an overall reduction of accidents and provide the elderly and disabled with equal access to the advantages of personal mobility. It would seem unfair not to limit the (criminal) responsibility of the creators and users of such vehicles for causing harm.
As is well known, criminal product liability raises a multitude of evidentiary issues, and these will grow rather than diminish in the area of driving automation, where vehicles rely on multiple AI systems and possibly function in a network of vehicles in a model of connected driving. The European Commission has addressed some of these issues in its Draft AI Act, for instance by establishing a duty to record certain data when using AI systems. 75
Obligations of users (Human Drivers)
European law also sets out obligations for human drivers: In relation to Level 3 Automated Driving Systems, human drivers shall remain sufficiently vigilant to acknowledge transition demands and vehicle warnings, mechanical failure, or emergency vehicles. When it comes to Level 3 and Level 4 Automated Driving Systems, the Draft AI Act requires the natural persons to whom human oversight is assigned: to be enabled to understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation; to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available; to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override, or reverse the output of the high-risk AI system; to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or similar procedure.
If a human driver fails to comply with the duties that arise from using AI systems according to their design and their limits, and for instance does not respond to a takeover request when necessary and the vehicle causes harm to humans, the driver can be prosecuted for negligent or even intentional bodily injury or homicide.
Can the EU adopt harmonised rules for criminal liability connected to driving automation?
The desirability of a harmonised EU approach to criminal liability for crimes connected to Automated Driving Systems seems obvious. First, such systems have been developed and used on a cross-border basis: driving automation relies on data from different Member States; companies from various Member States provide technology and services in connection with designing, developing, and deploying such systems in the common market (as product manufacturer, provider, distributor, authorised representative, 76 etc.); the human users of these systems may reside in different Member States; cars constantly cross borders in Europe. Indeed, the cross-border nature of these systems, among other things, led the EU to adopt uniform EU rules on technical standards for Automated Driving Systems – otherwise, these systems could hardly function. 77 This necessarily implies that crimes related to Automated Driving Systems must also manifest a cross-border dimension, which would make it difficult for the Member States, acting alone, to tackle them. Therefore, a common stand on criminal liability in situations when a human chooses to turn over operational control to the car is expected to add much value.
Furthermore, as discussed in greater detail in Section II, cars, including automated and fully automated vehicles, are type-approved for the EU market. The requirements that Automated Driving Systems must meet before entering the internal market, the obligations of manufacturers before and after they put them on the market, and the human-AI interaction necessary for automated driving are all detailed by common market rules with, inter alia, a view to ensuring the proper functioning of the internal market in the field of driving automation. 78 Disparities in sanction regimes with regard to violation of the relevant EU instruments and a lack of uniform implementation across Member States could undermine the internal market’s level playing field by putting operators who strictly comply with EU requirements at a disadvantage, which could, in turn, risk distorting the internal market.
Even more importantly, through the above-mentioned legal instruments the EU has allowed the placing on the market of high-risk AI systems, which may pose significant risks to the health and safety of people in the EU, without addressing who will be responsible when such systems cause damage. It would be, therefore, very welcome – if not necessary – if the legal framework for driving automation were fleshed out with rules on criminal liability. Among other things, regulating criminal liability would contribute to safeguarding legal certainty for both the actors involved in the automated-vehicle industry, on the one hand, and the victims of accidents involving Automated Driving Systems, on the other.
Even if EU-level harmonisation of rules for driving-automation-related criminal liability is desirable, such harmonisation can only occur if specific conditions are met. First, the EU can resort to criminal measures if, and only to the extent that, the Member States have conferred competence on it. 79 Secondly, it can only exercise such competence if EU action is comparatively more efficient than national regulation. 80 In the following sections, we will examine briefly whether these conditions are fulfilled with respect to crimes associated with driving automation.
Legal Basis
According to the principle of conferral enshrined in Article 5(2) TEU, the EU may only adopt criminal law measures in relation to specific serious crimes with a cross-border dimension listed in Article 83(1) of the Treaty on the Functioning of the European Union (TFEU) and crimes affecting the implementation of EU policies pursuant to Article 83(2) TFEU. 81
Starting with Article 83(2) TFEU, the EU may establish minimum rules with regard to the definition of criminal offences and sanctions ‘if the approximation proves essential to ensure the effective implementation of a Union policy in an area which has been subject to harmonisation measures’ (emphasis added). Under Article 90 TFEU, the EU has competence to establish a common transport policy which consists, inter alia, in taking measures to improve transport safety with the objective of reducing fatalities, injuries, and material damage. 82 There can be little or no doubt that establishing minimum rules on offences connected to automated and fully automated vehicles (which belong to the broader category of road-traffic offences) can be perceived as a means to implement Union transport policy; 83 the question therefore rather boils down to (i) whether driving automation is an area which has been subject to harmonisation measures and (ii) whether criminal law measures are essential to achieve the effective implementation of Union transport policy. 84
As discussed in Section II, the EU has already established a regulatory framework for driving automation that sets out the minimum technical requirements for automated and fully automated vehicles (such as testing procedures, cybersecurity requirements, and data-recording rules, as well as safety-performance monitoring and incident reporting for manufacturers). Further harmonisation is expected if the Draft AI Act is adopted. Although one might argue that the area of ‘driving automation’ is not sufficiently harmonised for the purpose of Article 83(2) TFEU in view of the imminent adoption of the Draft AI Act, most EU criminal law scholars agree that the ‘harmonisation requirement’ laid down in Article 83(2) TFEU cannot be interpreted to require full harmonisation as a precondition for adopting criminal law measures and that the focus should be on its second requirement: that is, whether EU action is essential to ensure the effectiveness of an EU policy – in this case, that of transport safety. 85
According to the Commission’s Communication ‘Towards an EU Criminal Policy: Ensuring the effective implementation of EU policies through criminal law’, 86 when assessing the essentiality of criminalisation, factors that should be considered include, inter alia: (a) the seriousness and character of the breach of law; (b) the need to stress strong disapproval in order to ensure deterrence; and (c) the efficiency of the sanction system being enforced, as well as the extent to which, and the reasons why, existing sanctions do not achieve the desired level of enforcement. It is true that several instruments have been adopted in the field of transport safety, many of which provide for administrative sanctions. 87 Nevertheless, one cannot ignore that the EU has only recently embarked upon establishing rules for driving automation as well as creating a dedicated regime addressing AI-system liability. 88 On the one hand, the Vehicle General Safety Regulation as revised by Regulation 2019/2144 requires the Member States to have effective, proportionate, and dissuasive penalties for infringements by economic operators and technical services of obligations set out therein with respect to Automated Driving Systems. 89 On the other hand, with regard to civil liability, the Commission recently proposed revisions to the PLD to make sure it applies to driving automation. In contrast, the proposal on non-contractual civil liability rules on artificial intelligence seems to leave transport out of its scope. From this one could infer the Commission’s reluctance to address comprehensively the liability regime related to road-traffic offences. One could, therefore, argue that the EU should first implement administrative and/or civil liability measures in the context of Automated Driving Systems and, if these prove to be inefficient, then consider establishing rules on criminal liability.
That argument, however, ignores the gravity of the harm that can arise out of certain unlawful acts, for which an administrative sanction or the imposition of a civil liability regime cannot be a sufficiently strong response. 90 For such acts, the effective implementation of a Union policy would be jeopardised if the EU were to establish criminal liability only after administrative sanctions and/or civil liability were found to be an insufficient deterrent. For such cases, criminal law measures should be considered essential without the need to first prove that such ‘other measures’ are ineffective. The use of vehicles that do not comply with mandatory road safety rules poses a significant threat to public safety. According to the 2018 global status report on road safety of the World Health Organization, road-traffic injuries were the leading cause of death for children and young adults aged 5–29 years, and the eighth leading cause of death for all age groups. 91 Therefore, the dangerousness of cars – some already qualify cars as dangerous instruments, as weapons to commit crimes, or even weapons of terror 92 – and the increased number of road-traffic incidents could militate in favour of establishing criminal law measures even before testing the effectiveness of administrative sanctions. 93
Subsidiarity of EU Action
Assuming that there is EU-level competence to establish criminal liability for acts or omissions related to driving automation, the second question that must be addressed is whether regulating such criminal liability is compatible with the principle of subsidiarity. That principle holds that EU action is only justified when the objectives of the proposed action cannot be sufficiently achieved by the Member States but can rather, by reason of its scale or effects, be better achieved at Union level. 94 The subsidiarity principle safeguards the optimal division of competences between the EU and its Member States by preventing the adoption of criminal measures at EU level that excessively interfere with state sovereignty. 95
In the area of transport safety, the principle of subsidiarity has been of particular importance, as illustrated in Recommendation 2000/115/EC on the maximum level of alcohol consumption. 96 Although the Commission originally proposed a directive to establish EU-wide rules on maximum blood alcohol content (BAC) levels, 97 it abandoned that effort due to subsidiarity concerns expressed by some Member States, and instead made Recommendation 2000/115/EC. A further example that demonstrates the sensitivity in this area is the concept of road-traffic offences. Although the Commission expressed its desire to explore the need for criminal law measures in the field of transport safety in its 2011 Communication ‘Towards an EU Criminal Policy’, the EU has continued to shy away from harmonising liability rules for road-traffic offences attributed to a driver’s behaviour (e.g., speeding and drunk-driving). Admittedly, traffic rules (such as speed limits, other traffic restrictions, and acceptable BAC levels) reflect certain societal efforts to balance public safety and mobility interests, which may explain why the EU is reluctant to proceed with their harmonisation, much less the harmonisation of criminal sanctions in the event of their infringement. One could also argue that the lack of harmonisation in the field of transport safety may be due to the inextricable link between road-traffic offences – irrespective of the use of AI – and classic crimes against life and physical integrity, such as homicide and other forms of bodily injury, the regulation of which generally falls within the competence of Member States. 98
Nevertheless, particularly in relation to crimes connected to driving automation, there are convincing reasons militating in favour of the EU being better placed to step in to adopt criminal measures, such as the strong cross-border dimension of crimes involving automated and fully automated vehicles and the risk of distortion to the internal market in the absence of harmonised criminal law rules. 99 Compared to cars involving no driving automation, the potential distorting effect on the internal market is much stronger in the case of Automated Driving Systems for the reasons already elaborated in Section IV.1. Moreover, unlike the driver’s behaviour (such as respecting speed limits, other traffic restrictions, and acceptable BAC levels) – which is regulated at Member State level only – driving automation is already subject to EU harmonisation, 100 and thereby raises far fewer subsidiarity concerns. Finally, establishing criminal law rules on crimes connected to driving automation which can affect the fundamental interests of life and physical integrity is in line with the normative agenda the EU seems to have pursued in recent years in order to endorse EU values it deems of high importance (e.g., hate crime and hate speech, 101 gender-based violence 102 ). 103
What lies on the road ahead?
The legal challenges resulting from cars steering themselves must be considered by lawmakers when embracing driving automation, including the option of tailoring criminal liability to the new situation.
Whether common European minimum standards or independent domestic rules will govern criminal liability is not an easy question to answer, as this raises various legal concerns ranging from whether the EU has the competence to establish such rules to their compatibility with the national legal traditions of Member States and their criminal justice and mobility needs. The political decisions to be taken are nevertheless similar everywhere. Especially with regard to a socially acceptable risk and a definition of duties of care, it seems that society must answer the question whether technological development promising benefits should be rewarded with an exemption from criminal liability for some of the risks involved – and what duty of care is imposed on those who want to employ the new technology. With regard to driving automation and the use of AI overall, the EU seems to be on its way to sharing not only the regulation on type approval 104 but also the values that underpin the employment of AI in its area of freedom, security, and justice. 105
As is often the case, however, the devil might be in the detail: A one-size-fits-all approach, even when it is loosely knitted, might look too tight for some who want domestic lawmakers to provide concepts tailored to local traditions and needs. One should keep in mind not only that in the EU cars cross borders on a daily basis in huge numbers, but also that domestic legislators might find it difficult to solve all legal issues in a timely manner whenever new technology creates an accountability gap. Therefore, jurisdictional issues in particular, but also definitions of duties of care and possible exceptions from criminal liability, cannot be dealt with by domestic rules alone.
If no adequate regulatory framework is established, users of driving automation in the EU risk a human moral crumple zone as depicted above, 106 the danger that responsibility for an action may be misattributed to a human actor who had limited control over the behavior of such a vehicle. Madeleine Elish pointed out some years ago that, just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may become simply a component – accidentally or intentionally – that bears the brunt of the moral and legal responsibilities when the overall system malfunctions. 107
The challenge is to strike a fair balance between the interest in promoting innovation and the dangers associated with the use of AI systems. One factor to be considered in the balancing process is the social benefit in relation to a potential for harm. 108 Establishing such rules will not be an easy exercise for the EU, as so far ‘negligence’ is considered a grey area at the European level. The path is nevertheless set, with the EU regulating the technology used for driving automation.
The EU is well advised to be cautious when regulating liability arising out of negligent conduct or omission, as one might question the compatibility of such legislation with Article 67(1) TFEU which demands ‘respect for the legal systems and traditions of the Member States’. 109 In the long run, Europe needs not only technological innovation and vision, but corresponding legal innovation and vision if it wants to master the digital age. Up to now its regulation of the field has served it well.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
