Abstract
This article argues that through the EU's technology regulation, technological concepts permeate legal language. Such concepts may function as transplants, even irritants, causing tensions and uncertainties. As technology regulation is increasingly horizontal, i.e. obligating private and public actors alike, these newfound legal concepts remain disconnected from established public law vocabulary and the power constellations it represents and embeds. We approach this evolution of legal language from a public law perspective and concentrate on the concepts of ‘user’ and ‘deployer’ in the EU's upcoming Artificial Intelligence Act. We discuss these emerging legal concepts in relation to the rich theorizing on these concepts in human–computer interaction research. Our analysis demonstrates a discrepancy between legal and technology-oriented conceptualizations of the ‘user-deployer’. We draw three conclusions. First, the digital revolution is taking place in the conceptual-linguistic practices of law, and not only when translating law into code. Second, when external concepts are appropriated into law, they are uprooted from their established habitat, which may result in unpredictability in future legal interpretation. Third, in public law, adopting the ‘user-deployer’ may pose additional challenges, as it introduces a new agent into the relationship between public authority and private entities. Simultaneously, citizens seem to be mainly excluded from the legal conceptualizing, which risks blurring traditional power constellations.
Introduction: New concepts, new thinking?
European states are rapidly digitalizing public administration, guided by the EU's General Data Protection Regulation (the GDPR) and national legislation. Digitalization comes in many forms and sizes, ranging from the introduction of organizational guidelines and practices to new legislation. Recent years have witnessed an upsurge in the EU's horizontal technology regulation, which strives towards a functioning digital Single Market by regulating private and public actors alike. As horizontal technology regulation aims at capturing digitalization across societal sectors and fields of law, it develops new vocabulary and adopts new concepts to do so. However, this vocabulary lacks an established legal meaning.
This emerging language, which we call the legal language of automation, is the object of our attention in this article. We believe that this evolution of legal language, in which technological concepts are transformed into legal concepts, adds a dimension that deserves more attention in the debates on law, technology and society. These concepts may function as transplants, even irritants, in legal doctrine, causing tensions and uncertainties. Nevertheless, the adoption of such concepts reveals how digitalization also takes place in the conceptual-linguistic practices of law, and not only when translating law into code.
One of the key topics in prior research on law and technology is the legitimacy gap in technological design, as digital technologies are not subject to the same democratic control mechanisms as the use of public power. 1 Regardless of its seeming neutrality, digitalization is not just a matter of efficiency and best practices but is inevitably ideological. As argued in science and technology studies, technologies are ‘extensions of politics’ or ‘politics by other means’. 2 This is to say that it has been broadly acknowledged that neither law, technology nor the chosen vocabulary is neutral; all embody ideologies, values and rationalities that may be at odds with one another. 3 Simply put, language matters. As we hope to demonstrate through our analysis of the concepts of ‘user’ and ‘deployer’ in the EU's upcoming Artificial Intelligence Act (AIA), the choice of concepts is important – even more so when these concepts are introduced through horizontal regulation and thus have limited, if any, connection to established legal concepts in different legal fields. To draw out these frictions and to make our claim, we approach these conceptual changes by bringing together various strands of research. Our approach is socio-legal, and public law provides the context for our analysis. In addition, we hope to contribute more broadly to socio-legal debates on the digitalization of law and the boundaries of technology regulation.
By drawing from public law theory and research on law and technology, we discuss how digital technologies – and the challenges of their horizontal regulation – affect the existing public law vocabulary and its core assumptions, such as the asymmetric power relations between the state and citizens. In addition to socio-legal studies, we consider it necessary to discuss the concept of ‘user’ also in relation to human–computer interaction (HCI) research. The goal is to reveal the differences between technology-oriented and legal interpretations of the concept. As we hope to steer clear of epistemic trespassing, 4 we need to be explicit that as socio-legal scholars we claim no extensive expertise in the HCI field, nor do we presume to provide any comprehensive overview of its theories. Instead, we illustrate our argument about the emerging legal language of automation through a purposive selection of insights that have gained traction within HCI and that we consider useful for a legally oriented audience. As technological-become-legal concepts lack interpretative tradition and meaning within law, we believe it is impossible to critique them solely by legal argumentation.
On the one hand, the choice of concepts affects how law in general tackles digital technologies. On the other hand, however, it also risks rendering established legal concepts of public law redundant in this task. In other words, technological-become-legal concepts are not simply a matter of economizing language, for they shape the way we understand and experience law. For example, ‘a citizen’ can become ‘an end-user of an automated system’, and ‘a handling officer’ can be reimagined as ‘a human-in-the-loop’. ‘The assessment’ can become ‘a prediction’, and ‘the administrative procedure’ an ‘automated process’.
Some of these concepts are metaphorical. Metaphors make abstractions more understandable, and indeed, many aspects of AI and automation would be difficult to understand without them. As Marco Almada points out, metaphors and analogies shape the various discourses on AI, including AI policy and regulation:

Machine learning is a black box, AI systems are products, code is law, and so on and so forth. Each of these metaphors and analogies suggests that certain features of an object or process are relevant, at the same time that it downplays other aspects. For example, when we speak of code as law, we emphasize that code constrains human behavior, disregarding or at least minimizing the various ways in which the normativity of software diverges from legal normativity. 5
By naming this emerging legal language of automation, we make it visible and subject to critical inquiry. What ideas do these conceptual transplants or irritants embody? 8 Are they in harmony with the rationality of public law – the fundamental script of public administration – be it digital or analogue? So far, the introduction of technological concepts into legal language remains under-theorized in socio-legal research. Instead, the focus has been on translations of law into computer code, and the difficulties that follow from that. 9
Some technological concepts have been studied before, particularly in terms of the implications they bring to legal debate. They have ranged from the clearly metaphorical but still conceptually structuring image of the Internet as ‘the information superhighway’ 10 to the perhaps less visual aspects of multinational corporations as digital ‘platforms’ 11 – a concept that can downplay both accountability and the efforts of managing digital media. For example, in the area of intellectual property, the digital transition has proved to be conceptually transformative: the ‘copy’ in copyright has been shown to expand in both a legal and an extralegal sense (suddenly, it incorporated digital artifacts as well), and industry has used copyright to reinforce property-like claims of control. 12 This kind of conceptual transformation, we argue, is also taking place in – and even more importantly, shaping – digital public administration.
We depart from the idea that (public) law consists of language and changes through linguistic practices. Even though the relationship between public authority and citizens has always been subject to conceptual changes, 13 this process has accelerated in the wake of digitalization and the proliferation of the EU's horizontal technology regulation. These new concepts further affect the rationalities of (public) law by making it amenable to mechanistic interpretation. However, this process is not smooth. We hypothesize that once these concepts are established and interpreted as legal concepts, they sediment into the deeper structure of the legal order but may still cause friction and surprising interpretative problems. 14
Our argument is structured as follows. In section 2, we examine some basic tenets of public law as a linguistic practice and as a context, and how digitalization affects it. We also look at how the concept of user is conceived in HCI research. In section 3, we first present the AIA as a piece of legislation, and then analyse the concept of ‘user’ as it has emerged and changed during the legislative debate. Interestingly, the final stages of the trilogue negotiations between the Commission, the Council and the European Parliament in late 2023 introduced changes to the AIA's wording, substituting some mentions of ‘user’ with ‘deployer’ without changing the corresponding definitions. The final wording raises more questions than it solves, and results in conceptual ambiguity that further obscures the role and responsibilities of these ‘user-deployers’ of AI systems. We then examine this new entity of ‘user-become-deployer’ against the backdrop of the established language of public administration to demonstrate the disconnect between the legal language of automation the AI Act promotes and the established languages of public law and of HCI research on users. In section 4, we discuss the consequences and limitations of our findings more widely and look at how the AIA contributes to the emerging legal language of automation and shapes the digitalized borders of public law. We conclude the article in section 5 and argue that the concept of a user is a ‘legal irritant’.
Theoretical framework: Public law meets HCI
Intersections of language, public law and digitalization
Law has famously been called a ‘profession of words’. 15 It is easy to agree with this as legislation and other legal sources clearly consist of words. On the one hand, law is language in a profound way: many of the things that law refers to and regulates lack material existence, and exist only as conventions, abstractions or constructs, such as a fundamental right, a tax fraud or a building permit, to mention but a few. On the other hand, law is inherently material, as it is tied to texts, files and print media. Law traditionally resides in documents, although this media-boundedness is changing along with digitalization. Yet, it is safe to argue, law is still essentially linguistic and text-driven. 16
Law exists and creates things through language. This dual essence of law as both abstract and material renders it a peculiar creative force. To use Martti Koskenniemi's example, medicine and biology as fields of science do not cause the fact that a person has a runny nose or that grain is growing in a field. When law, however, talks about a ‘state’ or ‘owning’, ‘inheritance’ or ‘stocks’, it talks about the way in which law itself makes it possible that a certain group of people in a certain area counts as a ‘state’ or a certain thing becomes an object of ‘owning’. 17 Neither would there be a legally meaningful ‘administrative procedure’ nor a ‘public authority’ if law were silent about them. By naming such a procedure or such an authority, law creates them, with concomitant legal effects and implications.
Each legal field also has its own vocabulary that reflects its core rationality. In addition to law being a profession of words in general, public administration as a context brings important elements to the relationship between law and language. Unlike private law, which – as an ideal type – regulates the relations between peers, public or administrative law more specifically is about using power in an unequal way, typically with no prior conflict to be resolved. It is about vertical social ordering, if you will. Therefore, it needs special legitimation structures and accountability mechanisms to be considered legitimate. This is to say that the language of public law provides a vocabulary to mitigate asymmetric power constellations. In modern society, it does so by means that are secular, rational and positivist, unlike the medieval ‘fundamental law’, the predecessor of modern public law, under which the law governed the relations between the king and his officers. 18 Similarly, Max Weber talks about the effects of rational legal authority, which manifests as bureaucratic administration, characterized by professionalism, hierarchical structure and efficiency. 19 To some extent, these ideals are still valid today.
Although transnational and even global public law has been debated over the last few decades, 20 the main corollary of public law is the state. 21 Ideally, public law governs the population within a state for the state to execute its function in a rational and legitimate manner. Naturally, the specificities of public law differ from jurisdiction to jurisdiction. That said, some European fundamentals are, we argue, widely shared. In the continental legal tradition, for example, the asymmetric power relation between public authority and citizens derives its legitimacy from its legal definition, and less obviously from its moral value. Hence, an asymmetric power relationship is a legal relationship, which is necessarily a linguistically expressed relationship. Public power is justified only insofar as it is based on law and the law is followed in its exercise. This is widely known as the rule of law or the principle of legality.
Today, public administration is becoming increasingly digital, and administrative law must function in a new environment. This complicates the way in which the rule of law can operate. 22 To date, law does not properly acknowledge this. To quote Sheila Jasanoff, ‘The dominant discourses of economics, sociology and political science lack vocabularies to make sense of the untidy, uneven processes through which the production of science and technology becomes entangled with social norms and hierarchies’. 23 This applies also to law. In automated administration, the relationship between public authority and an individual is mediated, if not constituted, through digital interfaces and the information systems that enable them.
This is foreign to the traditional legitimation structure of public power. In consequence, that structure must accommodate another layer: the relationship between public authority and a private entity is now co-constituted by what is allowed in terms of law (the rule of law) and what is doable in terms of technology (the rule of code, if you will). These two may overlap, but they are hardly the same thing. 24 Thus, in this process of translating administrative law into technological solutions, 25 the structural power asymmetry between public authority and private entities becomes obscure – or untidy and uneven, as Jasanoff puts it – as the legal and technological horizons coalesce and entangle. In other words, the existing public law vocabulary does not sufficiently capture the effects of digital technologies. In effect, public law becomes reimagined through horizontal technology regulation, which is then applicable in public administration in addition to traditional public law rules and principles.
On the one hand, this legitimation question is a matter of democracy or the lack thereof. As Lawrence Lessig has famously put it, code can be considered a form of regulation itself (‘Code is law’). 26 Even if code is seen as a form of law, however, it lacks democratic accountability, as we mentioned earlier. 27 The entire idea of law creating legitimacy in the use of power stems from the assumption that law is put in place in a democratic way and can therefore be accepted. In this way, fair process legitimizes the outcome, namely the law. If, however, law is seen as code and not as an outcome of political contestation, democratic legitimation translates into professional or managerial practicality. Is managerial practicality enough? So far, this question remains unanswered, although only one possible answer can be imagined: code, too, needs to be democratically legitimated.
On the other hand, this legitimation question is a matter of language. As we argue in this article, technological concepts start to replace or complement legal terms. These shifts are not, however, innocent or haphazard, as they reflect administrative ideologies (such as New Public Management, New Governance or Digital Era Governance), 28 and shape the assumed characteristics and roles of both citizens and public authorities. Currently, as Kennedy argues, an ideology of algorithmic governance may be emerging. It marries contemporary governance ideals (characterized by coordination, collaboration and networking) with e-government (the use of ICT as a fundamental enabler) and e-governance (combining the role of the state and private sector actors and interests). 29 These ideologies signal how current and future administration is imagined through new technology regulation, and how the use of public authority as well as the role of citizens is construed in it. 30
However, automated administration is not only a matter of the most recent governance ideologies. As mentioned, the meta-narrative of modern public law is itself a secular, rational and positivist project; the new ideological shifts could be seen as footnotes to it. Seen broadly enough, this meta-narrative is in concord with the idea of automation and decreasing human involvement in law. From that perspective, public administration provides fruitful soil for adopting technological concepts and rationalities which emphasize efficiency, goal-orientation and following certain predetermined steps. As Appel and Coglianese argue, governmental use of technology, even of machine-learning algorithms, can [in the US] be readily accommodated by current administrative law doctrines. 31 Largely, technology and current public administration are ideologically analogous and parts of the same story of modernization.
Thus, public law may welcome the underlying rationality of automation. From a more practical perspective, however, the change in the terminology of legislation may be confusing. On a linguistic level, new technological concepts are transplants, newcomers in legal vocabulary. Therefore, it is not always clear what legal implications they entail, if any. What kind of baggage accompanies the adoption of these concepts into law? Does their history travel with them or does law invent them anew? As technology nonetheless needs regulation, automation necessitates a legal manifestation that consists of language and captures the phenomenon to be regulated as accurately as possible, without doctrinal disruption. Therefore, law must also absorb the rationality of automation and translate it into the language and rationality of law – unless, as presented, code is seen as law itself.
When technological concepts become legal concepts, their primary point of reference inevitably changes. Legal concepts – be they technological transplants or not – form part of the larger fabric of law. In that process, technological concepts adopt a dual identity as parts of both the system of technology and the system of law. As such, this is not unique; the language of automation is not the only ‘foreign’ rationality that law needs to deal with. For example, the language of medicine, religion, markets or agriculture must also adopt a legal manifestation when subject to legal regulation. The question is thus not only about the choice of vocabulary but about the ways of thinking that underpin that vocabulary. 32
We argue, nonetheless, that the language of automation and the concomitant technological concepts have special characteristics when it comes to law. Regardless of whether text-driven law can be made computable, this potential computability seems to be entering law through the backdoor, so to speak, by the adoption of a suitable vocabulary. In Norway and Denmark, for example, an entire debate on ‘automation friendly legislation’ exists. It calls for legal language in which ambiguity is diminished to make the law easy to automate. 33 Such a vocabulary would cut across all the rationalities entering law. In the process of digitalization, the language of automation is changing the identity of law as a linguistic practice. In other words, it seems that law is not only subject to its own terms but is also becoming subject to technological language and rationality.
The many faces of ‘user’ in human–computer interaction
Before proceeding to the legal understanding of the ‘user’ and ‘deployer’ in the AIA, we briefly describe how users are approached outside (public) law. As stated in the introduction, this description will not do full justice to the rich theorizing on the concept across various research fields such as HCI and science and technology studies, nor to the practices of user-centric design. 34 Instead, the value of understanding these non-legal meanings of ‘user’ lies in grounding our expectations towards the legal concept.
In popular parlance, ‘user’ does not evoke many associations. Its meaning is neutral (an Internet user), if not, in some contexts, negative (a drug user). Traditionally, ‘user’ is a term that has little to do with public law. It is not customary – though not impossible – to think that public authority would use law or some other thing to exercise its vested powers, or that a private entity would be a user of services. This traditional disconnection might be changing, though. For example, in the EU, ‘user friendliness’ has recently come to be considered an aspect of good administration. This implies that there is a person, a customer of administration, who uses digital public services, and that these services should therefore be friendly to them. Simply put, the ‘user’ is the ‘customer’ who uses digital tools to contact the public administration. 35
Like many other technological or software concepts, the concept of user did not appear ex nihilo, and it has a more nuanced meaning outside law. A key discipline here is human–computer interaction (HCI), an interdisciplinary research field at the intersection of computer science, design studies, psychology and the social sciences. At its core are concepts of interaction between humans and digital systems, and of user interfaces. The entire idea of a user interface is necessarily connected to the idea of a ‘user’. As we discuss in the following, this connection between user and interface is also present in the AIA's provisions on human oversight of high-risk AI systems. By highlighting the origins of technical-become-legal concepts, we hope to make explicit the historical baggage and content they may bring to – and which may not translate into – legal language.
Although the roots of HCI go back to human factors research in the 1960s and software psychology in the 1970s, since the 1980s the field has become particularly influential with its user-centred approach and ‘empirical orientation towards system and software design’. 36 Some HCI scholars observe that ‘the term of interaction is field-defining, yet surprisingly confused’. 37 How do users interact with computers? Much of HCI research deals with developing user interface technologies, the idea being to design them to be pleasant and effective for the interaction between human users and computer systems.
From the very inception of this scholarship, it has been broadly acknowledged that designers of technical objects define their users according to specific tastes, needs, competences, motives and so on, but also ‘inscribe’ their predictions and visions in a technical form. 38 This view has later been challenged as an oversimplification, as research has shown that users are not only passive receivers of technologies developed by designers. Instead, they actively shape and redesign technologies through use, non-use and workarounds. 39
As a scientific discipline, HCI research emerged along with the rise of personal computing in the late 1970s, which turned non-technical people into users of computer systems. This resulted in a growing acknowledgement of the systems’ poor usability and the practical need for HCI. 40 The emergence of the field coincided with the research programme of cognitive science, which brought influences, concepts and visions from social psychology and led to cognitive engineering models to support developers in applying these concepts. Early human factors research had also developed empirical and task-analytic techniques, particularly in aviation, which highlighted a task-oriented understanding of users’ needs for computer interaction. These influences led to a cognitive approach towards human–computer interaction, in which the user's tasks and plans were of central importance.
Lucy Suchman's book Plans and Situated Actions is a seminal work on the intersections of technical HCI and the social sciences. 41 In her book, she challenges the dominant cognitive approaches to AI and interface design, which understood the design of interactive systems as a process of formalizing user performance into sequences of tasks which, taken consecutively, would execute the user's plan. Suchman, in contrast, built on ethnomethodology. She argued that users are situated agents embedded in social settings: instead of following formalized steps, users continuously improvise and react to changing circumstances. 42 Suchman's critique contributed to the growing recognition of the importance of contextual field studies in finding out the needs and actions of users.
In HCI research, much attention has been paid to the questions of who the users are and how their opinions should influence the design of interactive systems. There are various orientations within HCI that focus on users’ needs, perceptions and experiences. For example, technically oriented usability studies aim to measure and analyse a system's efficiency, effectiveness and user satisfaction. The emphasis has been on supporting users in achieving their goals. The more recent orientation of user experience, however, is directed towards the emotions and experiences of users. 43
In their 1995 book chapter, Geoff Cooper and John Bowers discuss the development of disciplinary rhetoric in HCI regarding the concept of the user. 44 They demonstrate how HCI research has conceptualized the user over time: first as the operator in early texts on ergonomics, then as the rational processor of information drawn from cognitive psychology, and finally as the user in the political sense of representation, linked with the legitimacy of design.
In HCI, the user is pictured as fundamentally separate from the designer. Cooper and Bowers argue convincingly that the need to justify HCI as a distinct scientific field has given rise to the narrative of users as angry, frustrated and frightened, ‘as a fragile beast under threat from technology and a duty for HCI researchers to help rescue them’. In the same edited volume, technologist Philip E Agre gives an account of two HCI conceptualizations of the user: a technical one, supported by programmers, who focus on the formalization of human activity and maintain a distance from the users, and a managerial one, which is closely involved with the organizational settings of users. 45 In sum, the narrative of the user remains politically and ideologically loaded, contested, ambiguous and subject to change over time.
As mentioned earlier, users are closely connected to the concept of a user interface. In his book on embodied interaction, Paul Dourish describes user interfaces as material manifestations of interaction that relate to the embodied nature of human experience. 46 Drawing from the phenomenological tradition, Dourish examines the relationship between action and meaning. He argues that meaning created by the users of interactive systems translates into action. This meaning does not result from the actions of designers but from the users’ being-in-the-world and the social situatedness of interpersonal communication, which technology mediates. 47
The design implication of Dourish's argument is that the embodiment of interaction should serve as the organizing principle when designing interactive systems. 48 He challenges the traditional approach to interactive system design, which perceives the designer as the one managing the interaction between the user and the artifact, and calls for recognition of the active role users play in determining how well the designed systems meet their needs.
This brief presentation of HCI literature and the way in which it has conceptualized the ‘user’ makes it sufficiently clear that no single ‘user’ exists. Instead, as is characteristic of disciplinary debates, the concept has passed through various dominant paradigms and has been seen in many ways: the user has been depicted as rational, as irrational, as creative and improvising, as angry and fragile, as embodied materiality. All in all, it can be argued to reflect a changing image of natural persons using computer software with differing skills and degrees of dedication. In other words, in HCI, the emphasis has always been that the user is a human. As we show in the following, this core idea is broken when the concept of a user is adopted and transformed into ‘deployer’ in the AI Act.
The concept of a ‘user’ in the EU's AI Act
The development of AI regulation in the EU, and the effects of the horizontal approach
From electoral promise to product safety thinking
Before going into an analysis of the ‘user’ in the AIA, a short presentation of the AIA in general is in order. The regulation of AI has been one of the EU's political priorities. The President of the current Commission (2019–2024), Ursula von der Leyen, stated in her political guidelines that, ‘In my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence.’ 49 A hundred days proved to be a tall order, and the final compromise on the Act between the EU Commission, Council and Parliament was negotiated in the trilogues in December 2023. 50 At the time of finalizing this article in February 2024, the final text of the Act is still pending the Parliament's approval. As the AIA is a moving target for scholarship, we provide here only a summary of the main regulatory architecture and conceptual choices, which are not likely to be drastically changed in the last stages of the legislative process.
The AIA is a complex piece of legislation. Its main idea is to approach different AI systems as products whose safety must be ensured. Primarily, this is done by assessing the risks that may arise from the use of such systems. The Commission proposed new legislation for AI systems based on Article 114 of the Treaty on the Functioning of the European Union (TFEU), which grants legislative competence to harmonize rules in the Member States for the functioning of the European Single Market. The Commission's proposal for the AIA explicates the connection between product safety and AI systems. The so-called New Legislative Framework already regulates AI systems as safety components in products (e.g. machinery, toys, medical devices). AI systems serving as safety components must thus comply with these existing compliance and enforcement mechanisms, whereas the AIA establishes a similar model of ex-ante internal checks and ex-post certification for stand-alone high-risk AI systems. 51
Despite the AIA being a moving target, there are already several analyses of it, though they do not discuss the dynamics of legal and technological languages. For example, the proposal is argued to have an impact on EU and national labour law systems, and Cefaliello and Kullman therefore identify ways to refine the AIA insofar as it impacts work. 52 De Matos Pinot, on the other hand, discusses how the draft AIA presented by the Commission in April 2021 has strengthened the Parliament's long-standing call for a direct right of legislative initiative. 53 Others point to issues that seem incompletely regulated, raising doubts about the exact scope and content of the legal solutions outlined in the draft, and about possible normative conflicts with other regulations. 54 Veale and Zuiderveen Borgesius provide an overview of technologically oriented regulations (the Digital Services Act, the Data Governance Act, etc.) that are beyond the scope of our scrutiny but strengthen the main argument of our article relating to the need to better understand the interaction between legal and technological language. 55
In many respects, the AIA follows the logic of the GDPR. Like the GDPR, the AIA adopts a horizontal, risk-based approach rather than sector-specific rules or precaution towards new technologies. 56 This is understandable, as the GDPR has become the milestone for European data and technology regulation and automated decision-making. Why not use this milestone as a benchmark for new legislation? In addition to its implications for the European understanding of data protection as a fundamental right, the GDPR's impact extends even wider than the EU. 57 This is due to the ‘Brussels effect’ coined by Anu Bradford, a form of de facto regulatory globalization through market mechanisms in which companies in non-EU countries begin to comply with EU legislation to gain access to desirable internal markets. 58
Such compliance is not necessarily limited to EU-targeted products but may be adopted as a uniform standard. The GDPR is regarded as the poster child of the Brussels effect as, at times, such de facto compliance may also translate into de jure compliance, as was the case with the California Consumer Privacy Act, which was heavily influenced by the GDPR. 59 After the GDPR's impact beyond European borders, it is understandably attractive for the EU legislator to frame the EU as the gold standard of the global technology race and the AIA as the central means for ‘building trust in human-centric AI’. 60
The AIA categorizes AI systems based on the level of risk they pose to safety and fundamental rights. AI systems that pose an unacceptable risk are prohibited. High-risk systems, in turn, need to comply with internal control and third-party CE certification as well as other obligations set out by the Act. Prohibited AI practices include systems that deploy subliminal techniques beyond a person's consciousness, systems that exploit vulnerabilities of a specific group of persons due to age, disability or a specific social or economic situation, social scoring and certain forms of remote biometric identification in public spaces. AI systems that pose limited risk are subject to certain transparency obligations, whereas the Commission encourages voluntary codes of conduct for AI systems with minimal or no risk.
The obscurity of public authority in the AIA
Due to the horizontal approach, the AIA does not recognize public administration or use of public power as a sensitive context for AI adoption in general. Instead, it recognizes parts of public administration as high-risk areas, including access to and enjoyment of essential public services and benefits (Annex III). This choice adopted in the AIA can be argued to represent a larger trend, in which the traditional boundaries between public and private sphere are becoming blurred. 61 In consequence, public law could be understood not only as a set of binding rules, but also as a mindset that characterizes other unequal power relations, such as the relation between an individual and a multinational company. 62
That said, public administration has not traditionally been looked at through a risk-assessment lens. The rule of law, or the principle of legality, stems from the idea that all use of public power must have a basis in law and that law must also be followed in using it. The risks are always implied, and therefore they are mitigated through legal means: through legality and legal principles ex ante, and redress and accountability mechanisms ex post. First through the GDPR and now through the AIA, the identity of public law is dismantled into sector-specific risk areas. We can argue that something vital is lost in terms of context-sensitivity, as the AIA loses sight of the unequal power dynamic between the state and citizens and the established safeguards of administrative law. In the AIA, these safeguards are replaced by more software-oriented ones.
Nevertheless, the AIA has become a prominent example of the translations and negotiations over language that take place between law and technology. In the debate over AI regulation, much attention has been paid to the AI definition dilemma in which the clear application scope of the new regulation requires a clear-cut definition to provide legal predictability and certainty. However, the AI research community has been vocal about the impossibility of any such definition due to the field's diversity of approaches, continuous development and vague disciplinary boundaries. 63 Should we even talk about AI as the object of regulation, or would some other term be more suitable?
This is to say that the definition of AI has become politicized, but what is left invisible and implicit is the broader adoption of technological language within legal language. The focus on the admittedly important definition of AI has marginalized other adopted terminologies. The inclusion and institutionalization of these other concepts into legal language fundamentally shapes the logic of regulation and the ways in which the relationship between law, public authority and technology is constructed. As these concepts lack established legal interpretation, they also produce new interpretative flexibility – and hence potentially also contribute to law's open texture. In Hildebrandt's words, multi-interpretability is what ‘makes’ modern law. 64 It is, however, a matter of law how software concepts are interpreted as legal concepts in the future.
The concepts of user, deployer and human oversight in the AIA
The definition of AI is not the only tension between legal and technological rationalities that the regulatory debate around the AIA reflects. To draw attention to the less explicit discursive tensions in the AIA, our original analysis conducted in spring 2023 focused on the concept of ‘user’. Although in the following we focus only on ‘the user’, it is not the only technologically oriented concept in the AIA. Terms such as ‘interface’, ‘interaction’, ‘design’, ‘software’ and ‘hardware’ are also assumed to be known on the basis of ordinary language, as most of them are not included in the definitions section of the AIA. As mentioned, these concepts have not been challenged in the same way as the definition of AI but are implicitly bringing the rationalities of technological language into that of law.
We analysed the ‘user’ in the AIA draft negotiated during the Czech presidency in autumn 2022 (COREPER file 3 Nov 2022). We then followed the legislative debate and contrasted our earlier findings with an analysis of the latest draft agreement from early 2024 (COREPER file 21.1.2024). Throughout the legislative process, there have been only minor changes to the actual content of the emerging legal concept, although the final wording from January 2024 substituted ‘user’ with ‘deployer’ on many occasions.
In the following, we describe this emerging technical-become-legal concept as the ‘user-deployer’ to highlight the terminological shift and the various sources of conceptual ambiguity. We have examined the role of this ‘user-deployer’ in two ways: first, by looking at the terminology and definitions the Act provides, and second, by analysing the draft Article 14 on human oversight, which provides the central user obligation. Even though the term ‘user’ has in places been changed to ‘deployer’ during the final stages of the trilogue negotiations, the fundamental assumptions behind the overall approach have remained the same. The ‘user-deployer's’ obligation to provide human oversight for high-risk AI systems has been central to the EU's AI regulation since the Commission's proposal in 2021.
The ‘user’ seems to be a translational concept between law and technology, shaping the regulatory thinking behind the AIA. How does the AIA construe ‘users’ and ‘deployers’, their expectations and responsibilities, rights and needs? Why was the concept of ‘user’ replaced with ‘deployer’, a change of terminology important enough to be made but of little consequence to the concept's definition? Did it clarify the relationship between deployers and the actual end-users of high-risk AI systems, the citizens?
In the 2022 compromise text, ‘user’ was defined in Article 3 as ‘any natural or legal person, including a public authority, agency or other body, under whose authority the [AI] system is used’ (Art 3(4)), and an ‘operator’ may also be a user (Article 3(8)). Yet, according to Article 2 on scope, ‘the regulation does not apply to obligations of users who are natural persons using AI systems in the course of a purely personal non-professional activity’. Thus, by limiting the scope, the AIA oriented the concept of ‘user’, and consequently that of ‘deployer’, towards a legal person or other organization that exerts control over the system. A divide emerges between the users of an AI system and those affected by it, who may be in vulnerable positions in relation to the users.
The final wording of January 2024 follows the earlier definition closely, as the ‘deployer’ is defined as ‘any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity’. Alongside deployers, users do make an appearance in the final version of the AIA. For example, Article 52(1), titled ‘transparency obligations for providers and users of certain AI systems and GPAI models’, imposes responsibilities on providers to ‘ensure that AI systems intended to directly interact with natural persons are designed and developed in such a way that the concerned natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use…’
The AIA constructs the ‘user-deployer’ as the one responsible for the AI system by creating obligations. The most central of these obligations is to ensure human oversight. Human oversight, provided for in Article 14, is one of the mandatory requirements for AI systems placed on the market or used in high-risk sectors, which include, e.g. law enforcement, essential public and private services, migration and the administration of justice (Annex III). Deployers of high-risk AI systems are obligated to assign human oversight to natural persons who have the necessary competence, training and authority (Article 29(1a)). These deployers will also receive ‘instructions for use’ from providers of high-risk AI systems, defined as ‘the information provided by the provider to inform the user of in particular an AI system's intended purpose and proper use’ in Article 3(15) [emphasis added]. A similar provision exists for providers of general-purpose AI models with regard to ‘downstream providers that integrate the model into their AI system’, who should receive accompanying ‘instructions of use’ per Annex IXb.
In our reading, human oversight is the main obligation that significantly creates expectations and assumptions about what the user is supposed to be able to do. This is to say that the concepts of ‘user’, ‘deployer’ and ‘human oversight’ are closely connected, but they should not be regarded as synonymous. Some users, such as civil servants in high-risk areas of public administration, may be charged with the responsibility of human oversight, whereas other users, such as citizens, are not. Therefore, the ‘user’ and ‘human oversight’ overlap only partially.
Human oversight nevertheless reveals much about how the ‘user-deployer’ is construed in the AIA. The concept is geared towards legal persons: organizations that deploy and use an AI system. The role of human overseer, in turn, is reserved for natural persons. 65 This means that a ‘user-deployer’ itself is quite underdeveloped in its own capacities. It needs another agent to reach its full potential: that of a human overseer. Human oversight, provided for in Article 14, is also the section in which the concept of ‘interface’ emerges as the prerequisite for effective control: ‘high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use’ [emphasis added].
Interestingly, the measures given in Article 14 refer to the interface tools necessary for enabling human oversight by a natural person. These measures, which refer to human–machine interface tools, can be identified and built in by the provider before the system is placed on the market and/or implemented by the user. They demonstrate the capabilities attributed to the human overseer. According to Article 14(4), the human overseer is expected:
to properly understand the capacities and limitations of a high-risk AI system and be able to duly monitor its operation, also in view of detecting and addressing anomalies, dysfunctions and unexpected performance;

to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;

to correctly interpret the high-risk system's output considering, for example, the interpretation tools and methods available;

to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;

to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or similar procedure that allows the system to come to a halt in a safe state.
Therefore, the AIA puts forward an interpretation of a ‘deployer’, which originated from the concept of ‘user’ and refers primarily to organizations obligated to ensure ‘human oversight’ performed by natural persons through interface tools.
To conclude, the AIA reserves human oversight for natural persons who are expected to have capabilities to interpret the AI system, its capacities, limitations and outputs, and intervene or override the output. In a way, although users are connected to natural persons, they are not precisely the same thing, either. To ensure human oversight, the system's interface needs to be built to enable these tasks. However, these tasks are not user capabilities as such, but are instead capabilities of human oversight.
Powerless users, superhuman overseers?
As we have discussed above, the questions of user capabilities, user experiences and empowerment have been fundamental for HCI research, a field that strives to develop interfaces that are pleasant and effective to use. At the same time, this research has given its flavour to the historical and scholarly baggage associated with the concepts of users and interfaces. Put simply, although subject to scholarly debate, the prevailing understanding of a ‘user’ in HCI is that of a human operator of a computer system, or of a situated and embodied agent who improvises around the limitations of technologies.
One is left wondering to what extent the emerging legal concepts of the AIA, such as the ‘user’, ‘deployer’ or the ‘interface’, are isolated from the scholarly debates and the rich theorizing in HCI and STS research. Through detachment and disconnection from the technologically oriented meanings of ‘user’ described in section 2, the legally oriented meanings of ‘user-become-deployer’ are uprooted from their scholarly habitat and translated into semantically empty legal constructs. Instead of acknowledging the role users play in shaping technology, discussed above, the legal ‘user’ of the AIA is limited to deploying and using an AI system and fulfilling the ensuing obligations and responsibilities. Against this observation, the partial substitution of ‘user’ with ‘deployer’ makes sense.
This conceptual appropriation, if you will, mirrors the creative powers of law. Law makes its own object, even if it means using a concept that has a more established meaning elsewhere. It resembles Humpty Dumpty's attitude in Lewis Carroll's Through the Looking Glass: ‘“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean – neither more nor less.”’ The legally appropriated concepts cannot, however, stray too far in meaning from their conventional or scholarly use. For example, even if ‘good administration’ has an established meaning in law, it is nonetheless open to other interpretations of goodness. 66 Similarly, the idea of ‘seeing through’ necessarily travels with the concept of transparency into law. 67 Therefore, the legal concept of a ‘user’ cannot be too far from the conventional meaning of using. Otherwise, the popular understandability of law is arguably in jeopardy.
In part, the adoption of the concept of a user in the AIA may be a matter of legal imitation. As De Hert argues, ‘Read under the GDPR lens, the Parliament's recommendation [on the AI Act] essentially replicates the GDPR scheme (and even closely follows its structure). A new set of actors is introduced (“user”, “developer”, “deployer”), resembling GDPR's data subject, controller, and processor, respectively in Article 4.’ 68 The origins of the ‘deployer’ in the AIA might thus be a matter of finding a functional replacement for the GDPR concepts, regardless of their scholarly meaning, and ‘user’ and ‘deployer’, along with other similar concepts, work well enough for that purpose. As explained earlier, this is due to the GDPR's role as a benchmark for future technology regulation in the EU.
Given the powers assigned to the human overseer, perhaps it, rather than the ‘user-deployer’, would be the correct point of comparison to the user of the HCI literature? This, however, does not seem to be the case. Although the AIA's human overseer seems to be the natural person operating the AI system, the concept does not entail the many connotations of HCI's user. This is to say that even the human overseer, as an auxiliary component of the user, is not endowed with the humanness so rightly emphasized in the HCI literature. Instead, the human overseer is credited with almost superhuman powers, as explained above.
Prior scholarship has been sceptical about ‘the human pretensions of control over technological systems’. 69 The EU's chosen emphasis on human oversight as a governing principle in technology regulation may prove to be problematic. Instead of conceptualizing digitalization in terms of situated interaction between various human actors and technological systems, it ends up reinforcing and reproducing the dichotomy between humans and machines. Furthermore, it mystifies and exaggerates human capabilities of meaningful control while further obscuring the other duties and roles humans have in public organizations. 70
From the viewpoint of law as language, however, the adoption of the term ‘user’ is understandable: law creates its own system of meaning – EU law with additional characteristics – even if it is a distorting mirror image of the richer meaning established outside law. At the expense of creating parallel systems of meaning (e.g. ‘user’ in the AIA, ‘user’ in HCI; ‘interface’ in the AIA, ‘interface’ in HCI; ‘human oversight’ in the AIA, ‘human oversight’ in HCI), law adopts foreign terminology only on its own terms to stay relevant. At the same time, legislators should be cautious of assigning to the human overseer such powers and responsibilities as scholarship has rendered fallacious.
HCI research, furthermore, establishes users’ engagement with interactive systems as always being embedded in social contexts. This suggests that the loss of context in the AIA's horizontal approach may turn out to be problematic, not only from the perspective of power dynamics and safeguards in public administration, but also from the perspective of user interface design. At the same time, the AIA institutionalizes the human–computer interface as the site in which the user obligation of enabling human oversight by natural persons takes place. The needs, expectations and experiences of human overseers remain, however, marginalized. Their role is defined through the expectation of exerting control over an AI system, yet they are removed from its operations.
What do these conceptualizations and HCI rhetoric tell us in relation to the AIA's concepts of a user, a deployer and a human overseer? Surprisingly little. The AIA's user or deployer is not a user in the sense of HCI research, but neither is the human overseer. These newly established legal concepts do not reflect the nuanced relationship between the technical and the social but instead seem to create a fundamentally different meaning. The legal conceptualization of a ‘user-deployer’ is distinct from any other. As such, it is prone to additional challenges when designers are expected to comply with the associated legal obligations.
Discussion: The new borders of public law?
Above, we hope to have demonstrated a tension between horizontal technology regulation – which by default omits context-specificity and ignores the traditional private/public divide of legal scholarship and practice – and public law theory, which builds on the sensitivity to the asymmetric power relations between public authorities and citizens. The EU's horizontal technology regulation in general and the AIA in particular creates and reinforces the emerging legal language of automation. It is noteworthy that this language is not specific to any established legal field: there is no separate private law or public law language of automation but instead only generalist legal language. This is why we have considered it important to examine how this generalist language relates to public law.
Our analysis shows that, as a concept, the ‘user-deployer’ carries connotations that downplay active agency. In the context of public administration, the ‘deployer’ as a legal person obscures the power imbalances between public authority and citizens. These ‘deployers’ are juxtaposed with those persons affected by the system, who are in vulnerable positions. Yet it seems that both ‘user-deployers’ and the affected persons are somehow subject to the power of AI systems and their providers. This stands in contrast to the optimistic, if not unrealistic, powers and capabilities assigned to human oversight.
The European Commission considers potential harms to such affected persons in its decisions to amend the list of high-risk AI systems. Although the AIA thus recognizes that such power imbalances may exist, this stance is far from the basic tenets of administrative law, which recognizes this power dynamic as structural and constitutive. When ‘deployer’ is opted for instead of ‘public authority’, and ‘affected persons’ instead of ‘citizens’, much historical backdrop, legal status and interpretative doctrine evaporates. As Ida Koivisto has argued, public law seems to be ‘out’ as bureaucratic structures, but ‘in’ as post-ideological vocabulary on procedural justice. 71 Public law is gaining ground as a mindset in transnational regulatory contexts but losing significance as a constitutional structure.
However, public law principles (such as transparency and reason-giving) are now partially complemented, and sometimes even replaced, by technological principles, such as the human oversight presented and analysed above, or the proposed principles of accuracy, robustness and cybersecurity (Article 15 of the AIA). Although modern administration might be in harmony with digitalization at an ideological level, it still relies on the idea of a human agent – so self-evidently that its meaning has remained implicit. In other words, there has been no need to make the human agent explicit in public law doctrines, as no viable alternative has existed. Now, however, as we have shown, the question of humanity has obliquely entered public law through the proposed AIA's concept of the 'user-deployer' and its auxiliary human agent, the overseer.
The limits of the established vocabulary become visible through human oversight. Human oversight becomes the main obligation of the 'user-deployer', that is, the public authority; the human overseer is the civil servant. In effect, the other side of the public law relationship, the citizen, is in jeopardy of being excluded. If citizens are not users in the sense of the AIA, this affects the design and deployment of AI systems in public administration. If the law recognizes only certain users, such as those performing human oversight, this will likely further shape design choices: for whom human–computer interface tools are designed, whose user experience counts and from whose perspective usability is defined.
It is important to note, however, that the new concepts in the AIA – and perhaps the legal language of automation more generally – do not replace more established terms without any residue of meaning. Instead, through their introduction, the entire power constellation is thought anew. A public administration can be a user, but it does not have to be. A private company can be a user, but it does not have to be. A human overseer can be a civil servant, but does not have to be. The user is not committed to any of these traditional divisions but is functional by nature.
The AIA sets a new grid of meaning, and concomitant legal effects, on traditional borderlines within law and legal thinking. The 'publicness' of power is not decisive; the de facto asymmetry of it is. 72 Thus, both the horizontal approach of the AIA and the vocabulary adopted in it further blur the borderline between private and public law, and between the users and the objects of power. As presented, the GDPR has paved the way for this kind of legislative structure. This is largely a matter of the EU's legislative powers and how they are used to create a functional (digital) single market. However, these legislative structures cannot but affect how power is used in a digitalized society.
Although the legal language of automation is legal language by definition, it is not a superficial shift in how we talk about things. On the contrary, it reflects, solidifies and reproduces the ways in which we imagine the role of technology, and AI more specifically, as an object of regulation. Even if law accepts this new terminology only on its own terms, as has been argued, the new language affects those very terms by looking at the world through new lenses, e.g. through the concepts of a 'user', 'human oversight' or 'high risk'. Now that the gate is open, so to speak, law is susceptible to adopting further technical terms, as the legal and political debate focuses on the dilemma of defining AI. Law is thus coming closer to technology, not only structurally (law is code) but also conceptually (legal concepts are also technological concepts).
Conclusions: The ‘user-deployer’ as irritant
Throughout this article, we have argued that technological concepts are infiltrating law, as we have demonstrated through the vocabulary adopted in the AIA. We have described this as the emergence of the legal language of automation. We draw three interconnected conclusions.
First, the digital revolution of law takes place also in the conceptual-linguistic practices of law, not only when law is translated into code. Thus far, the triad of law, language and technology has been studied from several perspectives, e.g. the differences between legal and technological normativity; the challenges of computational modelling of law; the implementation of legal values by design; and the temptation to draft laws in binary ways to enable future automation. We have argued, however, that there is even more to this relationship: the introduction of technological concepts that originate from software development or HCI research normalizes them in law, yet detaches them from their contexts and earlier meanings. The appropriation of these concepts demonstrates that digitalization changes law not only structurally but also as a linguistic practice.
Second, law (the AIA) has emptied the concept of the 'user-deployer' of its scholarly (HCI) meanings. This supports our hypothesis that when law appropriates technological concepts and translates them into legal concepts, it comes at a price. Through this appropriation, the concept of the user has been made available for legal interpretation. On the one hand, this is how law operates: by adopting concepts at face value from other fields and giving them a new meaning. On the other hand, this also makes these concepts unpredictable: if they are not supported by research (e.g. a human overseer assumed to possess superhuman powers), it is difficult to foresee where interpretative support could be found, as they have no history as legal concepts and no connections to other legal concepts. Will legal doctrine push back and reject these concepts and their pluralities of meaning?
Third, in public law, the concept of the 'user-deployer' restructures and potentially obscures the relationship between the public authority and the private entity. As we have argued, the core of public law is the use of unilateral power in a way that can be democratically legitimized (the rule of law) and easily comprehended. The introduction of EU horizontal rules and concomitant agents makes this more complicated. The question is not only about replacing an old concept with a new one (e.g. a customer of the administration replacing a subject of the administration), leaving the structure of the relationship intact, but about introducing an entirely new agent into the mix. The 'user-deployer' has no predecessor in structuring the relationship between the public authority and a private entity, yet it complicates the understanding of who is using power over whom and how. That is to say, the concept of a user obscures the composition of this relationship by adding new duties and new accountability mechanisms, and blurs the independent role of the civil servant.
Footnotes
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Svenska Litteratursällskapet i Finland and by the Research Council of Finland, Scientific Council for Social Sciences and Humanities (grant numbers 999650, 341434).
