Abstract
This paper examines the potential promises and limitations of the human rights framework in the age of AI. It addresses the question: what, if anything, makes human rights well suited to face the challenges arising from new and emerging technologies like AI? It argues that the historical evolution of human rights as a series of legal norms and concrete practices has made it well placed to address AI-related challenges. The human rights framework should be understood comprehensively as a combination of legal remedies, moral justification, and political analysis that inform one another. Over time, the framework has evolved in ways that accommodate the balancing of contending rights claims, using multiple ex ante and ex post facto mechanisms, involving government and/or business actors, and in situations of diffuse responsibility that may or may not result from malicious intent. However, the widespread adoption of AI technologies pushes the moral, sociological, and political boundaries of the human rights framework in other ways. AI reproduces long-term, structural problems going beyond issue-by-issue regulation, is embedded within economic structures that produce cumulative negative effects, and introduces additional challenges that require a discussion about the relationship between human rights and science and technology. Some of the reasons why AI produces problematic outcomes are deeply rooted in technical intricacies that human rights practitioners should be more willing than before to get involved in.
The latest wave of artificial intelligence (AI) applications 1 has brought with it concerns around potential and actual risks in the form of discrimination, amplification of bias, privacy violations, manipulation of end users, disinformation, and unaccountable decision-making. As the sheer number of AI ethics guidelines, AI ethics think-tanks, in-house corporate ethics boards, and legislative proposals since the mid-2010s makes clear, regulating AI to limit or eliminate its negative impact is considered among the pressing issues of our times. What is more, the data collection and data processing methods subsumed under the term AI threaten to exacerbate other serious social, political, economic, and legal problems—content moderation on social media, environmental degradation, and structural racism and sexism, to name a few.
It is no wonder that human rights is invoked as a legal, moral, and political framework to address AI's negative impact, compensate victims, and hold offenders accountable. This paper examines the potential promises and limitations of the human rights framework in the age of AI. It addresses the question: what, if anything, makes human rights well suited to face the challenges arising from new and emerging technologies like AI? It argues that the historical evolution of human rights as a series of legal norms and concrete practices has made it well placed to address AI-related challenges. The human rights framework's potential contributions do not merely result from the enforcement of international treaties and domestic law as manifested in court decisions; rather, the framework should be understood comprehensively as a combination of legal remedies, moral justification, and political analysis that inform one another. Over the decades, the framework has evolved in ways that accommodate the balancing of contending rights claims, using multiple ex ante and ex post facto mechanisms, involving government and/or business actors, and in situations of diffuse responsibility that may or may not result from malicious intent. This is why it is well adapted to address the challenges arising from AI.
However, the widespread adoption of AI technologies pushes the moral, sociological, and political boundaries of the human rights framework in other ways. To remain relevant, it should tackle structural socioeconomic problems at the heart of the power dynamics that produce and reproduce AI-related challenges, including growing inequality between and within countries, and the transformation of the economic context and business practice as a result of the increasingly data-intensive nature of economic decision-making. Complicating those challenges further, some of the reasons why AI produces problematic outcomes are deeply rooted in technical intricacies that human rights scholars and practitioners are least likely to get involved in. In other words, proponents of the human rights framework should be willing to acknowledge its limitations, interact with other ways of thinking about the relationship between science and technology and human well-being, including more radical proposals for social justice, and learn about and intervene in the internal workings of the technology.
The paper assesses two streams of academic literature: works that address the nexus between AI and human rights explicitly (the “human rights and AI” literature) and works that raise rights-related concerns around ethical and responsible AI (the “responsible AI” literature). Philosophy, professional ethics, social sciences and humanities, and journalism feature prominently in this literature, as well as a few practitioner perspectives. Some of the cited works defend the use of human rights law and discourse in the regulation of AI; these works are referred to as the proponents of the human rights framework. Ultimately, the review of the literature identifies evidence for the usefulness of human rights for AI regulation and evidence for its potential shortcomings for an audience interested in the nexus of human rights and responsible technology.
Two caveats are in order. First, the discussion in this paper is necessarily generic. A technology with broad military, policing, retail, healthcare, and social media applications cannot be subsumed under uniform problem definitions or proposed solutions; therefore, the human rights framework is presented as a broad set of norms and rules that guide concrete proposals. Second, the paper acknowledges that there is an emerging literature on how AI can enhance the enjoyment of human rights (Lee 2020), as well as a somewhat more futuristic literature on the moral and legal debates around the prospects for granting rights to AI systems (Dworkin 2019; Risse 2019), or the applicability of the human rights norm for AI-to-AI relations (Ashrafian 2015), but focuses exclusively on the potential and actual negative impact of AI on humans for purposes of clarity and space.
The paper is organized as follows: the first section describes what human rights and responsible AI communities understand as the negative impact of AI. The second section defines the human rights framework and lays out some of its potential concrete contributions to AI regulation in light of scholarly literature on the subject. The third section describes the latest developments in domestic and international law to argue that legal remedies are still largely absent. The fourth section offers a brief overview of the legal, moral, and political developments that have jointly shaped the historical evolution of the human rights framework. Building on the lessons drawn from this historical evolution, the following two sections discuss the promises and challenges of using the human rights framework to regulate AI, respectively. The paper concludes with some suggestions for the interaction of human rights and responsible technology in the future.
The negative impact of AI: bias, lack of accountability, harm, disinformation, manipulation, and privacy concerns
Some of today's AI-powered technologies produce or amplify bias and discrimination against some of the most vulnerable members of the population, 2 like the poor, persons with disabilities, ethnic and racialized minorities, women, persons of diverse sex, sexual orientation and gender identity, children, the elderly, migrants, and people at the intersection of these identities (Buolamwini and Gebru 2018; Eubanks 2018; Molnar 2019; Noble 2018; O'Neil 2016; Zuiderveen Borgesius 2020). These outcomes reflect multiple underlying problems, including nonrepresentative data, data that reproduce existing societal bias, and algorithmic amplification of bias (Birhane 2021; Crawford et al. 2019). Life-changing algorithmic output on recidivism, mortgage applications, and educational achievement may generate risks for accountability and due-process rights, as the data analysis process is often not transparent or explainable (Angwin and Larson 2016; Creel 2020; Donahoe and Metzger 2019; Eubanks 2018; Fry 2018; Martinez and Kirchner 2021). Furthermore, algorithms trained and tested on large quantities of data to extract maximum value from relevant information (Kerr et al. 2020) may threaten the right to privacy (Calders et al. 2013; Manheim and Kaplan 2019; van den Hoven van Genderen 2017), especially when a company's business model rests on data collection and processing (Penney et al. 2018).
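To make one of these underlying problems, nonrepresentative data, concrete, the following minimal sketch shows how a single decision threshold tuned on data dominated by one group can systematically deny qualified members of an underrepresented group. All numbers, group labels, and the decision rule are hypothetical and deliberately naive; nothing here models a real system.

```python
import random

random.seed(0)

# Hypothetical applicant "scores"; group B is underrepresented in the
# training data, and its score distribution is shifted relative to group A.
group_a = [random.gauss(70, 10) for _ in range(950)]  # 95% of training data
group_b = [random.gauss(60, 10) for _ in range(50)]   # 5% of training data

# A deliberately naive "model": approve anyone above the overall mean,
# a threshold dominated by the majority group's data.
threshold = sum(group_a + group_b) / (len(group_a) + len(group_b))

def denial_rate(scores, qualified_cutoff, threshold):
    """Share of a group's qualified applicants (by its own cutoff) denied."""
    qualified = [s for s in scores if s >= qualified_cutoff]
    return sum(1 for s in qualified if s < threshold) / len(qualified)

# Qualified members of group A are almost never denied; qualified members
# of group B are denied far more often, purely as an artifact of tuning
# one threshold on skewed data.
print(f"qualified group A denied: {denial_rate(group_a, 70, threshold):.0%}")
print(f"qualified group B denied: {denial_rate(group_b, 60, threshold):.0%}")
```

Note that no malicious intent appears anywhere in this toy pipeline; the disparity is a property of the skewed data and the single threshold alone, which is why disaggregated evaluation of error rates by group, exemplified by Buolamwini and Gebru (2018), features prominently in this literature.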
Problems arising from recommender systems used by online search engines and social media, micro-blogging, and video-blogging companies have put civil–political rights, in particular the freedom of expression, at the center of controversies around AI. On the one hand, online platforms’ provision of (almost) unbridled access and connectivity has received much praise for promoting free expression and assembly (Castells 2009; Poell and Rajagopalan 2015; Tufekci and Wilson 2012). On the other hand, the amplification of disinformation through algorithmic reinforcement raises concerns (Kertysova 2018). Cambridge Analytica's collection of Facebook users’ data to influence their voting behavior right before the 2016 United States presidential election is an infamous example (Conyon 2022). Furthermore, online and offline mobilization fueled by disinformation and conspiracy theories threatens to erode public debate and interpersonal trust in democratic societies (Gregory 2019; Landon-Murray et al. 2019).
Ultimately, some AI applications may result in physical, even deadly, harm. For example, it is feared that lethal autonomous weapons systems, if developed and allowed to operate, will result in large-scale casualties, including among unarmed civilians, despite assurances of precision. Thus, concerned scholars and intellectuals have called for caution and reflectiveness (Russell 2015), a suspension of their production for the time being (Etzioni and Etzioni 2017), or full prohibition (Sauer 2016).
To sum up, today's AI-powered technologies may generate bias, infringe upon privacy, threaten civil–political rights, and put lives and livelihoods at risk. Unregulated AI is feared to pose even greater risks in the future. Consequently, efforts to eliminate or limit the negative impact of AI through technical fixes, awareness campaigns, and regulation have abounded in the 2010s. Using the human rights framework to achieve these goals is a source of excitement and skepticism, as the following sections bring into focus.
The promise of the human rights framework: the proponents’ perspectives and beyond
I define the human rights framework as the ensemble of domestic and international laws, administrative rules set by specialized consultation, enforcement, and monitoring bodies, and policies tasked with upholding fundamental rights. Typically, the framework operates through the enforcement of relevant domestic laws, ratified treaties, and administrative rules by each country's political and judicial authorities. However, the persistence of human rights violations around the world has highlighted the role of international organizations (e.g., the United Nations) and domestic and international human rights associations (e.g., Amnesty International) in monitoring and protesting the situation of human rights, and of international or regional courts (e.g., the International Criminal Court or the European Court of Human Rights) in sanctioning abuses.
The proponents of human rights as a regulatory framework to address the negative impact of AI praise its global outreach and capacity to generate broad-based and overlapping consensus (Gabriel 2020; Latonero 2018; Pizzi et al. 2020; Risse 2019) and to guide the practice of AI ethics (Sartor 2020) and AI-adjacent considerations like data privacy and protection (Beduschi 2022). In addition, the codification of human rights principles in domestic and international law is said to provide legal certainty in the context of inherently unpredictable technological change (Smuha 2021, p. 97). The appeal of human rights has gained traction in great part because self-regulation in the form of companies’ own ethical guidelines is judged to have failed to rein in the potential and actual harms (Metcalf 2019; Pizzi et al. 2020; Saslow and Lorenz 2019). Even though human rights cases usually make the headlines in the form of prosecution to ensure ex post facto accountability, the framework has evolved over time to incorporate preemptive defense of rights, which suits AI ethicists’ emphasis on safeguarding desirable goals by design and through stakeholder engagement (Felzmann et al. 2020; Penney et al. 2018).
In light of the proponents’ perspectives on the human rights framework, I identify three areas of potential contribution to rights protection: it can (a) provide the normative grounds of future technology legislation, policy, and business practice, (b) inspire novel definitions of rights, and (c) facilitate mobilization by social movements.
First, future legislation, policy, and business practice can be guided by human rights norms. The link between human rights and legal, political, and business practice is in part aspirational. Recommendations for regulating AI through human rights mechanisms today include the ratification and enforcement of international treaties that guarantee the human right to science (Oberleitner 2021). At the level of business practice, it has been argued that impact assessments with a socioethical dimension should be grounded in human rights (Mantelero 2022). In the United States, where AI-specific legislation is stalled, the White House Office of Science and Technology Policy kicked off its 2021 public request for information on principles governing AI with the headline: "Americans Need a Bill of Rights for an AI-Powered World" (Lander and Nelson 2021). To give more concrete examples of the link, the European Union's (EU) proposed AI Act promises to protect the fundamental rights as defined in the EU Charter of Fundamental Rights. 3 The companion document to Canada's proposed Artificial Intelligence and Data Act mentions the federal Human Rights Act and provincial human rights laws as providing a robust foundation while arguing that the protection of human rights requires a law that addresses high-impact AI systems. 4 Thus, human rights serve not only as aspirational goals but also as concrete guidelines for laws and policies.
Second, the framework can accommodate novel rights, and through them novel protections, for affected individuals and communities. For example, one formulation of novel “algorithmic rights” includes “the right to algorithmic transparency and accountability; the right to be informed if we are interacting with an algorithmic system, and to have an explanation as to how that interaction works; the right not to be subject to a decision based solely on automated processing, including profiling; and the right not to depend on an algorithmic system for compliance with our fundamental rights” (Laukyte 2022, p. 2). In fact, the EU General Data Protection Regulation (GDPR) recognizes a host of data subject rights, including the right to be informed (Kaminski 2019). The “right to data protection” may be conceptualized as a new right emerging from statutes like the GDPR (for a critical overview of this right, see McDermott 2017).
Third, the human rights framework can also facilitate organizing by social movements focusing on responsible and/or ethical technology. Global human rights organizations, like Amnesty International and Human Rights Watch, and local human rights associations around the world have been developing organizational capacities, building alliances, and enacting mobilization strategies for decades to address a range of rights-related concerns (Finnemore and Sikkink 1998; Simmons 2009). Some of them have already been working in technology-related fields, 5 but the responsible or ethical technology nonprofits do not always center human rights in their analyses (Greene et al. 2019). Shared expertise and experience can strengthen the dispersed struggles for ethical and responsible AI.
The gap in AI-related human rights legislation
Despite calls to harness the normative, innovative, and mobilizational power of human rights, there exists a gap between the proponents' expectations of the human rights framework and the degree to which these expectations have been incorporated into domestic and international law. The idea of responding to technological challenges using the human rights framework has been around for some time (Perry and Roda 2016), but incorporating AI-related concerns into that framework has so far been piecemeal and fragmented. Despite pleas to update international law in light of AI challenges (Roumate 2021), international organizations have not produced binding treaties; instead, they have issued multiple resolutions and directives to address business responsibility, data governance, privacy, and so on.
The United Nations system offers a broad range of applicable, if vaguely defined, rights that can be interpreted as AI-relevant. The Universal Declaration of Human Rights is broadly cited as a generic, flexible, and agreed-upon document to derive a set of rights and obligations for the age of AI (Bachelet 2018). The Declaration is an intentionally generic document; thus, the specification of rights and obligations is left to other instruments. The International Covenant on Civil and Political Rights comes closest to an international treaty capable of anticipating some of the concerns around today's new and emerging technologies, AI included (Oberleitner 2021). Finally, arguably, the most consequential document from the standpoint of regulating private business is a set of guidelines: as calls to consider businesses alongside state actors as duty-bearers with human rights obligations have gained traction in the recent past (Abrusci et al. 2018), the UN Guiding Principles on Business and Human Rights (2011) have stepped in to set the standards for the roles and responsibilities of businesses—with implications for their development and deployment of technology.
The EU has taken the lead in legislating digital and AI regulation. Before the proposed AI Act of 2021, the regional organization had already legislated the GDPR in 2016, which came into effect 2 years later. Aiming to protect Europeans from the privacy risks of data-intensive technologies, the GDPR combines punitive ex post regulation with the principle of data protection by design (Article 25) (Barfield and Pagallo 2020) and Data Protection Impact Assessment plans (Article 35). Thus, when the AI Act was proposed in April 2021, the Commission's approach was to assume that AI was entering into legally regulated territory (Sartor 2020, p. 711). The proposed Act aims to ban a small number of AI systems that pose unacceptably high risks to fundamental rights while mitigating the risks arising from other systems through a mixture of ex ante impact and conformity assessments and ex post penalties. The proposal built upon the work of the Commission's High-Level Expert Group on Artificial Intelligence, which issued a set of definitions and guidelines, and ultimately a white paper on the regulation of AI. The guidelines list respect for human autonomy, prevention of harm, fairness, and explicability as four specific requirements for a fundamental rights approach to AI (Bærøe et al. 2020, p. 257). Finally, the EU's Digital Markets Act and Digital Services Act regulate some AI-related concerns, such as competition in the technology industry and content moderation on online platforms.
As other regions of the world are scrambling to include AI-specific regulation in their legal systems, it is worth noting that legal innovation has not been taking place in a vacuum (Desierto 2020); rather, some degree of AI-relevant legislation covering data privacy, consumer protection, and online content moderation does exist, and a combination of international treaty obligations and domestic pressure may oblige states to do more. The OECD's AI Policy Observatory lists 38 government initiatives for "regulatory oversight and ethical advice." 6 To give a few prominent examples of directly or indirectly AI-related laws and policies, the United Kingdom's Data Protection Act (2018), which implements the GDPR in domestic law, has survived Brexit. Members of the United States Congress have drafted a slew of federal bills on digital services and platform accountability between 2019 and 2022, although none of them has been voted on yet. Instead, some states and city councils have taken the lead in legislating on data protection, bot transparency, and facial recognition. Canada's proposed Digital Charter Implementation Act of 2022 promises to regulate data privacy and AI concerns. 7 In sum, the legal regulation of AI is still in its nascent stages around the world, but the current trend favors future legislation.
Rethinking the evolution of the human rights framework: legal remedies and beyond
Before scrutinizing the ways in which the human rights framework can contribute to or fall short of regulating AI, it is important to trace the interlocking components of the framework in history. The premise of this paper is that legal remedies, moral justification, and political analysis constitute mutually supportive components of the human rights framework. Of course, human rights are best known as a list of international treaties, domestic constitutions, and domestic statutes, but reducing human rights to law and courts misses how and why relevant law takes shape and to what effect. Political, economic, military, technological, and cultural transformations over centuries have constantly pushed jurists, intellectuals, politicians, diplomats, and concerned citizens to rethink what human rights are and should be, what kinds of social and political contexts make the violations of those rights likely, and which instruments are most appropriate to address violations. The bills of rights, incorporated into early-modern constitutions first in countries like the United States and France, reflected a political analysis of illegitimate power relations between a potentially absolutist and arbitrary state and citizens/subjects and justified a minimal set of constitutionally recognized individual rights vis-à-vis state power. Conflicts between rights in concrete situations have prompted the intervention of constitutional courts, lawmakers, and democratic publics to balance competing rights claims over time.
The ascendance of nonstate powerholders, like private companies in capitalist societies, led to a rethink of the classical liberal view and, consequently, to the codification of an expanding set of socioeconomic rights in the twentieth century. The harms imposed on women, children, the disabled, ethnic and cultural minorities, and refugees by states, private citizens, and, in some cases, nonstate armed actors necessitated new ways of thinking and, eventually, led to the birth of a novel set of international treaties, domestic and international monitoring bodies, and specialized bureaucracies tasked with human rights investigation. Thus, an examination of human rights in the age of AI should take into account the long legal, moral, and political evolution of the human rights framework—especially the balancing of contradicting rights and public good claims, the inclusion of businesses as human rights duty-bearers, the innovation in human rights instruments, and the potential human rights risks in the context of rapid change, including change as a result of novel technologies.
Perhaps what makes the human rights framework so ubiquitous yet also so difficult to analyze in isolation from other legal, moral, and political frameworks is its flexibility: most legal and policy interventions, even when addressing something other than rights violations, interact with the human rights framework. For example, antitrust law seeks, first and foremost, to ensure competitive markets but can also be seen at least in part as a human rights instrument insofar as it affords protections to consumers. It is important, therefore, to bring into discussion laws and policies addressing human rights concerns exclusively alongside other laws and policies regulating businesses or technology that overlap with human rights to a greater or lesser extent.
The promises of regulating AI using the human rights framework
The broad-based recognition of the human rights framework, its promise of legal certainty, and its flexibility begin to explain why its proponents claim it is appropriate for AI regulation, but it is necessary to delve deeper into the specific nature of AI tools to understand better the framework's promises and limitations. Today's AI, powered by large quantities of data and probabilistic algorithms originating from statistics, machine learning (ML), and research fields like computer vision, robotics, and natural language processing, presents unique challenges that require meticulous scrutiny of the legal, moral, and political evolution of the human rights framework. This paper argues that AI systems generate human rights risks that (a) interact with non-AI-related ethical, legal, and political challenges; (b) entail complex forms of responsibility above and beyond malicious intent; (c) require decisions about the balancing of harm and good; and (d) involve businesses and government offices in situations of diffuse moral and legal agency. The following discussion shows that the human rights framework can effectively help to address each component of AI-driven risk.
Interaction with non-AI-specific legal, moral, and political challenges
Conceptualizing the risks caused by AI as a human rights issue necessitates a profound understanding of the complex nature of these risks and harms. AI has become the catchphrase to capture all that is promising, but also dangerous, about new and emerging technologies. The primary goal of legal and moral frameworks addressing AI, human rights included, is to ensure that algorithms produce accurate, fair, and transparent results. A closer look into algorithmic risks and harms reveals that they arise from multiple causes, not all of which are strictly caused by AI. Obviously, many human rights violations around the world take place in the absence of an AI component. Even when AI technologies contribute to violence and violations, the specific responsibility of the AI component should be assessed rather than assumed.
Thus, understanding the negative impact of AI actually requires attentiveness to non-AI-specific considerations. For example, data privacy problems are not inherently AI problems, as not all governments and businesses that collect vast quantities of data use AI tools to process the data, and when they do, it is questionable whether the AI component is responsible for the violation of privacy rights. Yet, it is also fair to argue that the privacy issues societies are facing today have become pressing in great part because AI systems' thirst for data incentivizes data collection at any moral cost. Likewise, content moderation is a problem as old as the press, but the algorithmic amplification of false, manipulative, or malicious content necessitates a fresh approach that combines earlier considerations around content moderation with AI-specific problems. As these examples suggest, the challenges raised by AI systems intersect with related yet distinctive problems concerning data, digital platforms, journalism, robotics, warfare, biomedicine, and more. Some are AI-specific, but most are AI-relevant or AI-adjacent. Therefore, no single AI regulation can address these challenges in a satisfactory way. The appropriate response is likely to incorporate multiple value systems and policy interventions.
The human rights framework has a long history of interaction with other normative frameworks. Many human rights problems are covered under privacy law, data protection law, competition law, consumer protection law, criminal law, administrative law, and of course, international humanitarian law (for an appraisal of different legal systems in the regulation of AI, see Zuiderveen Borgesius 2020; Sartor 2020). The human rights framework can thus accommodate multiple value systems and multiple substantive issue areas and offer national legal systems sufficient leverage to frame the challenges arising from emerging technologies as partly covered by the existing legal system and partly requiring novel law and policy to address the problems specific to those technologies.
Complex responsibility
AI systems complicate the nature of moral and legal responsibility. Producing or using technology with the explicit objective of causing harm offers a simple benchmark for responsibility. Governments using facial recognition to arrest or kill dissidents or content creators willfully producing false, misleading, or violent content to maximize engagement deserve to be accused of malicious intent in a straightforward way. However, AI harms do not always result from explicitly malicious decisions on the part of the developers, platform providers, or end users. Algorithmic harm usually includes a mixture of negligence, unconscious bias, and willful ignorance of potential harm. As the headline-grabbing scandals suggest, the unpredictability of algorithmic output prompts some of the harms, but ultimately, even when businesses can count on foresight with regard to harm, their single-minded focus on profit maximization and shareholder value leads to morally and legally questionable decision-making.
The human rights framework can effectively address the complex nature of responsibility arising from emerging technologies in great part because the framework has also evolved to both sanction malicious intent and acknowledge less obvious forms of responsibility. Obviously, the human rights framework tends to make the headlines when intentional and massive violations committed by state or nonstate perpetrators prompt a demand for retributive justice. However, the framework can provide guidance even in the absence of intent to harm.
Indeed, it has been suggested that the future of human rights violations will be "subtle, diffuse, and sophisticated" (Soh and Connolly 2021). Human rights can acknowledge and sanction complex responsibility in a variety of ways. For example, criminal law can prosecute humans who produce, program, market, and employ robots for negligence in the absence of intentional harm (Gless et al. 2016), but limited liability schemes under civil law are likewise avenues for justice for unforeseen consequences and negative externalities. As the increasing attention to preemptive measures like impact and risk assessments and value-sensitive design suggests, there exists an affinity between the human rights framework's focus on ex ante measures in situations of complex responsibility and contemporary responsible technology discourse.
Balancing of harm and good
A related issue is the conceptualization of the potential harm in relation to the potential good. AI systems are generally viewed as potentially beneficial tools that may generate unintended, unforeseen consequences or foreseeable negative externalities. Needless to say, there are, on the one hand, technical back-end applications of AI with little or no risk (say, an AI system deployed to speed up a computer system or a server) and, on the other hand, applications that are overwhelmingly harmful (say, an AI-powered lethal weapon system with no human oversight and low precision). However, most applications debated in the context of AI regulation tend to embody the promise of benefiting at least some people while imposing potential or real harm on others. As Land and Aronson (2020, p. 236) state: "for technology to have a transformative effect, we must be far more mindful of who builds it, for what purposes, and what kinds of power and privilege are embedded within it." In addition to their distributional consequences, AI systems may introduce conflicting goals. For example, respect for data subjects' privacy rights and maximizing accuracy may be such conflicting goals, as privacy rights are likely to set limits on how much and what kind of data can be collected. Therefore, regulating AI requires balancing between a multiplicity of interests and values.
The balancing approach is something the human rights framework is particularly well placed to accommodate, as it has evolved to work through multiple values alongside human rights, such as public safety and business efficiency, and to address the potential for conflict between different rights claims (Pizzi et al. 2020, p. 163). The framework acknowledges absolute rights (e.g., freedom from torture) from which derogation is categorically disallowed and for which balancing is therefore impermissible. However, many human rights are interpreted as qualified rights (e.g., the right to privacy) that may be balanced against other rights or public goods. In fact, constitutional courts and courts of appeals routinely conduct balancing analysis. The human rights framework, therefore, can be extended to an analysis of AI benefits and harms along similar lines.
The risks posed by new and emerging technologies may cross the threshold of what is tolerable, making the elimination of risk by banning the technology a plausible option—as the EU's proposed AI Act does for a small number of applications, like social credit systems. Where balancing is possible, multiple interests and values should be considered. Kriebitz and Lütge (2020, p. 87), for example, recommend using the principle of consent (i.e., the individual giving consent to the transfer of rights), the harm principle (i.e., the risk of harm to third parties), and the principle of proportionality (i.e., the notion that the transfer of rights should be proportionate to the potential benefits of the transfer). The specifics of balancing will of course fall under the authority of future lawmakers, democratic publics, and courts, but the general argument stands: the human rights framework is well positioned to equip decisions concerning contradicting rights claims around AI with legal, moral, and political justification.
Diffuse moral and legal agency
The producers, vendors, and end users of AI systems are usually different businesses (not to mention different teams and individuals within businesses), as are the data controllers and data processors. Likewise, those affected by the negative impact of AI systems may be intermediaries (such as online platforms), end users, or even individuals who are not end users (e.g., mortgage applicants working with a bank using AI-generated creditworthiness scores). When these systems produce negative impact, multiple private actors of varying sizes may be at fault, in addition to government agencies that fund, develop, use, and oversee the technology in question. What is more, AI systems by definition involve decisions by autonomous nonhuman agents, which complicates the determination of moral agency further.
While there is no easy and general way to assign moral agency, the determination of morally and legally responsible agents is a challenge that the human rights framework has increasingly adapted to, thanks both to its state-centric origins and to its evolution away from them. Human rights law remains by and large state centric in its conceptualization of individual liberties, allocation of responsibilities, and prescription of enforcement power. This may be said to weaken the power of human rights in regulating new technologies that are primarily developed and deployed by private businesses (Liu 2018, p. 210). However, there are at least three reasons why the orientation toward state responsibility and enforcement matters: first, military institutions have been among the chief promoters of computer science technology and, increasingly, AI tools (Liu 2018, p. 200; Heller 2021), thereby shaping the evolution of technological change. Second, the conventional role of human rights as regulating state–citizen relations has become increasingly relevant for protecting citizens from technological harms, as state institutions operating under a variety of regime types have been justifying pension, healthcare, policing, and judicial decisions on algorithmic output (for policing, see Brayne 2017). Finally, states enjoy substantial (albeit not unlimited) power to set the standards for private business. Therefore, even the restricted notion of human rights as holding state actors accountable has much to offer in the regulation of AI.
What is more, it is acknowledged that threats to rights have changed over time (Gabriel 2020), and that human rights practice has adapted to acknowledging the responsibility of nonstate actors, businesses included (AlgorithmWatch 2022; Ebert et al. 2021). In fact, the notion that businesses are duty-bearers (Abrusci et al. 2018; Donahoe and Metzger 2019) is becoming more entrenched in human rights law. Normative scholarship on business practice has been increasingly appreciating the "interlinkages" with human rights (Deva et al. 2019), including calls for going above and beyond the due diligence framework to center corporate ethics in human rights (Gregg 2021). Thus, the human rights framework is likely to interact with specialized areas of business-related law, like data privacy law (Zuiderveen Borgesius 2020) and tort law (Pasquale 2019), 8 business-related institutions, like regulatory bureaucracies (Calo 2015) and public procurement (Beduschi 2020), and business-related practices, like standardization (Beduschi 2020), and value-based design (Aizenberg and van den Hoven 2020).
The challenges of regulating AI using the human rights framework
The previous section defends the appropriateness of the human rights framework for the regulation of AI. While the framework is on the whole useful, it is also necessary to point out that it suffers from gaps and weaknesses that are not insurmountable yet still require a rethinking of the framework's limitations. Before digging deeper into the complexities of AI technologies, it is worth highlighting that nonratification and noncompliance remain serious problems despite the fact that human rights norms are known, understood, and operationalized at a much higher rate than any other applicable ethical framework. More than seven decades after the adoption of the Universal Declaration, many countries have either not ratified the fundamental rights treaties or not incorporated them into their domestic legal systems adequately. To give a relevant example, the United States, home to most Big Tech companies and AI startups, has not ratified the International Covenant on Economic, Social and Cultural Rights, which, among other things, acknowledges the human right to enjoy the benefits of science and technology (Chapman 2009; Haugen 2008). In other words, the world into which today's AI technologies were born is characterized by the limited codification and even more limited implementation of fundamental rights, rather than one of global respect for them.
In addition to problems haunting all human rights practice, AI systems produce specific challenges that the human rights framework in its current form is inadequate to address. The identification of these challenges is premised on the notion that technologies directly and indirectly related to AI constitute a "sociotechnical system" that generates impact above and beyond what individual technological applications do (Magrani 2019). This paper identifies the following specific challenges: (a) structural problems like inequality between and within countries may require normative visions and concrete practices that complement or supplement the human rights framework; (b) data-intensive business models may produce intrinsic harms that outstrip the capacity of the human rights framework, with its emphasis on case-by-case procedural remedies, to cope; and (c) the negative impact of AI requires a much deeper involvement with scientific and technological debates than what the proponents of the human rights framework tend to acknowledge.
Increasing inequality within and between countries
Today's AI tools are produced and used primarily in a few global hubs (North America, China, and Europe), and within those geographies, by a small number of companies with the data collection, storage, and processing infrastructure and capabilities. The move toward centralization and monopolization runs the risk of exacerbating the North–South divide in the production and enjoyment of technology, as people in the Global South face the prospect of exclusion from developing the AI tools, having their interests and values ignored in debates on AI (Wong 2020), and ultimately being misrecognized or targeted by some of these tools (Sandvik 2019).
Emerging technologies may entrench existing inequalities within societies, as well (Land and Aronson 2018). Technological transformations since the 1970s have taken place in the context of widening income and wealth inequality in the Global North. Even if technology cannot be singled out as its cause, it is by now clear that the production, consumption, and distribution practices accompanying modern technology do not by themselves address these issues. Furthermore, inequality involves more than individual differences in financial assets: the information asymmetry between data-intensive businesses and ordinary citizens has been widening, too (Kriebitz and Lütge 2020).
There is no doubt that the human rights framework can be expected to regulate, and in some instances constrain, spheres of life in which power relations are embedded (van Veen and Cath 2018; Walters 2001). It acknowledges power differentials in society as potentially suspect. To be more specific, the defense of vulnerable persons in the face of a narrowly construed set of powerful agents (including robotic ones) grounds ethical considerations around emerging technologies in a realistic discussion of power and accountability (Liu and Zawieska 2020). However, the framework is more adept at identifying specific, case-by-case problems arising from the use of technology than at transforming the underlying conditions that produce or exacerbate those problems. Human rights advocacy is no stranger to movements for income and wealth equality, and gender, racial, and environmental justice, but still, the framework has not succeeded in incorporating substantive and transformative notions of justice into its modus operandi. For example, the enforcement of socioeconomic rights has remained minimal, piecemeal, and arbitrary. As the push toward stopping the worst excesses of specific technological innovations gains pace, proponents of the human rights framework should not lose sight of these broader demands for equality and justice.
Rethinking the data economy
It has been argued that many sectors have become data-intensive in the modern economy. This "datafication" of the economy (Sadowski 2019) transforms design, production, marketing, and sales decisions. What is more, data-intensive technologies do not only quantify and predict human characteristics and behavior; they make humans more quantifiable and predictable (Unver 2017). This kind of transformation does not come from any particular decision or event at the company or sector level; rather, the cumulative effect of minor changes generates large-scale societal transformations over time. Writing about autonomous vehicles, Liu (2018) defines what he calls the "holistic impact of AI" as the "cumulative, structural, and systemic impact of autonomous vehicles that operate in a network" (p. 200). Regulating any one decision or business practice may not suffice to mitigate or eliminate these long-term, cumulative harms.
Furthermore, it is worth debating the possibility that the more these technologies control the flow of information and orient human behavior in line with the data demands of the underlying business models, the more they are likely to entrench and deepen power differentials among citizens and between citizens and businesses or governments. It has been suggested that corporate actors can exploit their position to burden consumers and workers with the harms arising from the use of faulty data unless regulation cultivates responsibility (Pasquale 2019, p. 1920). While not strictly about AI, the emerging business model that is crystallized under the term "sharing economy" operates precisely thanks to such information and power asymmetries (Calo and Rosenblat 2017). Some AI-driven systems, like Meta's (formerly Facebook) combination of recommendation and advertising tools, have been criticized for amplifying false, misleading, and hateful content as an intrinsic feature of their business model rather than as an unfortunate side effect of it (Vaidhyanathan 2018).
As with addressing inequality, intervening into AI systems and the data-driven business models they are embedded in is likely to fall outside the human rights framework's capabilities. The framework has evolved to develop responses to business wrongdoings over time, but still, it is deferential toward individual business decisions insofar as they do not produce harms recognizable by the law. As the datafication of the economy and society and the conversion of large segments of labor into gig workers continue unabated, the framework may remain confined to mitigating the worst excesses while leaving intact the structure making those excesses possible. Thus, its likelihood of addressing the negative impact of cumulative, transformative change hinges on its ability to move beyond its own case-by-case, procedural, and deferential approach to regulating the technology business.
Regulating AI as a technical field: error, explainability, and scientific foundations
The highly technical nature of today's AI systems should prompt a discussion about whether and to what extent proponents of the human rights framework should get involved in the nuts and bolts of AI development. Examples concerning statistical error, the explainability of algorithms, and the problematic scientific foundations of some AI systems show that human rights practitioners may have to get more involved with technical discussions to address the limitations and inherent flaws of AI systems.
Most contemporary applications of AI are probabilistic in nature; that is to say, the algorithms derive from (predominantly Bayesian) statistical models that iteratively calculate the likelihood of achieving substantively desirable outcomes (Russell and Norvig 2013). The fit between the model and new data is calculated probabilistically. In other words, some degree of error is intrinsic to estimations and predictions. From potentially affected citizens' point of view, a medical test should not produce too many false negatives (i.e., failure to diagnose an actual case of illness). Likewise, a crime prediction algorithm should not produce too many false positives (i.e., predict nonoffenders as likely criminals), but the operators of such a system (i.e., the law enforcement community) may not prioritize the perspective of the adversely affected. Put differently, there are multiple definitions of error built into these systems that defy simple identification of the risk of harm coming from AI systems. Proponents of the human rights framework should thus be much more attuned to the multiple definitions and political implications of error in AI systems.
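The trade-off between these definitions of error can be illustrated with a minimal sketch, using hypothetical labels and classifier scores rather than any real system. Moving the decision threshold to reduce false negatives (missed cases) tends to increase false positives (people wrongly flagged), and vice versa:

```python
# Hypothetical ground truth and classifier scores (1 = positive case, e.g.,
# "will reoffend" or "has the illness"); not drawn from any real system.
y_true = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]
scores = [0.9, 0.7, 0.55, 0.8, 0.2, 0.6, 0.1, 0.85, 0.3, 0.45]

def error_rates(y_true, scores, threshold):
    """Return (false positive rate, false negative rate) at a threshold."""
    y_pred = [1 if s >= threshold else 0 for s in scores]
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    # FPR: nonoffenders flagged as risks (harms those wrongly flagged).
    # FNR: actual cases missed (harms patients or potential victims).
    return fp / y_true.count(0), fn / y_true.count(1)

# The same classifier output yields different error profiles depending on
# where the decision threshold is set.
for threshold in (0.75, 0.5, 0.25):
    fpr, fnr = error_rates(y_true, scores, threshold)
    print(f"threshold={threshold:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")
```

Which threshold is "right" is not a purely technical question: it allocates harm between those wrongly flagged and those whose cases go undetected, which is precisely why the definition of error deserves human rights scrutiny.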
AI algorithms perform classification, prediction, and inference tasks. These tasks typically rely on complex and iterative mathematical calculations that defy simple interpretations of causal inference, and in any case, the underlying causal processes do not always interest researchers and practitioners. In simple terms, the developers of these technologies do not always know how or why they work. Given that some applications of AI are purely technical, perhaps interpretability and attention to causal mechanism are not always pressing problems. However, failure to understand and explain potentially life-transforming decisions raises concerns about due-process rights. A 2016 Wisconsin Supreme Court case (State of Wisconsin v. Eric L. Loomis), in which judges refused to invalidate a sentence that was based in part on a proprietary software tool's assessment of the defendant's recidivism risk, was argued over such considerations. 9 Yet, the principles of explainability and transparency, touted in the AI ethics and human rights communities as potential remedies to the risks arising from algorithmic decision-making, may fail to deliver the expected results, not only because the proprietary nature of the data and the algorithms (Stilgoe 2018, p. 57) puts the human rights demands at odds with intellectual property rights but also because full explainability, interpretability, and transparency may simply be extremely challenging if not impossible from a technical standpoint (Rossi 2019, p. 127). This is not to dismiss the demand that an algorithmic decision be made as transparent as possible but, rather, to point to the inherent limits of full transparency in this line of work. 10
So far, the discussion has centered on AI systems that deliver some desired substantive outcome within a range of error. A greater risk arises when those systems deliver outcomes that are simply wrong by factual standards. Obviously, most AI systems are expected to produce reasonably accurate outcomes verifiable by area experts, but the risk of failure at least in some AI applications cannot be altogether dismissed. For example, some AI and AI ethics researchers have been warning against emotion recognition as an inherently flawed technology devoid of scientific foundations and reminiscent of the early-modern pseudoscience of phrenology (Crawford 2021), yet this has not stopped academic journals from publishing articles and companies from funding AI systems that use emotion recognition. From a human rights standpoint, this is not merely a question of eliminating or mitigating negative impact; rather, it is a question of whether one area of AI-driven technology, namely emotion recognition, should be banned entirely.
Moreover, the lack of attention to causal mechanisms, described above, can produce outcomes more sinister than opaqueness: as the number, interrelatedness, and autonomy of AI applications increase, their unpredictability may produce multiplier effects. At their worst, the optimization functions underlying AI systems may end up reinforcing the problem of endogeneity (Zittrain 2019) and thereby producing substantively incorrect output. As Cheshire (2017) puts it, AI systems may encourage “loopthink,” understood as uncritical, unreflective decisions produced by autonomous agents. For all the reasons described in this paper, substantively incorrect output of this nature is as much a human rights problem as it is a business problem.
In conclusion, there are ongoing debates about the scientific validity of some AI systems. As discussed earlier, the human rights framework does not typically ban entire lines of business or scientific (or pseudoscientific) inquiry. Yet, allowing businesses and researchers to self-regulate practices with serious flaws and potential negative consequences runs the risk of driving human rights to irrelevance. Again, regulating only the worst excesses may not always be a workable strategy. Thus, human rights scholars and practitioners may have to participate in technical AI debates to bring the promise of human rights up to date in a fast-changing technological landscape.
Conclusion
Since the adoption of the Universal Declaration of Human Rights (1948), if not before, human rights have embodied aspirations of equality, freedom, and justice for all and, all too often, their frustration. The rapid changes fueled by contemporary AI systems signify yet another challenge. I argue in this paper that the human rights framework is endowed with conceptual, normative, and legal tools to address the challenges caused by AI systems: challenges that interact with non-AI-related ethical, legal, and political problems, involve unforeseen consequences and negative externalities, result from intentional or unintentional choices, require a balancing of harm and good, and implicate multiple businesses (and at times government agencies) as potential duty-bearers. For all its potential, however, future discourse and practice around the human rights framework should also take into account three core problems: AI reproduces long-term, structural problems going beyond issue-by-issue regulation, is embedded within economic structures that produce cumulative negative effects, and introduces additional challenges that require a discussion about the relationship between human rights and science and technology.
I believe the conclusions of this paper echo some of the findings and practical recommendations of the AI ethics and human rights research and scholarship communities. Assuming that some businesses are at least minimally interested in enacting ex ante impact and conformity assessments reflecting attentiveness to human rights, frequent and systematic interaction between the AI and human rights communities, human rights training for AI developers, and the formation of multidisciplinary teams can be beneficial (Donahoe and Metzger 2019; Howard and Borenstein 2018; Risse 2019). Such interaction should address shared agendas as well as divergence between the human rights framework and other normative frameworks, or what Floridi (2018, p. 5) calls “soft ethics.” However, human rights work cannot be left to the goodwill of businesses alone. If and when AI-specific laws sanction negative impact resulting from AI systems, human rights norms embedded in national constitutions and laws and international treaties should guide those laws for the reasons explained in this paper.
Legal regulation in general is criticized for seeking to rectify particular classes of problems while failing to address the social–structural causes underlying them. For that reason, if or when rights-affirming AI treaties and laws are legislated, politicians, legal professionals, and human rights practitioners should take the cumulative negative impact of AI applications and the broader structural problems identified in this paper seriously. Learning about the technical intricacies of AI systems and their connection to economic, political, social, and cultural power relations will be necessary to address future challenges effectively.
This paper offers a bird's-eye view of the relationship between human rights and responsible technology in 2023—a time of little or no AI-specific legislation around the world. Given these scope conditions, future research should examine the impact of specific laws, policies, and court decisions on the conduct of AI developers, deployers, and users, especially businesses and governments. The future will bring, in the words of legal scholar Matthijs Maas, a combination of legal development, legal displacement, and legal destruction (Maas 2019); therefore, research should also take into account the ways in which social movements, politicians, businesses, developers, and other concerned actors shape the process and content of future legal and political developments.
Acknowledgments
The author would like to thank Peter J. Verovšek for his feedback on an earlier version of this paper.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
