Abstract
In this article, we show how the European Union styles itself as the sole political actor able to effectively protect its citizens from the threats engendered by new technologies while balancing the relations between digital markets and the member states through ground-breaking regulations. To this end, we trace manifestations of this approach within the proposed Artificial Intelligence Act. We argue that the Act, while evoking the value of fundamental rights protection, does not support genuine emancipation of citizens in the digital world, as it hands over the issue of protection to expert bodies and institutions. The European Union's unique approach to regulating the digital economy does not reflect the triple movement postulated in Nancy Fraser's critique of Karl Polanyi's double movement, as it does not include solutions that would allow societal actors to enforce their rights concerning artificial intelligence. Thus, the Act serves mostly as a tool to build the European Union's political position.
Introduction: The European Union's Digital Agenda
The most powerful global companies that dominate today's economic rankings and markets are of one particular kind: they are big technological platforms offering digital services and products, basing their business models on datafication and on extracting value and utility from data. Setting the rules for an inherently globalized and disembedded digital economy requires political agency at the supra-national level. At least, such a line of argumentation has been taken by the European Union (EU) since 2010, a symbolic caesura marked by the communication A Digital Agenda for Europe (Commission, 2010), which for the first time laid out the EU's legislative plans for new technologies. In this area, as in many others, by wielding its economic potential and large consumer pool, the EU has been able to induce market players to adjust their policies to the new standards – at least to a certain extent. Additionally, the so-called Brussels effect (Bradford, 2012, 2020) has encouraged many non-European states to emulate the EU's approach in numerous policy fields (Greenleaf, 2021; Gunst and De Ville, 2021; Mukiri-Smith and Leenes, 2021; Rustad and Koenig, 2019).
In this article we assume that the EU's activities related to the regulation of digital markets have been consonant with the essential blueprint of European integration, which consists of three parallel strands, namely (a) market-enhancing integration, (b) market-shaping integration and (c) the creation of a European area of non-discrimination (Höpner and Schäfer, 2012). The first strand, also present in the EU's strategies concerning digital technologies, is focused on the removal of obstacles that fragment the market: Europe is still ‘a patchwork of national online markets’ (Commission, 2010).
In fact, this approach is no different from those applied in previous iterations of market integration: the EU aims at building the digital single market by way of combating the fragmentation of the regulatory framework adopted by the states (e.g., Commission, 2015: 17). Within the first and the second strand of integration, the EU assumes the role of the only political actor able to integrate the market and simultaneously shape it. In this article, we aim to show that to legitimize this ambitious political role, the EU instrumentally emphasizes the values guiding the third strand of integration activities, which aim at creating a European area of non-discrimination (Höpner and Schäfer, 2012).
Initially, EU law tackled discrimination on the basis of nationality, as different treatment based on this criterion would undermine the very foundation of free movement. The first area in which prohibition of discrimination on grounds other than nationality was implemented was equal pay for men and women for the same work. This provision served as a tool for ensuring that states would not be able to use gender differences in pay as a competitive advantage (Crovitz, 1992: 478). With time, the EU expanded its ambitions in relation to non-discrimination, both in terms of the grounds on which discrimination is prohibited (e.g., ethnic or social origin, sexual orientation), and in terms of the scope of areas that are covered by the EU's anti-discrimination regulation, such as access to goods and services, or employment and occupation (Belavusau and Henrard, 2019; Ellis and Watson, 2013; Schiek, 2002). The adoption of the Charter of Fundamental Rights of the European Union (2012) confirmed the status of non-discrimination, along with other fundamental rights, as principles and rights applicable to the whole body of EU law and not merely a tool supposed to support market integration. The Charter – and, therefore, fundamental rights – has been ‘placed at the heart of the EU legal system’ (Sánchez, 2012: 1565).
Nowadays, the challenges related to anti-discrimination are especially evident in the application of new technologies. Considering the EU's interest in addressing regulatory issues connected to technological advances in its legislative proposals, it seems only natural that the Commission attempts to keep pace with the constant emergence of new technological solutions and the challenges they raise. One example is the proposed Artificial Intelligence (AI) Act (Commission, 2021b) containing provisions that aim to curb the double-edged threats to non-discrimination posed by both the market (e.g., the surveillance of consumers by companies that may result in discriminatory treatment) and nation states (e.g., the surveillance of citizens by states that may result in discriminatory treatment).
In this article, we propose to interpret the EU's political agenda in the area of the digital economy through the lens of the triple movement concept put forward by Nancy Fraser on the basis of the well-anchored concept of double movement introduced by Karl Polanyi. Reusing this framework not only helps to theorize the interplay between the different strands of European integration, but it also shines a light on the political agenda of the EU and how it tries to legitimize its upper hand in relations with both nation states and companies by referring to its moral high ground in the area of non-discrimination. To this end, we first concisely outline the concept of double movement turned triple by Nancy Fraser. Then we try to show why the appeal to non-discrimination serves well as grounds for building the EU's political legitimization in the context of the digital economy. In the core part of the article, we examine the rhetoric of non-discrimination overlaying one of the EU's newest attempts at regulating digital technologies, namely the proposed AI Act. We then proceed to discuss why the non-discrimination provisions within the Act amount to a fictitious triple movement, still deftly used by the EU to legitimize its claims towards nation states and the digital market. We show that in the proposed shape of the AI Act non-discrimination becomes an instrument of the EU's policy, not an emancipatory tool for its citizens.
From Double to Triple Movement
When theorizing the relations between the state and the market it is almost a reflex response for many social researchers to reach for a handy, if Hegelian, framework proposed by Karl Polanyi (regarding the EU, see Ashiagbor, 2013; Bogoeski, 2021; Caporaso and Tarrow, 2009; Goldmann, 2017; Höpner and Schäfer, 2012; Jones, 2003; Mabbett, 2014). The Polanyian perspective asserts a curious dynamic binding the state and the market, called the double movement. To operate, the market needs the institutional orderliness provided by the nation state, as it serves to engender and sustain such crucial components of economic activities as nature (epitomized by land), people (who deliver labour) and norms of exchange (which could materialize in the form of money). This does not, however, stop markets from relentlessly commodifying land, people, and money. As a result, they are turned into ‘fictitious commodities’ – ones not produced for sale, but which are nevertheless bought and sold. Economic systems simply cannot exist outside societies. In Polanyi's words, they are, as a rule, embedded in social relations (Polanyi, 2001: 279). In the course of the ‘great transformation’, however, the embedded economy (based on tradition, redistribution or reciprocity) is being replaced by a disembedded economy driven by the logic of gains.
The state cannot afford to lose its political legitimacy over its territory, citizens, and finance to commodification which artificially excludes them from the social and political structure. Pressured by society, it sets out to mitigate the negative effects of commodification by introducing laws preventing the appropriation of land, shielding the rights of workers through social protection mechanisms (e.g., factory laws and social legislation – Polanyi, 2001: 87; Knegt, 2017 – or the political and industrial working-class movement), and restricting the flow of money. The market is always trying to disembed – or detach – itself from its social environment by actively skirting around the rules imposed by the state, and the state strives to re-embed markets in – or subordinate them to – its political constraints. The market and the state co-create and oppose each other, but this seesawing of double movement from disembedding to re-embedding may go awry: completing his magnum opus in the 1940s, Polanyi witnessed how the struggle between the state and excessive commodification may result in authoritarian rule.
We assume that the basic logic of the double movement is still, to a certain extent, viable and theoretically applicable: political actors aim to structure the activities of economic actors, while economic actors are bent on commodification. At the same time, we go along with thinkers such as Nancy Fraser in noticing that it insufficiently embraces the complexity of social reality. Fraser expressly notes that the state is not always an ally of the people, nor the market its enemy (Fraser, 2011). The state, too, can solidify economic and social injustices. Its laws may be shaped to perpetuate the subjugation of women or social minorities. In authoritarian states, certain social groups may be completely excluded from social protection. And marketization may bring emancipation, for example for women who enter the labour market, as their previously unpaid work becomes commodified labour. Fraser's development of Polanyi's theory shows that there is a need to consider the ongoing evolution – or rather co-evolution – of the state, the market, and the society.
More importantly, she suggests adding an additional axis of analysis, thus making the double movement a triple one: namely, the normative aspiration to ‘remove obstacles that prevent some people from participating fully, on a par with others, in social life’ (Fraser, 2011: 149). Emancipation gains ground through activities undertaken by social movements and other grassroots actors seeking equality and freedom within the public sphere of civil society, bent on fighting domination perpetuated by both the market and the state. The public sphere thus becomes ‘a testing ground through which the norms infusing social protection may be forced to pass’ (Fraser, 2011: 148). This ‘civil publicity’ reshapes the relationships between society and the state, as it proposes an alternative and more participatory-democratic fashion of implementing social protection, replacing the standard top-down state policies.
Although Fraser's definition of emancipation is somewhat elliptical, it combines the aspects of non-domination and participatory parity (Sparsam et al., 2014). We argue that, when translated into legal norms, emancipation includes, on the one hand, substantive provisions concerning non-discrimination and, on the other hand, procedural solutions which ensure that individuals and civil society at large are empowered and able to enforce their rights. Like both elements of the double movement, which took the form of legal reforms simultaneously enabling and setting boundaries for commodification and disembeddedness (Caporaso and Tarrow, 2009: 595–596), requirements for emancipation may translate into legal provisions on equal rights, prohibition of discrimination, or guarantees of specific positive rights for unprivileged groups, supporting their empowerment. However, in order to be genuinely emancipatory, these solutions must be accompanied by mechanisms that enable individuals and societal actors to enforce them.
In this article, we propose to look closely at regulations that support emancipation within the emerging legal framework concerning the use of digital technologies. We link the elements of the triple movement with the three strands of the EU's integration (Höpner and Schäfer, 2012): market-enhancing integration supports commodification, market shaping goes in the direction of social protection, and non-discrimination is an indispensable element of emancipation, in Fraser's terms. However, our analysis reveals that there are other indispensable elements constituting the regulatory framework supporting emancipation, such as rights enabling the enforcement of legal protections, which are missing from the third strand of the EU's integration in the context of AI.
The Rise of the Digital Economy and New Threats to Emancipation
Although we previously declared our belief in the viability of the basic logic of the double movement, it is quite evident that with regard to the digital economy the double movement is out of joint. Left to their own devices, nation states face mounting difficulties with subjugating the increasingly globalized markets to their locally anchored rules. The saturation of private and public spaces with digital devices, from smartphones to smart CCTV cameras, is propelling the datafication of nearly every social activity. Billions of data points, continually generated by connected users and connected machines, are collected by tech companies to be sold and then used to generate predictions, to personalize products and services, and to optimize production and sales throughout the value chain. Through this ‘data gaze’, companies obtain direct access to the most intimate aspects of the lives of consumers. Ubiquitous computing entails ubiquitous commodification, engulfing even those areas of our social life that so far have remained relatively intact. It may even be argued that data – serving as a meta-representation of the social, the economic, and the political – is becoming a new, fictitious commodity in Polanyian terms, as it was not produced for sale, but nevertheless became amenable to sale (see recent studies by Athique, 2020; Bottis and Bouchagiar, 2018; Grabher and König, 2020).
Moreover, in the new iteration of the knowledge economy dubbed the digital economy, other fictitious commodities, particularly labour and money, are being thoroughly datafied (see Chen et al., 2020; Kenney et al., 2021; Terranova, 2000; Tubaro, 2021; Wood et al., 2019), i.e., they have entered the market through the process of datafication. Datafied labour is traded through digital platforms, and datafied money (turned into bitcoins) is adding to liquid financialization on a global scale. Communication through digital devices integrates different processes of commodification, creating one single globalized market process. In other words, in the digital economy commodification goes both deeper and wider.
Additionally, digitalized markets are much more disembedded than traditional ones. The operating model of tech companies that have masterminded the process of commodification through datafication is based on the very assumption of disembeddedness.
Platforms specifically emphasize that their operations boil down to virtual intermediation and linking of the sides of the market, and, naturally, that they are not responsible for any negative outcomes. In other words, there is no ‘material core’ to their business activities that can be subjugated to the localized rules of the nation states: ‘Digital platforms obviously challenge the law, and this is a key feature and consequence of their operations. They like to show how the law is out-of-date with the new economy, and they even appear alien to the law. Indeed, they tend to negate the territorial aspect of the (State) law. To be constrained by rules applicable on a national territory appears an anachronism for platforms which have a global perspective and outreach’ (Strowel and Vergote, 2018: 9).
Normative disembeddedness is deftly practiced by global platforms operating in the labour market (Wood et al., 2019) and those offering intermediation in ride-hailing (Katta et al., 2020; Rekhviashvili and Sgibnev, 2018) and accommodation markets, which often results in the harsh alienation of workers and multi-pronged collateral damage to local markets. The rising adoption of AI technologies in data analysis processes by companies across the board is creating yet another level of challenges for attempts to re-embed the digital economy. The lack of clarity regarding who bears responsibility for actions taken with the use of AI, or the inability to trace the reasoning behind decisions made using AI-based systems, are just a few examples of the problems that broaden the gap between the rules governing the (disembedded) digital world and those that are applied by traditional institutions based on existing legal frameworks.
This normative and factual disembeddedness of the digital markets combined with ubiquitous commodification through datafication is becoming the primary threat to emancipation in the areas governed by digital solutions. Non-discriminatory participation in social life permeated by digital infrastructures turns out to be illusory and elusive. The datafication process is pervaded with structural discrimination hard-wired into the content of databases and the construction of algorithms. Individuals, both as citizens and as consumers, are unable to control the process of datafication. Indeed, often they are not even aware of it, or else they unwillingly ‘pay’ with their data for the comfort of using digital services. The power of the state is becoming increasingly inadequate to the challenge of dealing with powerful digital companies. Due to the mind-boggling pace of the cumulative technological innovation that is taking place as part of the emerging fourth industrial revolution, as well as efforts of Big Tech, political actors are suffering from ‘cultural lag’, meaning that cultural and institutional norms cannot keep pace with technological changes, and regulation is always playing catch-up (Ogburn, 1957).
The picture is further complicated by the fact that public institutions’ infrastructures have become increasingly entwined with digital infrastructures provided by Big Tech (Grabher and König, 2020). Nation states use digital technologies to optimize their administrative processes and personalize their range of public services. But they have also quickly recognized the potential of datafication in the core area of their traditionally construed sovereignty – upholding public order through control and surveillance of their own citizens. The oft-cited case of the Social Credit System that is being introduced in China may be complemented by numerous examples of states using digital technologies to undermine emancipation: in supposedly democratic Poland, a member of the EU, the government allegedly used the Pegasus system to spy on its own citizens.
In the context of the digital economy, Nancy Fraser's argument seems strongly validated. Both the state and the market may harm emancipation, understood as participatory parity and non-domination, through the use of digital technologies. In the following section, we will demonstrate that the EU recognizes the necessity of protecting its citizens with respect to certain aspects that constitute the regulatory dimension of emancipation. In accordance with the third strand of integration (non-discrimination), the EU endeavours to regulate digital markets by concurrently shaping and enhancing them.
Manifestations of the Triple Movement in the AI Act
The manifestations of rules aimed at safeguarding market operation (market enhancing integration), social protection (market shaping integration), and emancipation (which includes the prohibition of discrimination) are inscribed within the draft AI Act, published by the Commission in April 2021. The AI Act proposes to adopt a horizontal legislative instrument that would be applicable to all AI systems launched or used in the EU.
A thorough analysis of the AI Act identifies how the proposed rules seek to re-embed digital markets within the socio-political context of European integration, which attempts to balance marketization with social protection enriched by an emancipatory approach, while simultaneously recognizing the new dangers coming from the state when it is fortified by digital technologies. This tension permeates the introduction to the proposal.
The main gist of the Act revolves around three propositions:
– to enhance the (digital) market by preventing states from fragmenting it through particularistic regulations – thus legitimizing the need for a strong regulatory hand on the part of a supra-national political actor;
– to embed the (digital) market by narrowing down the regulatory focus to identifying particularly dangerous systems dubbed ‘high risk AI’;
– to introduce non-discrimination provisions and other fundamental rights protections to thwart double-edged attempts to undermine emancipation undertaken by both the market and the nation state – thus justifying the counterbalancing role of the EU.
Tellingly, the introduction to the Act directly refers to the political ambitions of the EU: the ultimate aim of the actions undertaken by the EU legislator is to amplify the Brussels effect, that is to say to ‘protect the Union's digital sovereignty and leverage its tools and regulatory powers to shape global rules and standards’ (Commission, 2021b: 6).
Enhancing Market Integration by Limiting the State
The key assumption of the AI Act is that the widespread adoption of digital technologies such as AI is socially and economically beneficial for companies and consumers. Companies gain the ability to optimize their production patterns and better predict market trends, while consumers are satisfied with personalized products. But digital markets need specific conditions to thrive. The efforts to forcefully embed them within the national context by way of particularistic regulations may destroy their comparative advantage built through network effects and scale. This defining trait of digital markets is used as ample justification for extending the EU's founding principle of market integration to the new digital realm. Only the European-wide market is deemed sufficiently large by the Act to support the business model of tech companies. Therefore, nation states must be actively prevented ‘from imposing restrictions on the development, marketing and use of AI systems’ (Commission, 2021b: recital 1). To this end, the EU needs to guarantee that the Member States ‘will see no reason to take unilateral action that could fragment the single market’ (Commission, 2021b: 10).
The corollary and fortifying argument is that the states are simply unable to control AI systems and technologies on their own, due to ‘the nature of AI, which often relies on large and varied datasets and which may be embedded in any product or service circulating freely within the internal market’ (Commission, 2021b: 6). In Polanyian terms, AI technologies are thus presented as intrinsically disembedded, as they are characterized by ‘opacity, complexity, bias, a certain degree of unpredictability and partially autonomous behaviour’ (Commission, 2021b: 2). They may easily breach social norms and legal rules and endanger the safety and fundamental rights of citizens (see Commission, 2021b: 6). To both enjoy the benefits and control the many risks and threats brought about by the adoption of AI, it is necessary to implement EU-wide rules – presented as the only ones able to control AI systems.
Against this background, the EU takes on the role of a market-enhancing power, using common regulations to ensure that, under a digital single market, virtual borders will cease to exist to everybody's benefit. In the proposal of the AI Act, the EU is presented as the only guardian of ‘legal certainty’ (Commission, 2021b: recitals 2, 6, 57, 72), content with introducing the ‘minimum necessary requirements’ for the markets (Commission, 2021b: 3). However, it also takes up the role of the singular protector ‘of overriding reasons of public interest and of rights of persons throughout the internal market’ (Commission, 2021b: recital 2). This shows that the AI Act is supposed to strike a balance between enhancing market integration and ensuring social protection: ‘uniform obligations for operators’ underpin ‘uniform protection’ for people (Commission, 2021b: recital 2). Thus, the political actor, in line with the traditional Polanyian approach, simultaneously constructs beneficial conditions for the market and safeguards a wide catalogue of rights.
Shaping the Market by Defining AI and Setting Standards for its Uses
In the digital economy, definitional murkiness underpins disembeddedness. Technology companies, disrupting traditional markets and extending commodification through datafication, routinely capitalize on definitional loopholes in regulations concerning privacy in the context of personal data. This is why the discussions on controlling the abuse of technologies tend to focus on definitions. This trait is also present in the draft of the AI Act. The EU legislators recognize that ‘a single future-proof definition of AI’ (Commission, 2021b: 3) is instrumental to embedding the operations of digital actors as it will consolidate a legal framework that is both ‘innovation-friendly’ and ‘immune to disruption’ (Commission, 2021b: recital 72). It has been assumed that legal certainty will support the markets, and at the same time it will protect the rights of citizens by identifying risks and threats connected with new technologies.
If adopted, the definition would be one of the main elements illustrating the market-shaping power of EU law, as it would clearly indicate the systems subject to the proposed requirements. Yet, proposing such a definition proves a formidable task, as there is no academic or expert consensus on what exactly AI is. The definition put forward in the AI Act reflects these difficulties: an AI system means software that is developed with one or more of the techniques and approaches predefined in an annex. Such software must be able to generate outputs (e.g., predictions, recommendations) for a given set of human-defined objectives. The generated outputs must be able to influence the environments they interact with (see Commission, 2021b: Art 3(1)). The broad catalogue of the proposed techniques and approaches may result in almost every piece of software being considered AI, which hardly allows for holding the globalized digital providers accountable and addressing the specific challenges linked to the use of AI.
Thus, in relation to shaping the requirements towards AI systems, the EU legislator has chosen a functional approach. The AI Act focuses on the operational effects of AI-based systems, differentiating between three kinds of technologies: first, those that should be expressly prohibited as particularly dangerous for EU citizens; second, those that should be closely monitored as engendering a ‘high risk’; and third, those that should observe certain transparency obligations. These requirements are the most significant attempts at shaping the digital market in order to ensure standards of social rights protection (e.g., prohibiting the use in the public sector of systems resembling the Social Credit System), as well as emancipation (e.g., the requirements that high-risk uses of AI systems comply with fundamental rights protection). In this sub-section, we provide a detailed overview of these requirements.
There are four uses of AI that the Act prohibits. Firstly, the Act forbids the use of any AI system that ‘deploys subliminal techniques beyond a person's consciousness in order to materially distort a person's behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm’ (Commission, 2021b: Art 5(1)(a)). Here, the EU legislator sets out to defend emancipation, understood as freedom to act on the basis of an autonomously formed worldview. Still, the terminology proposed in these paragraphs leaves much space for interpretation: what should qualify as ‘subliminal techniques’? How should the likelihood of causing harm be assessed? No wonder such wording has been heavily criticized, e.g., by human-rights organizations (European Digital Rights (EDRi), 2021b: 8) and by independent expert bodies. For example, in their joint opinion on the AI Act, the European Data Protection Supervisor and European Data Protection Board noted that ‘the criteria referred to under Article 5 to “qualify” the AI systems as prohibited limit the scope of the prohibition to such an extent that it could turn out to be meaningless in practice’ (European Data Protection Board and European Data Protection Supervisor, 2021: 10). Secondly, the Act also prohibits such systems if they, additionally, ‘exploit any of the vulnerabilities of a specific group of persons’ (due to, e.g., their age or mental health – Commission, 2021b: Art 5(1)(b)). This focus on protecting vulnerable groups is in line with the broad approach that the EU has adopted towards discrimination. However, the term ‘vulnerabilities’ covers a smaller number of protected characteristics than the prohibition of discrimination as defined in EU law. Moreover, while the provision aims to prevent discriminatory treatment, its wording likewise remains unclear, which will limit its potential to prevent such treatment.
The EU legislators do not turn a blind eye to the potential abuses of fundamental rights that could be perpetrated by the public sector as, thirdly, the Act also prohibits certain AI systems from being implemented for the purposes of law enforcement, if they ‘are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person's liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter’ (Commission, 2021b: Art 5(1)(d)). This shows that the proposal attempts to limit the negative impact of AI systems which may have serious consequences for the protection of fundamental rights (e.g., discriminatory profiling). However, here again, numerous NGOs have noted that the provision foresees several exceptions, which undermines its prohibitive character. Fourthly, the prohibited uses include systems that resemble the Chinese Social Credit System in aiming at ‘evaluation or classification of the trustworthiness of natural persons […] based on their social behaviour or known or predicted personal or personality characteristics’. Such algorithmic systems can impose unfavourable treatment, barring individuals from participation in economic, social, and political life, thus directly circumscribing emancipation.
The scope of the proposed solutions is significant. By including AI systems implemented in both the private and the public sector, the EU is positioning itself as a political entity that protects its citizens from both the ‘satanic mill’ (Polanyi, 2001: 35) of marketization and the authoritarian practices that endanger citizens’ freedoms through the use of AI systems by the state. However, the AI Act sets such a high threshold for AI systems to be prohibited that it is almost impossible to identify any existing system that would be forbidden under its provisions. As noted by Michael Veale and Frederik Zuiderveen Borgesius: ‘the prohibitions concerning manipulative AI systems may have little practical impact’ (Veale and Zuiderveen Borgesius, 2021: 100).
Thus, it is the category of high-risk AI systems that in practice might become more relevant for ensuring social protection and emancipation supported by the market-shaping approach. AI systems applied in areas that require ‘a consistent and high level of protection of public interests as regards health, safety and fundamental rights’ (Commission, 2021b: recital 13) could be considered as high-risk. The areas proposed by the Commission in the AI Act include, e.g., education and vocational training, employment, workers management, and access to self-employment, or access to and enjoyment of essential private services and public services and benefits (Commission, 2021b: Annex III). For example, the AI Act directly addresses the issue of the datafication of work and sets out to protect the rights of workers whose recruitment, job performance and career trajectory are becoming increasingly controlled through data surveillance and determined through automation. The enumerated specific uses of high-risk AI in employment include systems intended to be used ‘for recruitment or selection of natural persons’ and ‘for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships’ (Commission, 2021b). The EU expressly assumes the role of protector of the social rights of workers, threatened by the application of AI systems.
Moreover, the Commission retains the right to widen the list of high-risk areas on the basis of the threat they pose to fundamental rights. 8 One of the criteria which would be taken into account in such a situation is ‘the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular, due to an imbalance of power, knowledge, economic or social circumstances, or age’ (Commission, 2021b: Art 7(2)(f)). Interestingly, the wording of this provision exceeds the grounds for protection that are included in the Charter. Emphasizing such factors as ‘imbalance of power’ or ‘knowledge’ is an innovative – though blurry and lacking normative content – approach to the regulation of technologies. In recognition of the challenges linked to the implementation of AI-based tools, the EU legislator has assumed the role of an innovative, normative leader in terms of supporting social protection in the areas impacted by the implementation of AI systems.
The Act directly lays out the responsibilities of AI providers to, e.g., control data quality and ensure the accuracy and robustness of these systems (respectively: Commission, 2021b: Art 10 and Art 15). These requirements would force AI system providers to scrutinize the solutions which they develop in more detail, which is an example of a market-shaping strategy. However, the substantive norms concerning high-risk AI systems first and foremost show the interplay between the protection of social rights and emancipation, which is specific to the EU's attempts to create a strategy regulating new technologies.
Emphasizing Emancipation
The previous sections focused on tracing the first and second threads of the European integration project (enhancing the market and shaping it). Interestingly, the third thread – prohibiting discrimination – is directly or indirectly present in most recitals of the Act. For example, it informs the bulk of requirements adopted in the Act in relation to high-risk AI.
The need to combat the challenges that AI raises in the area of non-discrimination is underscored in the explanatory memorandum accompanying the proposed act, which complements existing regulation by adding ‘specific requirements that aim to minimize the risk of algorithmic discrimination’ (Commission, 2021b: 4). The focus on strengthening non-discrimination is particularly visible in Art 10, which concerns data and data governance. It obliges the providers of high-risk AI systems to examine the process of training, validating, and testing data sets for possible biases in data (Commission, 2021b: Art 10(2)(f)) and data gaps or shortcomings (Commission, 2021b: Art 10(2)(g)). The providers of high-risk AI systems should take into account appropriate statistical properties as regards the persons, or groups of persons, on which the high-risk AI system is intended to be used (Commission, 2021b: Art 10(3)), as well as characteristics or elements that are particular to the specific geographical, behavioural or functional setting (Commission, 2021b: Art 10(4)). 9 Any AI system should provide information on its performance as regards the persons, or groups of persons, on which the system is intended to be used (Commission, 2021b: Art 13(3)(b)(iv)). When perceived from the perspective of Fraser's framework, such an approach can be – with a caveat discussed in more detail below – interpreted as conducive to emancipation. Groups subjected to discriminatory treatment are invested with a certain level of protection against such discrimination being replicated or enhanced by AI systems.
The EU legislator aims at forcing AI providers to develop solutions with the non-discrimination standard in mind. Curiously, the provisions routinely use the notion of bias, associated with technological discourse, instead of a term with a more established legal definition in EU law (discrimination). The term ‘bias’ is not defined in the proposal, and there is no explanation concerning what kind of criteria should be used when assessing whether a particular AI system is biased. This may seem like a peculiar choice in light of the above-mentioned focus on defining and categorizing AI and AI systems, but it might be explained by the role the AI Act plays in the European political strategy. Alongside establishing legal standards concerning AI systems, it also serves as a tool to consolidate the EU's role as a normative leader in the area of AI regulation. Technological vocabulary serves as a tool for forming a techno-legal discourse that will shape the EU's position in regulating new technologies. The EU is ostentatiously breaking new ground in the area of the protection of citizens’ rights and their non-discrimination in the digital economy by setting standards concerning high-risk AI systems and data quality issues, including the issue of bias in data. To this end, it is building a regulatory framework that will combine elements typical of social protection and elements typical of the emancipation axis of the triple movement.
Discussion: A Fictitious Triple Movement?
Analysing legal norms, particularly when inscribed in a draft EU regulation, can be exacting, and we hope that the reader bore with us up to this moment. The aim of the analysis was to follow the traces of the three strands of the European integration project in the AI Act proposal in light of Fraser's notion of the triple movement. Additionally, we were interested in the manner in which the EU positions itself as the only actor able to adequately regulate the digital market. The legitimizing argument may be reconstructed thus: in the digital economy markets are increasingly disembedded and at the same time have much more leverage for commodification, stripping individuals of their social rights as well as the right to participate in social life without discrimination. Still, they produce beneficial goods and services, and they should not be harmed by the particularistic curbing policies of the individual states. Additionally – here the recitals of the AI Act seemingly go hand in hand with Fraser's argument – the state can also use digital technologies to subjugate its citizens. Only the EU has the ability to simultaneously tame and support the market, as well as check the putative authoritarian ambitions of the state. Most importantly, it can safeguard the emancipation of European citizens.
Our analysis of the AI Act proposal shows that the EU has found renewed validation for its political legitimacy as well as anchorage for its political identity in balancing the triple movement in the context of the digital economy. While market-enhancing efforts are inscribed in the idea of economic integration, it is the market-shaping aspect that is the crucial element in the EU's attempts to become more than just an economic organization. The EU is trying to play the role of an enabler for the free market, while also presenting itself as a defender of fundamental values, consistently using references to fundamental rights to emphasize its unique political stance: ‘A solid European regulatory framework for trustworthy AI will also ensure a level playing field and protect all people, while strengthening Europe's competitiveness and industrial basis in AI’ (Commission, 2021b: 6).
However, while the new regulatory framework concerning AI covers certain areas directly linked to social protection understood in the Polanyian sense (e.g., education, employment, public services), its central point of interest seems to be issues linked to the prohibition of discrimination. Thus, the EU has headed in the direction of the third element of Fraser's triple movement: emancipation. The EU as a political institution recognizes that emancipation may be endangered not only by the market but also by the state and therefore sets out to control both sides of the Polanyian equation. In the EU's approach, emancipation rights have become pivotal values in relations not only with the market but also with the state. Thus, the Act reads as if its authors took into consideration the detailed analysis of Shoshana Zuboff (Zuboff, 2019), suggesting that both market surveillance (the commodification of personal data) and the state's datafied surveillance are to be controlled.
Having said that, there is one substantial problem with this interpretation. Under the proposed AI Act the enforcement of anti-discriminatory measures in the digital market lies mainly with the Commission, which exposes the fictitious character of the EU's version of the triple movement. The proposal lacks regulatory measures that would give individuals subjected to decisions made with AI systems new rights or means to enforce their rights in the context of AI deployment. The only provision of this type is Art 52, which refers to transparency obligations for certain AI systems. It obliges not only providers to inform natural persons that they are interacting with an AI system, but also users of AI systems to inform natural persons if emotion recognition or biometric categorization systems are being operated, as well as when natural persons are exposed to content which is a deep fake (Commission, 2021b: Art 52). The citizens’ lack of knowledge of AI is addressed indirectly through the obligatory registration of such systems in a public database (Commission, 2021b: Art 51 and 60) and the enumeration of the types of information that should be provided about such systems (Commission, 2021b: Annex VIII). However, these solutions are not accompanied by provisions that would actually empower the citizens to act in order to enforce them. The Commission's proposal also does not include provisions that would focus on providing individuals or social partners with accessible solutions to scrutinize the impact that AI implementation may have on, e.g., determining the treatment to which they are subjected during recruitment procedures or when accessing social services.
Both social protection and emancipation are approached as issues that should be resolved between the EU's institutions, Member States’ institutions, and tech companies. Firstly, the Act's provisions mostly focus on providing national authorities with the competences to oversee AI's implementation (Commission, 2021b: Art 23). Secondly, when they foresee solutions regarding access to information about high-risk AI systems, it is either ensured between the providers, importers, and distributors and national institutions, or between the providers, importers, and distributors and the users of such systems (Commission, 2021b: Art 13). The groups that are left almost entirely outside the scope of the proposed framework are the general public and civil society bodies such as unions or non-governmental organizations, which is highly problematic from the perspective of social protection, as presented by Polanyi. As mentioned above, in Polanyi's analysis, it is society that pushes the state to adopt measures to bolster social protection. Similarly, the bodies whose importance is highlighted in Fraser's work, such as civil society organizations, are not granted any rights that would enable them to enforce the provisions meant to serve emancipation. The EU fails to facilitate the involvement of citizens by almost entirely omitting organizations representing the society, as well as ordinary citizens, from the scope of the proposed solutions.
In contrast to this approach, we argue that the systemic examination of AI systems by specialized agencies should be perceived as only one of many ways in which social protection and emancipation could be strengthened in the context of the digital environment. The Commission's proposal exclusively follows a path that provides Member States and EU institutions with the possibilities to oversee AI systems, e.g., by defining those AI systems which should be considered high-risk or by examining the bias which may be present in the implemented solutions. Coming back to Fraser's framework: this type of regulation is ‘organized in a bureaucratic-étatist manner, which disempowers its beneficiaries, whom it treats less as active citizens than as passive consumers’ (Fraser, 2011: 149). Thus, even though in terms of the substantive aspects of the proposed regulation it may support emancipation (e.g., by countering biases in data), in terms of the procedural aspects it does not enable civil society organizations and citizens to enforce these solutions.
Moreover, it solidifies the EU's – or to be more specific, the Commission's – position as the main actor responsible for setting standards in this area. This can be illustrated by the material scope of the regulation, which – if adopted in the current form proposed by the Commission – would ‘rule out the possibility that the Draft AI Act is a general “minimum harmonization” instrument, setting a horizontal regulatory floor’ (Veale and Zuiderveen Borgesius, 2021: 109). Thus, Member States would not be allowed to adopt higher standards of protection regarding the matters addressed by the regulation. This shows that the priority for the EU is to strengthen its position as a rule-maker in the area of digital technologies. The EU equips itself (and to a certain extent Member States) with regulatory solutions that will enable the enforcement of rules concerning social protection and emancipation in line with the EU's interests – as understood by the Commission, but not necessarily by EU citizens.
It is important to stress that civil society organizations specializing in the area of digital rights call for including more emancipatory solutions in this regulation. In a statement issued by EDRi and 119 civil society organizations, the following proposal is put forward: ‘Include a right to an effective remedy for those whose rights under the Regulation have been infringed as a result of the putting into service of an AI system. This remedy should be accessible for both individuals and collectives’ (EDRi, 2021a: 5). It remains to be seen whether, in the final version of the act, the voices of civil society will be taken into account and the area of AI regulation will become, to paraphrase Fraser, a testing ground through which the norms infusing (emancipatory) social protection would be forced to pass (Fraser, 2011: 148). Nevertheless, one thing is certain: yet another set of top-down political solutions, even if consonant with the blueprint of European integration, will not countermand the threat to emancipation brought about by ubiquitous commodification.
Conclusions
Our analysis shows how the EU is claiming for itself the role of an actor with the right to set pro-emancipatory and pro-social-protection rules for digital markets, while also ensuring its leading position in terms of defining what emancipation and social protection mean and how they shall be enforced in the context of the digital environment.
The analysis of the proposal demonstrates that the EU seems to be using the opportunity of regulating AI mostly to solidify its position as a rule-maker in the area of new technologies. The lack of definitions for certain terms that are crucial for tackling the discriminatory mechanisms which may characterize AI uses (‘imbalance of power’, ‘bias’) suggests that it is more important to use certain buzzwords than to tackle the issues that they describe. The high threshold for AI systems to be prohibited indicates that the priority is to create an appearance of market-shaping rather than to shift the balance between market-enhancing and market-shaping in the direction of emancipation. The lack of solutions to provide EU citizens with the power to directly enforce their rights (linked to the uses of AI systems) shows that the EU's triple movement should be deemed fictitious: the one who will become more ‘emancipated’ is essentially the EU.
The EU is trying to forge for itself a new normative role when it comes to shaping the market via regulations addressing the digital economy. Ultimately, the way the EU's regulatory proposals balance marketization, social protection, and emancipation is a matter of interest not only for the EU itself, as the norms adopted by the EU often become a template that is followed by other states and organizations. When analysing the triple movement in the context of digitalization – and considering the role that the law plays in this process – it is necessary to focus on the EU as an important actor that attempts to develop regulatory standards in this area. The existing regulations are not able to tackle qualitatively ambiguous and quantitatively overwhelming issues, such as ensuring that datasets adequately represent the properties of certain groups, or countering algorithmic discrimination. New laws are needed. The fact that AI is radically disembedded justifies the political engagement of the EU as a rule-maker tackling challenges arising from untamed marketization.
Representing a market consisting of 27 Member States puts the EU in a better position to address the challenges created by global Big Tech companies, thus creating new ground for the Brussels effect. However, with great market power comes great responsibility. The inclusion of genuine emancipatory measures equipping citizens and social organizations with instruments allowing them to protect and claim their rights, which are being threatened by the development of AI systems, is a necessary step for the EU's triple movement to cease being fictitious.
Footnotes
Acknowledgements
The authors would like to thank the participants in the seminar at DELab UW who provided valuable feedback.
Notes on the contributors
Renata Włoch is an Associate Professor at the Faculty of Sociology at the University of Warsaw and Co-Director of the Digital Economy Lab research center at the University of Warsaw.
Joanna Mazur is an Assistant Professor at the Faculty of Management at the University of Warsaw.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Joanna Mazur is supported by the Foundation for Polish Science (FNP).
