Abstract
The integration of artificial intelligence (AI) systems in the workplace is reshaping labour processes by automating managerial functions such as task assignment, supervision and evaluation, deepening power imbalances and creating new worker vulnerabilities. Platforms are central to this debate, remaining key sites where algorithms structure and control work. In response, the EU introduced its first comprehensive AI regulatory framework in 2024: the AI Act and the Platform Work Directive (PWD). These initiatives are expected to play a decisive role in shaping platform governance, while the AI Act also extends more broadly to algorithmically managed workplaces. However, the AI Act allows industry self-assessment, limits oversight and provides minimal worker protection. By contrast, the PWD addresses algorithmic risks more robustly, including correct worker classification and greater transparency, though its scope is limited to platform workers. This creates a fragmented framework: broad but weak protections under the AI Act versus stronger but narrower rights under the PWD. This article critically assesses the EU's regulatory framework, arguing that dominant approaches reduce worker participation to symbolic gestures, based on an assumed harmony between labour and capital. As an alternative, it advances a worker-centred perspective grounded in the recognition of conflict and the view of technology as a contested space. This approach offers an alternative understanding of platform governance while simultaneously aiming to strengthen workers and trade unions across sectors by broadening the scope of transparency, information rights, veto powers and collective bargaining, keeping platform governance a key part of the debate.
Introduction
The impact of artificial intelligence (AI) systems in the workplace is now widely recognised, extending beyond platform labour into sectors such as supply chain logistics and retail. Algorithmic management is reshaping labour processes by partially automating managerial functions. Algorithms increasingly handle task assignment, supervision, evaluation and even pay, adding flexibility to previously stable aspects of work. This shift exacerbates existing power and information asymmetries while creating new risks through heightened flexibility and precarity. Conflicts on digital platforms underscore the central role of algorithms in capital–labour relations.
These risks have been acknowledged by scholars, civil society organisations and institutional actors at national and transnational levels, prompting intense debate over regulatory responses. This debate is reflected in two major EU instruments adopted in 2024: the AI Act and the Platform Work Directive (PWD). Examining the new EU framework is crucial, as the AI Act represents the first comprehensive, legally binding attempt to regulate AI through a risk-based approach. It sets rules for the design and deployment of AI systems across sectors, including public services and the workplace. The PWD, though not directly applicable, is equally important. It requires member states to address platform labour risks and encourages a rethinking of workers’ rights in relation to algorithmic management. At the same time, the EU seeks to position itself as a global regulatory leader, presenting its framework as a ‘gold standard’ with international reach.
This article employs a threefold methodological approach integrating doctrinal, comparative, and normative legal analysis. At a doctrinal level, it examines the AI Act and the PWD, interpreting specific provisions and assessing their legal implications within the wider context of EU labour regulation. For the AI Act, the analysis is deliberately limited to provisions that explicitly or implicitly address AI in the workplace, rather than the regulation as a whole. This is complemented by a comparative analysis highlighting divergences between the two instruments: the AI Act adopts a sector-wide scope but offers only minimal safeguards, while the PWD provides stronger rights but applies solely to platform workers. This juxtaposition exposes the risk of regulatory fragmentation in governing algorithmic management. Finally, the article develops a normative argument grounded in a worker-centred perspective. Drawing on labour process theory and critical labour law, it contends that workers and trade unions should be recognised as active regulatory actors. This stance underpins the call for enhanced information rights, veto powers, and collective bargaining as essential to a more democratic and equitable regulatory framework.
This article is structured as follows: the first section provides a brief overview of the risks linked to algorithmic management and outlines the need for regulatory measures and deeper engagement with workers and their representatives. The second section examines the AI Act, highlighting its contradictions and limitations. The third section critically analyses the PWD, assessing its provisions and potential implications. The fourth section presents a worker-centred alternative to AI regulation, emphasising its break from tokenistic participation and exploring how trade unions might leverage elements of the existing framework. Finally, the fifth section considers the potential for ongoing collaboration between academic research and workers’ movements, while emphasising how workers’ struggles can point to alternative approaches to AI governance.
Risks of algorithmic management and the need for regulation
Algorithms are already embedded in workplaces, supervising workers, assigning tasks, setting work rates and pay, and ranking performance – often in visible and invisible ways. In this sense, the much-discussed ‘future of work’ (Adams-Prassl, 2019) is already a present reality. Algorithms are driving a major restructuring of work organisation, foregrounding ‘new management models’ rooted in ‘hyper-connectedness’ where data plays a central role in decision making (Ponce Del Castillo, 2018: 4). ‘Algorithmic management’ refers to a process that seeks to ‘partially automate labour process supervision and coordination’ (Cant, 2020: 13). Examining algorithmic management and the qualitative transformation of the labour process challenges dominant techno-determinist narratives around AI, which focus narrowly on quantitative workforce shifts and the imperative of upskilling (De Stefano, 2019).
Algorithmic management is prevalent among platforms (Cant, 2020; Woodcock, 2021), where managerial labour is limited and task allocation, rankings, payment, contract termination and other aspects of work organisation are directed through algorithms. In minimising labour costs, platforms evade the ‘burden’ of complying with labour law (Srnicek, 2017) by misclassifying workers as ‘independent contractors’ or ‘partners’, creating tensions that result in strikes and mobilisation (Tassinari and Maccarrone, 2017; Woodcock, 2021). Because human managers are not directly involved, platforms use algorithmic management to support the claim that there is no direct supervision (and thus no subordination). This links algorithms to questions of proper legal classification.
This structural reliance on algorithmic management not only facilitates regulatory evasion but also transforms work experience, as platforms embed opaque data-driven systems that shape workers’ daily realities in ways difficult to contest or comprehend. These systems operate as ‘black boxes’, concealing the rationale behind rankings, task allocations or dismissals from workers and their representatives (Gegenhuber et al., 2021; Rosenblat and Stark, 2016). This opacity is underpinned by structural information asymmetry that frustrates appeals and deepens conflict. In platform settings, pay is also obscured – shaped by fluctuating demand, customer profiles and other hidden factors – leaving workers unable to predict their income (Cant, 2020; Wood et al., 2019). As Jarrett (2022: 53) notes, ‘in this opacity, actual incomes become impossible to predict’. This reinforces precarity and worker vulnerability. Opacity is not merely technical but political: it mystifies the labour process and alienates workers. As Aloisi and Gramano (2019: 112) comment, the ‘black box's’ purpose is ‘to keep most workers in the darkness as regards strategies, which, although partially autonomous, answer to specific organisational needs and reflect managerial choices’.
More importantly, algorithmic management is expanding beyond platform labour, transforming labour processes across sectors (Gaudio, 2024). A 2025 OECD survey of over 6000 firms shows widespread and growing adoption – especially in the United States and parts of Europe – alongside managerial concerns over accountability, transparency and worker wellbeing (Milanez et al., 2025). Similarly, the European Commission (2025) notes that, although detailed data on affected workers in the EU-27 is lacking, up to a quarter of companies were already using such systems by 2023, mainly for surveillance and evaluation, with annual deployment growth projected at 3% to 6% (European Commission, 2025: 6).
At the heart of current debates lies algorithmic surveillance, which intensifies managerial control and raises urgent social and ethical concerns. Surveillance is a core element of algorithmic management (Aloisi and Gramano, 2019), both a prerequisite for algorithmic operation and a consequence of its use. Once logged in, workers are continuously monitored for location, performance, and activity, providing data for productivity estimation and task allocation (De Stefano and Taes, 2021). Through wearable devices and computer-based algorithms, blue- and white-collar workers face constant monitoring (Aloisi and Gramano, 2019; De Stefano, 2019), automated evaluations, and digital barriers to union activity (De Stefano, 2020: 432–434). Hertel-Fernandez (2024) reports that over two-thirds of US workplaces now use AI-powered surveillance, linked to heightened anxiety, unsafe work speeds and increased injuries.
The influence of algorithmic surveillance in logistics is particularly stark. At Amazon Fulfillment Centers, workers’ movements are continuously tracked, with algorithms assessing errors and enforcing productivity targets. These targets become increasingly unattainable, as algorithms – driven by profit – accelerate production to match demand shifts. This intensification generates constant stress and anxiety. As Kaoosji (2020: 195) notes, ‘there is no way to safely make these rates’, and failure results in warnings – ‘after three write-ups, workers face termination’. Unsurprisingly, ‘existing automation, algorithmic control and scale of facilities creates highly stressful and insecure work environments’ (Struna and Reese, 2020: 92). A similar dynamic affects platform workers, who engage in ‘self-imposed work intensification’ (Cant, 2020: 55), often prompted by algorithmic surge pricing – even in extreme weather.
Finally, while the role of human managers appears weakened through the ‘augmenting and eventually replacing [of] human day-to-day control over the workplace’ (Adams-Prassl, 2019: 124), managerial prerogative is ultimately reinforced, as decisions are ‘naturalised’ (Massimo, 2020: 130) and take on ‘the appearance of an objective and unquestionable procedure’ (Klengel and Wenckebach, 2021: 159).
From a legal and regulatory perspective, algorithmic management has been recognised as a major challenge for labour law and workers’ rights (Adams-Prassl, 2019; De Stefano and Taes, 2021; Ponce Del Castillo, 2018). Scholars and institutions (Cazes, 2023; OECD, 2024) warn that, unless addressed, labour law risks obsolescence, unable to guarantee basic protections. Klengel and Wenckebach (2021: 167) stress that workers and their representatives must access datasets used to train AI systems, and those for task allocation, evaluation, promotion or redundancy decisions. Similarly, Todolí-Signes (2021) advocates dual empowerment: a ‘right to participate in the assessment of an algorithm’ and a ‘right to ask for an OHS-related modification or improvement’, which companies must implement or justify refusal.
A strengthened system of collective bargaining – extended to cover algorithmic management (Cazes, 2023) – could establish agreements ensuring redress mechanisms for workers (Gaudio, 2024), greater transparency in decision making (De Stefano and Taes, 2021), stronger health and safety protections (Todolí-Signes, 2021), data rights for workers and representatives (De Stefano and Taes, 2021; Ponce Del Castillo, 2018), and enforceable human-in-the-loop safeguards (De Stefano, 2019; OECD, 2024).
These concerns highlight the need for a new EU regulatory framework and the expansion of existing instruments – such as the General Data Protection Regulation (GDPR) – to address the risks of algorithmic management (Giorgi, 2024; Guglielmetti, 2024). Scholars stress the importance of linking collective agreements with legal frameworks to ensure regulatory measures empower workers to mitigate these risks (Cristofolini, 2024). In some cases, platform workers have successfully pursued legal action, securing rights such as sick leave or access to personal data revealing wrongful termination (Hertel-Fernandez, 2024; Gaudio, 2024). EU regulations like the GDPR and anti-discrimination Directives have proven valuable, prompting scholars to advocate a dual strategy of litigation and collective bargaining to protect and extend workers’ rights (Gaudio, 2024).
In the following sections (second and third) the AI Act (Regulation 2024/1689) and the PWD (Directive 2024/2831) will be examined, focusing on their potential impact on algorithmic management and their respective limitations.
AI Act: Background, key points and limitations
Background of the AI Act
The process leading to the AI Act began with the European Commission's White Paper on Artificial Intelligence, published in 2020, followed by European Parliament resolutions advocating a legislative approach (Madiega, 2024). In April 2021, the Commission introduced the first draft, aiming to establish a harmonised regulatory framework for AI across the EU. After deliberations and trilogue negotiations throughout 2023, the final text was approved by the European Parliament in March 2024, with 523 votes in favour, 46 against and 49 abstentions.
The AI Act introduces a comprehensive regulatory framework for AI, following a risk-based approach that classifies AI systems by the risk they pose to fundamental rights – minimal, limited, high or unacceptable – and sets corresponding compliance obligations. The EU has presented the Act as ‘the world's first comprehensive AI law’, aiming to position itself as a global regulatory trend-setter. The present analysis focuses specifically on the Act's provisions concerning AI in the workplace.
The AI Act has been a focal point of debate, criticised both for overly restricting the AI industry and for providing minimal protection of fundamental rights (Madiega, 2024). Since the 2021 draft, the Act has been modified and, to some extent, improved regarding labour rights. For example, in 2021, no specific individual workers’ rights were identified, and the document set a minimum standard for AI products across the internal market without envisaging the prospect of ‘stricter protection by the member states’ (Klengel and Wenckebach, 2021: 166).
AI Act provisions for AI in the workplace
All articles and quotes in this and the next subsection refer to Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on AI (EU, 2024a).
Notably, the Act implicitly acknowledges that it serves as a minimum protection standard for AI in the workplace, as is evident in the following provision: ‘This Regulation does not preclude the Union or Member States from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or from encouraging or allowing the application of collective agreements which are more favourable to workers’.
Regarding AI classification in the workplace, the AI Act specifies that most relevant systems are categorised as high risk: ‘(a) AI systems intended to be used for the recruitment or selection of natural persons; (b) AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships’.
Regarding the process of risk assessment for high-risk AI systems, Article 27 requires deployers to conduct a fundamental rights impact assessment before putting such a system into use, alongside the risk management obligations attached to high-risk classification.
Regarding workers’ information rights, Article 26(7) provides that, before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the system.
Finally, the AI Act includes a right to information against potentially discriminatory or harmful AI decision making. Although referring to ‘affected persons’, Article 86(1) clearly applies to AI systems in the workplace as it states that, ‘any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III…shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken’.
Critique of the AI Act
The Act introduces key provisions – the right to prior information for workers and their representatives, and the classification of workplace AI systems as high risk. Article 26(7) requires employers to inform workers before deploying high-risk AI, recognising trade unions and adding procedural transparency absent from earlier drafts (Cristofolini, 2024: 92). Together with Article 86(1), granting workers a right to be informed when subject to automated decision making, these provisions modestly improve information asymmetries in AI governance.
A similar assessment applies to classifying workplace AI systems as ‘high-risk’ (Chagny and Blanc, 2024). This triggers employer obligations under Articles 26 and 27, including adopting a risk management system, conducting a fundamental rights impact assessment, providing human oversight, and maintaining data quality standards. Employers must also retain system logs for at least 6 months.
However, the AI Act fails to address structural imbalances in algorithmic management. Its concessions to workers are largely symbolic, offering limited practical means to mitigate AI-related risks and mainly allowing employers to appear compliant.
Under the AI Act, employers deploying AI in the workplace can comply through self-assessment, allowing them to appear compliant without external verification. Scholars had already stressed the need for ‘independent, third party audits’ (Guglielmetti, 2024: 136), but their concerns were overlooked (Cristofolini, 2024; De Stefano and Taes, 2021). Reliance on self-assessment is ‘questionable due to the potential overreliance on providers’ self-governance, which neglects the information asymmetry in AI development, the power imbalances in the employment relationship and the lack of technical expertise of workers’ (Cristofolini, 2024: 87). Although developers cannot themselves determine whether a system counts as ‘high risk’, self-assessment represents a major concession to AI providers, embodying a neoliberal ethos that prioritises the interests of the AI industry and of employers using AI.
The second critique of the AI Act is that it does not ‘cover compliance with the labour law acquis nor oversight by labour inspectorates’ (Guglielmetti, 2024: 137). The omission of labour inspectorates reflects the circumvention of third-party audits and the bypassing of worker-led institutions. Compliance with labour law is not among the criteria that AI systems must meet to be considered compliant. Discerning those criteria requires drawing on several articles.
In the introductory text, Recital 121 states that ‘standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation’. This is reflected in Article 17, which includes standardisation in the quality management plan for high-risk AI system providers, and in Article 40, which states that ‘High-risk AI systems or general-purpose AI models which are in conformity with harmonised standards… shall be presumed to be in conformity with the requirements set out in Section 2’.
However, these organisations develop only technical standards and do not assess compliance with EU labour law or workplace-specific risks. The European Standardisation System has been criticised as ‘dominated by industry’ with an ‘important deficit in terms of representation’ (Giorgi, 2024: 117). Trade unions often lack the resources, expertise and access to influence EU standard-setting. The European Commission (2014: 102) acknowledges that ‘industry remains the core element of the European standardisation system, being the main standards user and, at the same time, leading the contribution to technical standardisation work’, while unions ‘have no financial interest in participating in standardisation and their representation is therefore not ensured through cost-benefit logic’. As Cristofolini (2024: 88) argues, delegating regulatory authority to industry-linked private entities is a ‘highly controversial practice’ that limits union influence and allows AI providers to sidestep the ‘high-risk’ classification by meeting mainly technical criteria.
The AI Act misrepresents AI's role in the workplace by treating it as a product sold from provider to employer. Yet AI is deeply embedded in the labour process – monitoring workers, automating decisions, assigning tasks, setting productivity targets, recruiting, evaluating, determining redundancies and even setting wages in platform work. Its issues are thus not only technical but also organisational and political, rooted in the employment relationship itself.
The ‘right of information’ itself remains a thin form of engagement. The EU recognises the importance of multi-stakeholder approaches to AI governance (European Commission, 2014). However, by limiting engagement to the right to be informed before AI deployment, the Act leaves employees as passive participants. Even moderate calls for workers’ participation (Adams-Prassl, 2019; Cazes, 2023) appear unacknowledged in the AI Act.
Overall, the AI Act is best understood as a ‘baseline protection that will prevent the most dangerous systems – from a fundamental rights perspective – from entering the European market or only with safeguards in place’ (Hondrich and Mollen, 2024: 96). However, it provides inadequate protection against the multifaceted risks inherent in AI systems, while the underlying premises of the Act (AI as a product, a prominent role for the AI industry in setting standards) undermine effective governance.
In the next section, focus shifts to the PWD (EU, 2024b), its provisions and a critique of its potential impact.
Platform work directive: Background, key points and limitations
Background of the Platform Work Directive
In discussing the PWD, we refer to the Directive (EU) 2024/2831 of the European Parliament and of the Council of 23 October 2024 on improving working conditions in platform work (EU, 2024b).
Through the PWD, the European Parliament and Council recognised that unregulated platform labour can lead to surveillance, deepen power imbalances, obscure decision making, and threaten working conditions, health and safety, equal treatment, and privacy. This reflects ongoing mobilisation by platform workers across Europe – from Italy (Tassinari and Maccarrone, 2020) and France (Chagny and Blanc, 2024) to Spain (Rodríguez Fernández, 2024) and Greece (Minotakis and Faras, 2024; Tsardanidis, 2024) – and scholarly calls for robust regulation of digital platforms (Van Dijck et al., 2018) and platform labour (Aloisi and Gramano, 2019; De Stefano and Taes, 2021). In some cases, trade unions have secured collective agreements and national legislation mandating algorithmic transparency and worker consultation (Guaglianone, 2024; Rodríguez Fernández, 2024). Platform labour continues to expand, with the Council projecting 43 million EU platform workers by 2025 (Council of the EU, 2022).
In response to the impact of platforms on workers’ rights, the PWD addresses three key issues: (a) misclassification of workers as ‘independent contractors’ (Woodcock, 2021); (b) gaps in data protection, partly due to GDPR blind spots (Rainone and Aloisi, 2024) and (c) the opaque, ‘black box’ nature of algorithmic decision making, often producing unfair outcomes (Rosenblat and Stark, 2016). The following subsection outlines the Directive's key provisions.
Significant provisions of the Platform Work Directive
Firstly, to extend protection to misclassified workers, the Directive broadens its scope of application to all ‘persons performing platform work’, defined as ‘any individual performing platform work, irrespective of the nature of the contractual relationship or its designation by the parties involved’. Moreover, ‘The contractual relationship between a digital labour platform and a person performing platform work through that platform shall be legally presumed to be an employment relationship where facts indicating direction and control…are found’.
Regarding workers’ personal data and related decisions, the PWD contains concrete measures.
Based on the right of information outlined above, ‘persons performing platform work have the right to obtain any explanation from the digital labour platform for any decision taken or supported by an automated decision-making system’.
Furthermore, under
These provisions establish the right to an advanced understanding of platform operations and workforce management; access to such data has so far been unavailable to scholars and public institutions, limiting their capacity. Similarly, ‘in proceedings concerning the provisions of this Directive, national courts or competent authorities are able to order the digital labour platform to disclose any relevant evidence…’
Finally,
These provisions share a common rationale: uncovering the ‘black box’ of platform labour is essential to empowering workers, trade unions and regulators. Collective bargaining is key in ensuring workers’ meaningful inclusion in AI regulation. The next section examines the PWD's progress in addressing algorithmic opacity while highlighting its limitations.
Critique of the Platform Work Directive
The PWD has been positively received by labour advocates (Countouris and Adams-Prassl, 2024; Rainone and Aloisi, 2024), partly because it addresses key tensions in a largely unregulated sector. Others, such as Durri et al. (2025), see the Directive as extending the GDPR's protective framework. A central issue is the lack of transparency in platform operations, which fosters opacity, hinders regulation and often infringes on workers’ privacy through intrusive data collection. Surveillance is pervasive, serving as the connective tissue of algorithmic management (Aloisi and Gramano, 2019; Cant, 2020).
By strengthening the right to information, the PWD directly challenges the black-box nature of algorithms, the structural precarity of platform work, and information asymmetry between labour and capital (Gegenhuber et al., 2021; Rosenblat and Stark, 2016). To contest algorithmic decisions, the context of data collection must be clear, including data origin and processing purpose (Ponce Del Castillo, 2018). The Directive provides a broad interpretation of information and explanation rights, beyond the GDPR, requiring platforms to disclose detailed information to workers and their representatives and extending protection to algorithmic ‘parameters’. This makes the PWD a potentially powerful tool for workers, unions and labour advocates challenging algorithmic decisions (Rainone and Aloisi, 2024).
The limits of the right to information were recently clarified by the CJEU in Case C-203/22 Dun & Bradstreet Austria (2025). The Court held that if a controller believes the requested information includes protected third-party data or trade secrets, it must submit it to the competent authority or court, which will balance competing rights to determine access. This landmark ruling frames trade secret protection as subject to judicial balancing against workers’ access rights, advancing a broader interpretation of the right to information and embedding it in the EU regulatory landscape. Its long-term effects, particularly regarding the PWD, remain to be seen.
The Directive, notably through
However, the PWD introduces some fragmentation in the rights it grants. Certain provisions do not apply to self-employed individuals (Rainone and Aloisi, 2024). Specifically, the right to support from representatives in monitoring algorithmic management, participation in occupational health and safety risk assessments, and the right to information and consultation before introducing or substantially modifying ADM systems are reserved for those with employee status (Durri et al., 2025: 93).
Beyond regulating algorithmic management, the Directive addresses platform worker classification. Its requirement for member states to establish a presumption of employment has been called a ‘groundbreaking innovation’ (Rainone and Aloisi, 2024: 3). Misclassification is a global concern (Woodcock, 2021), with platforms acting as ‘unreliable narrators’ by presenting themselves as tech firms (Van Dijck et al., 2018) and mislabelling workers as ‘independent contractors’ via subcontracting (Srnicek, 2017; Tassinari and Maccarrone, 2017). By introducing the presumption and shifting the burden of proof, the Directive focuses on worker subordination to algorithms (De Stefano, 2020), assessed through the factual context of direction and control.
Nonetheless, this provision has drawn criticism compared to the original proposal. The initial draft outlined five criteria, with meeting any two resulting in employee classification – described by Durri et al. (2025: 86) as ‘the most interventionist and worker-friendly one’. In contrast, Rainone and Aloisi (2024: 9) warn that the final version's broad discretion may allow member states to adopt a ‘qualified’ presumption with high thresholds or weak enforcement. Given the Directive's limited enforceability in individual cases, Countouris and Adams-Prassl (2024) remain sceptical, noting it leaves courts and legislators ‘juggling and coordinating a number of different personal scopes’ for platform worker rights. The shift towards a traditional understanding of employment (centred on an often outdated interpretation of control and direction in national case law) may prove problematic in practice, reinforcing the view that the original five-criteria approach may have been more effective.
The high autonomy granted to member states is a double-edged sword, presenting both risks and opportunities for platform labour. As with the AI Act,
Concluding the second and third sections, regulatory fragmentation is a key concern. As Durri et al. (2025) note, although the PWD extends protections to the self-employed, some information rights remain exclusive to employees and their representatives. While the PWD marks progress in several areas (Rainone and Aloisi, 2024), its limited scope cannot fully address the AI Act's shortcomings. As algorithmic management spreads beyond platform work (Boewe and Schulten, 2020; European Commission, 2025), a coherent and comprehensive regulatory framework is urgent. The AI Act, though comprehensive across sectors, provides only baseline protections and significant leeway for industry self-regulation, leaving many workers with minimal rights and unions marginalised.
The following Table 1 summarises the AI Act and the PWD by comparing their provisions on five issues that are crucial for the governance of algorithms.
Comparison of the AI Act and the PWD.
AI: artificial intelligence; PWD: Platform Work Directive.
A worker-centred perspective on AI regulation
Towards a worker-centred perspective on technology in the workplace
The worker-centred perspective draws on two theoretical frameworks: labour process theory, situating new workplace technologies within ongoing labour–capital conflict, and critical labour law, focusing on legal impacts on workers’ rights.
Labour process theory stresses capital's need not only to purchase labour power but also to control and supervise workers to extract surplus value reliably (Braverman, 1998). Taylorism – and its digital forms today – is seen as capital's strategy to intensify labour, obscure decision making, and separate planning from execution, boosting productivity (Minotakis and Faras, 2024). This links new technologies to capital's response to worker resistance, using technology to suppress individual and collective struggles. Technology is thus not neutral: ‘While it remains true that capitalists undoubtedly seek those technologies that are more profitable, we now must admit that there are several considerations that enter into the calculation of profitability. One is technical efficiency…another is the cost of the various inputs and the value of the outputs; yet a third is the extent to which any technology provides managers with leverage in transforming purchased labor power into labor actually done’ (Edwards, 1979: 112).
On the other hand, critical labour law approaches view the employer–employee relationship as fundamentally imbalanced, with new technologies often exacerbating this. This perspective recognises the transformative potential of algorithms (Adams-Prassl, 2019; Ponce Del Castillo, 2018) and highlights threats from increased surveillance, precarity and opaque decision making (Aloisi and Gramano, 2019; De Stefano, 2020), focusing on AI's qualitative impact on working conditions and the risk of deepening worker subordination.
Scholars highlight the insufficiency of existing labour law to protect workers against technological change (De Stefano, 2019, 2020; Ponce Del Castillo, 2018). There is a pressing need for updated, comprehensive worker protections at national and international levels (Chagny and Blanc, 2024; Gaudio, 2024). Collective agreements are increasingly seen as crucial in regulating algorithmic management, complementing and reinforcing legal protections (De Stefano and Taes, 2021; Rodríguez Fernández, 2024).
These two strands must be integrated to bridge sociological and legal perspectives. Critical labour law often underestimates organised workers’ role in shaping legal frameworks, portraying them as passive rather than active agents influencing technology deployment. Conversely, labour process theory may downplay labour law's relatively autonomous role and the wider impact of worker struggles on legislation. A worker-centred approach reconceptualises algorithmic management regulation as partly resulting from organised labour struggles and encourages dialogue between academics and trade unionists.
Finally, these theoretical traditions have been criticised for portraying the workplace primarily as a site of antagonism, neglecting potential cooperation between workers and employers in adopting new technologies. A further critique – mainly aimed at scholars such as Mueller (2021) – is that they interpret technology solely through the lens of resistance, overlooking possible positive outcomes. While the former critique stems from a different theoretical paradigm, the latter deserves reflection, though space limits full engagement. Importantly, it should be stressed that in most cases it is employers who integrate new technologies and reshape the labour process. In this context, attention must focus first on resistance and the development of a legal framework that can constrain managerial prerogative.
Moving beyond tokenism
To clarify a worker-centred perspective on AI regulation, it is necessary to critically assess the dominant multi-stakeholder approach. Supranational institutions, including the OECD (OECD, 2024) and the EU (EU, 2024a; European Commission, 2014), acknowledge that AI affects diverse social groups, requiring a regulatory framework reflecting their perspectives. While this departs from narrowly industry-focused models, it has significant limitations. Two key points merit attention.
First, multi-stakeholder approaches often assume convergence of interests, suggesting tensions can be resolved through dialogue and social partnership (Cazes, 2023). This perspective dismisses genuine conflict rooted in opposing social positions. Labour process theory (Braverman, 1998) and critical automation studies (Mueller, 2021) view technology and labour as a contested space where structural antagonisms unfold. Integration of technology often sharpens disputes over work intensity, task allocation and control. These conflicts, evident in algorithmic management (Cant, 2020; Kaoosji, 2020; Woodcock, 2021), cannot simply be attributed to lack of dialogue.
Furthermore, the current multi-stakeholder approach often reduces engagement with affected social groups to one-off consultations before major decisions. Advisory forums may be established before legislative initiatives, or workers’ representatives consulted ahead of AI integration in the workplace (as in Article 26(7) of the AI Act). However, once proposals become ‘hard law’ and technologies are deployed, mainstream approaches typically exclude workers and their collective bodies from ongoing involvement. Kaoosji (2020: 202) argues for continuous participation of affected communities and workers to counter entrenched power asymmetries in both the workplace and policy-making. Tokenistic engagement is inadequate, especially given constantly evolving, opaque technologies that deepen structural imbalances in employment.
Building on the existing framework
This is not to suggest that a worker-centred perspective dismisses the existing regulatory framework. Rather, it seeks to build on its most progressive, worker-oriented elements. Crucially, this includes incorporating litigation into trade union strategies (Gaudio, 2024) to enforce provisions and monitor employer violations. Given that the AI industry and deployers – such as platforms – have long operated in a largely unregulated environment, compliance with the AI Act or the Platform Work Directive cannot be assumed.
Some persistent points of conflict concerning AI in the workplace can now be partly addressed through incorporation of the PWD into national legislation or via the AI Act. AI systems monitoring both blue- and white-collar workers – often without their knowledge (Aloisi and Gramano, 2019) – must now be registered, and workers and their representatives informed of their use. Additionally, in algorithmically managed workplaces, sudden dismissals without explanation or recourse have become common over the past decade (Rosenblat and Stark, 2016; Struna and Reese, 2020). Under the PWD, such decisions must now be accompanied by a written statement of reasons and remain subject to human review.
Furthermore, the regulatory framework extends beyond EU regulations, as national laws have adapted to algorithmic management in response to workers’ resistance and policy constraints. For instance, Aloisi and Gramano (2019: 117) note that the French Civil Code restricts surveillance, requires prior notice to workers and their representatives, and demands justification for any surveillance-based restrictions. Collective agreements also function as regulatory tools, providing binding frameworks for AI systems and guiding digital policy. The agreement between food delivery platforms and trade unions in Spain (Rodríguez Fernández, 2024: 224–225) grants workers’ representatives access to ‘relevant information used by the algorithm… for organising delivery activity’, specifies prohibited data categories, and establishes a joint committee to oversee algorithm-related information. Finally, the 2025 CJEU ruling in
By disseminating knowledge about these ‘best practices’, we can revisit the notion of ‘AI literacy’. In current literature, the term is often framed narrowly, emphasising the need for workers and unions to grasp AI systems’ basic functioning and their fundamental rights (Cumbre, 2024; Klengel and Wenckebach, 2021). Yet, workers already interact with AI daily and possess practical awareness of the risks. What is now required is a deeper understanding of successful union strategies, existing regulatory frameworks, and how these can be mobilised to improve working conditions. This is not about ‘teaching’ AI literacy top-down, but fostering cooperation, exchange and mutual learning between workers and critical scholars – often within spaces of struggle and resistance.
In the next subsection, the key points and potential novel contributions of a worker-centred perspective will be presented.
Key points of a worker-centred perspective on AI regulation
Building on the analysis of the AI Act and the PWD, and on the broader literature on workers’ resistance to algorithmic management, we can now outline the key elements of a worker-centred approach to AI regulation. This perspective draws on critical legal and sociological insights while placing organised labour struggles at the heart of regulatory developments.
A worker-centred perspective on AI regulation must prioritise the needs and collective struggles of workers. These points offer a tentative, necessarily limited understanding of workers’ demands as expressed through their resistance. Advancing this perspective requires grounding research in ongoing and emerging struggles, closely examining workers’ demands and their understanding of algorithmic management.
Conclusion
The regulatory initiatives discussed here are poised to play a central role in shaping platform governance across Europe in the coming years. At the same time, intensified struggles by platform workers have highlighted the limitations of existing frameworks and provided the foundation for an alternative, worker-centred approach to algorithmic management. This emerging model emphasises the active participation of worker-led institutions, the protection of broad rights to information and transparency throughout the entire life cycle of AI systems, and the integration of collective bargaining and veto powers as mechanisms to ensure that algorithmic tools are deployed in ways that are accountable, fair, and aligned with workers’ interests.
Furthermore, this worker-centred approach extends beyond platform labour. Algorithmic management is increasingly widespread across sectors and countries (European Commission, 2025; Milanez et al., 2025), introducing new risks to workers’ rights by intensifying surveillance, deepening information asymmetries, accelerating work pace and fragmenting tasks. This reality calls for renewed engagement with labour process theory (Braverman, 1998; Edwards, 1979) and a critical reassessment of persistent Taylorist practices in contemporary workplaces (Minotakis and Faras, 2024; Woodcock, 2022). Studying real-world struggles against algorithmic management (Cant, 2020; Mueller, 2021) grounds understanding and promotes collaboration with workers and unions to unravel the ‘black boxes’ and demystify the ‘magic’ of Big Data and AI (Elish and boyd, 2018).
The role of regulation in this context should not be underestimated. Regulation that favours workers can play a crucial role in exposing algorithmic rationales and supporting the contestation of their outcomes. Critical labour law scholars have long underscored the importance of comprehensive, binding legal frameworks that strengthen workers’ rights to information, provide effective avenues for redress, and promote collective agreements as mechanisms to govern algorithms in the workplace (Aloisi and Gramano, 2019; De Stefano, 2020; Klengel and Wenckebach, 2021). Moreover, regulation can form part of a broader strategy that combines legal action with collective mobilisation to defend and advance workers’ rights (Gaudio, 2024). In this light, the EU's regulatory framework – particularly the AI Act and the PWD – represents a significant milestone, providing the first binding instruments for algorithmically managed workplaces. Yet, the international impact of this framework may diverge from what the EU initially envisioned.
Although often presented as a major regulatory advancement, the AI Act has significant shortcomings. It primarily treats AI as a product, subject to standardisation and harmonisation, overlooking its profound workplace implications. Rights granted to AI providers, including self-assessment of deployment risks, mean there is no provision for independent audits or labour inspectorate involvement. By contrast, the PWD offers a more grounded approach, aiming to secure correct employment classification and directly addressing the opaque, ‘black box’ nature of algorithms governing platform labour.
Ironically, platform workers – typically granted fewer protections – may now benefit from stronger safeguards than non-platform workers who are equally subject to algorithmic management. However, given the considerable discretion EU Directives grant to member states in implementation, there is a real risk that these enhanced rights may not materialise fully and uniformly across the Union. As a result, the EU regulatory framework risks collapsing into a minimal baseline, defined by the AI Act's limited provisions – restricted chiefly to prior notification and narrowly framed redress mechanisms.
These challenges reflect the deeper flaws of mainstream AI regulation, which often treats workers and their representatives as symbolic participants, assuming a false harmony between labour and capital. As the PWD and AI Act come into force in the coming years, these limitations will become increasingly evident. Ongoing research will be essential to expose these shortcomings as well as to identify potential gains for workers.
A worker-centred perspective, grounded in labour struggles, labour process theory and critical labour law, offers a path for reform. It seeks to protect and expand the limited gains of the current EU framework while addressing power imbalances through workplace democratisation, veto rights over new technologies, and collective bargaining as a precondition for AI deployment. Central to this vision is a strengthened right to information, giving workers and unions access to the data shaping their work. Achieving this depends on sustained collaboration between researchers, unions and labour advocates.
To strengthen a worker-centred perspective, future research should extend beyond the EU-level focus of this study. Workers’ struggles often emerge at national or sectoral levels as resistance to employers’ or managers’ attempts to implement AI systems in the workplace. These struggles also carry the germinal potential for a different model of AI governance, in which workers and their representative institutions play a central role, underpinned by full access to relevant data. National legislation may further reflect this potential by introducing more worker-favourable provisions (Guaglianone, 2024). This study's worker-centred perspective anticipates and seeks to foster such developments. Examining national frameworks and sectoral struggles can help identify potential alternatives, reveal how workers’ collective action shapes policy, and further expose the limitations of EU initiatives such as the PWD and the AI Act.
Finally, shifting beyond a Eurocentric lens is crucial for engaging with algorithmic governance in non-Western contexts, while drawing inspiration from workers’ demands on a global scale (Woodcock, 2021).
Funding
The present research and publication were funded by the UCD Newman Fellowship.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
