Abstract
The rapid evolution of Artificial Intelligence (AI) and algorithmic management (ARM) systems is transforming the workplace, raising significant debates about net employment, work organisation and working conditions. This article critically reviews key European policy aimed at regulating AI and ARM at work and identifies gaps in the current policy framework. The European Union (EU) has adopted a distinct human-centric approach to AI, prioritising ethical and trustworthy design and use, rooted in its fundamental values. Though generally welcomed, the risk-based approach of the EU AI Act is insufficient, on its own, to protect workers from the potential harms of AI. Hence, specific additional employment legislation, at the EU or national level, combined with collective bargaining at the sectoral and company levels, is needed to address the regulatory gap. Upskilling of social partners, including trade union officials, works council members and employers, is required so that they can gain insight into the ‘black box’ of algorithms and have meaningful input into the design and deployment of AI systems.
Introduction
The unprecedented pace at which Artificial Intelligence (AI) systems are advancing, along with the adoption of algorithmic management (ARM), cloud computing and machine learning, has generated significant interest in the impact of AI on the future of work. It is widely accepted that the use of AI can bring significant benefits to the economy, and workplace use is rising. This has led to debates about the impact of this recent phenomenon on net employment levels, on the way that work is organised, and on working conditions. Despite this growing interest, practitioners and policymakers are only just beginning to grapple with how to balance the desire to foster innovation and productivity improvements with the need to protect workers from the potentially harmful aspects of AI.
Within this context, this article narrows the discussion from broad debates on digitalisation and the future of work to the specific challenges posed by AI-powered tools for the organisation of work. The article pursues two interrelated objectives. First, it provides an assessment of key European policies concerned with the regulation of ARM and AI at work. Particular attention is given to whether regulations address AI-driven monitoring and control systems, and the risks these systems can create for workers. Second, the analysis identifies a number of areas where improvements are needed in the EU policy framework governing work-related use of AI.
This focus on AI is timely because recent innovations have accelerated its uptake. Additionally, regulation is typically viewed as a key counterbalance to the asymmetries of power and information generated by AI-driven monitoring and control systems. The European Union (EU) is often seen as being at the forefront of AI policy. Clarifying the legal framework is essential to protect workers’ rights, ensure transparency, and promote fair treatment in an evolving employment landscape. The article contributes by providing insights for policymakers and industrial relations practitioners concerned with job quality policies in the EU and beyond.
What is AI and how is it being used in work settings?
While there are numerous definitions, the European AI Strategy defines AI as ‘systems that display intelligent behaviour by analysing their environments and taking actions – with some degree of autonomy – to achieve specific goals’ (European Commission, 2018: 1). According to the OECD, different AI systems vary in their levels of autonomy and adaptiveness after deployment (OECD, 2024: 4). While early forms of AI (known as expert systems) have been around for over four decades, newer types of AI have the capacity to ‘learn’ (to the extent that they can be fed the relevant training data), enabling the digitalisation of previously non-automatable processes (OECD/BCG/INSEAD, 2025).
Until recently, the emphasis was on anticipating how many jobs would be replaced by the introduction of new technologies such as AI (see, for example, Frey and Osborne, 2017; Nedelkoska and Quintini, 2018). Mainstream debates on automation are often driven by a techno-deterministic approach (Frey and Osborne, 2017) which, put simply, holds that technology follows an inevitable and logical autonomous path independent of human intervention or social context (Bimber, 1990). These debates assume that new technologies will eliminate many monotonous and hazardous jobs, leaving many workers without employment, yet enhancing the autonomy and creativity of those who retain work (De Stefano and Taes, 2023).
Such fears of mass technological unemployment have, so far, proved unfounded. Because modern AI can automate some cognitive processes and augment human capabilities previously thought non-automatable, it is more likely to lead to changes within tasks and jobs than to mass unemployment (De Stefano, 2019; Doellgast et al., 2023; Fernández-Macías et al., 2025; Pulignano, 2025; Tolan et al., 2021). Consequently, AI is likely to affect a broader range of occupations than previous technologies (Tolan et al., 2021).
While digital monitoring and ARM are already in wide use, AI has the potential to enhance digital systems and foster more sophisticated digital monitoring and ARM, support new business models, improve real-time information flow and decision-making for greater efficiency, and generate productivity gains (González Vázquez et al., 2025; Milanez et al., 2025; Pulignano, 2025). For instance, AI applications range from automating back-office services and supporting customer services to replacing labour-intensive tasks with AI-enabled robots, smart machines, and computer-aided engineering and design in manufacturing (De Stefano and Doellgast, 2023).
Recent studies have investigated both the uptake of AI (Eurostat, 2024; González Vázquez et al., 2025; Lane et al., 2023; Rammer et al., 2022) and the relationship between AI and working conditions (see Doellgast et al., 2023; Milanez et al., 2025).
Firm-level adoption of AI varies significantly, being higher in large organisations and in particular sectors. A recent study found that Denmark and Finland lead the EU, with over one-quarter of firms utilising at least one AI technology, significantly higher than the EU average of 13.5% (Eurostat, 2024). In contrast, Germany's adoption rate remains low, with only 5.8% of firms using AI (Rammer et al., 2022). A recent OECD study of the manufacturing and finance sectors found that AI is shaping core business processes (Lane et al., 2023). In finance, common applications include data analytics (52%) and fraud detection (50%); in manufacturing, AI is primarily used for optimising production processes (60%) and maintenance (40%) (Lane et al., 2023). At the worker level, one-third of EU workers reported using AI tools for work in the past year (González Vázquez et al., 2025). This usage is highly polarised: white-collar workers (managers, professionals, technicians, clerks) report usage above one-third, primarily for text-related tasks such as writing and translation, whereas elementary occupations report far lower usage, at just 6%. Technical uses of AI are more diverse, appearing in both high-skill and some elementary occupations (González Vázquez et al., 2025).
Despite the potential for productivity and efficiency gains, there is growing evidence that the adoption of AI drives task reorganisation and can detrimentally impact workers (Milanez et al., 2025). Across industries, AI-based algorithms are integrated into new tools to monitor worker behaviour and performance, and to automate traditional human resource management processes such as recruitment and selection, performance evaluation and even termination (Aloisi and De Stefano, 2022; Bernhardt et al., 2023; De Stefano and Doellgast, 2023; Kellogg et al., 2020). This expansion, combined with the practice of digital monitoring and automated task allocation, amplifies the risks of algorithmic discrimination, constant surveillance and unfair data processing (De Stefano and Taes, 2023; González Vázquez et al., 2025; Milanez et al., 2025).
In terms of occupational safety and health (OSH), AI offers a range of potential benefits, including improving worker health and making some physical tasks safer (Jarota, 2023). Yet AI also introduces new physical, organisational and psychosocial risks, such as increased time pressure, poor mental health at work and fear of job loss (González Vázquez et al., 2024; Jarota, 2023; Todolí-Signes, 2021; Urzi Brancati, 2024), while malfunctioning AI can lead to workplace accidents and injuries (Jarota, 2023). Physical and psychological risks also intersect. For instance, AI processing of workers’ sensitive personal data, such as health status, poses a potential discrimination risk (Ajunwa, 2019; De Stefano and Taes, 2023) and may contribute to work-related stress (Parent-Rocheleau and Parker, 2021). Regarding the categorisation of risks, Hassel and Özkiziltan (2023) usefully differentiate between direct risks (AI-induced discrimination, surveillance and information asymmetries) and indirect risks (those enhanced by automation), suggesting that direct risks should be addressed by national legislation, while indirect risks should be monitored at the sectoral level.
While several recent studies have found that both workers and employers are generally positive about AI, there are concerns, including about job loss, that require monitoring (González Vázquez et al., 2025; Lane et al., 2023). Relatedly, the ways in which employers use digital technologies, including AI, are not always visible to workers (Bernhardt et al., 2023), and managers themselves do not always understand how AI decisions are made (De Stefano and Taes, 2023). Evidence also points to the importance of country-specific factors for the uptake of AI, such as differing regulations, institutional frameworks and socio-cultural attitudes towards technology deployment (De Stefano and Doellgast, 2023; Fernández-Macías et al., 2025). Moreover, while workers tend to trust employers when it comes to the implementation of AI in the workplace, more can be done to improve trust. In particular, OECD survey findings show that both training and worker consultation are associated with better outcomes for workers (Lane et al., 2023).
Given the abovementioned potential problems associated with AI use in work settings, regulation becomes important. The next section provides an overview of the European approach to AI.
The European approach to AI: human-centric and trustworthy
The EU has adopted a distinctive, human-centric approach to AI policy making (Özkiziltan and Landini, 2025). This is particularly evident in its commitment to ethical and trustworthy AI, whereby the regulatory framework aims to prioritise data privacy, security, and ethical considerations (European Commission, 2018: 14). For instance, the European AI Strategy of 2018 sets out aspirations for the EU to champion an approach to AI that benefits people and society. This goal is linked to values enshrined in Article 2 of the Treaty on European Union and the EU Charter of Fundamental Rights (European Commission, 2018: 14).
As stipulated in the AI Strategy, in June 2018 the Commission established the independent High-Level Expert Group on Artificial Intelligence (AI HLEG), whose members were tasked with developing Ethics Guidelines for Trustworthy AI. Published in 2019, these guidelines define three interrelated components that must be met throughout an AI system's life cycle: AI should be lawful, ethical and robust. The guidelines call for AI systems to be developed, deployed and used ‘in a way that adheres to the ethical principles of respect for human autonomy, prevention of harm, fairness and explicability’ (AI HLEG, 2019). To achieve trustworthiness, the guidelines articulate seven specific requirements for AI systems: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability (AI HLEG, 2019).
The AI Continent Action Plan was adopted in April 2025. The Plan defines specific actions and funding instruments to accelerate the development and uptake of AI across sectors, and sets out how the EU intends to become a global leader in AI (European Commission, 2025a). By harmonising AI policy across Member States, it aims to remove fragmentation and create the right conditions, via the EU's own distinctive approach to AI, for investment and innovation. The key proposed actions focus on nurturing a strong AI talent base, stimulating investment in AI and enhancing AI skills, developing a strong policy framework to ensure trust and security in AI systems, and promoting the EU's vision for sustainable and trustworthy AI internationally (European Commission, 2025a).
Complementing the Plan, the Apply AI Strategy is the EU's overarching sectoral AI strategy, designed to enhance the competitiveness of strategic sectors and to strengthen the EU's technological sovereignty (European Commission, 2025b). Under this strategy, the Apply AI Alliance (formerly the European AI Alliance) was created as a forum for stakeholders and policymakers on AI policy, and the AI Observatory was established to support sectoral discussions (European Commission, 2025b). The EU's aims around AI are also supported by a host of additional policies targeting areas such as digital skills, investment in infrastructure and research funding. Having detailed the EU's overarching approach to AI policy, the next section discusses the key EU policies in this space.
Key relevant European policy regulations around AI at work
While the EU has not adopted specific legislation on AI in the workplace, it is, arguably, at the forefront of digital policy and regulation, having adopted several initiatives to address the impact of workplace digitalisation. This section discusses key European instruments, including the General Data Protection Regulation (GDPR), the European Directive on improving working conditions in platform work (PWD), the EU Artificial Intelligence Act (the EU AI Act), and other EU regulations relating to informing and consulting employees, OSH, non-discrimination and digital rights.
The GDPR is a landmark piece of EU legislation designed to protect individuals’ data privacy (European Parliament and Council of the EU, 2016). Enacted in 2016 and enforceable since 2018, the GDPR sets out a comprehensive framework for how personal data should be collected, processed and stored. It is the key horizontal data protection framework in the EU, granting workers individual rights over their personal data and over automated decision-making (Aloisi et al., 2025).
Arguably, the GDPR's general principles of lawfulness, fairness, transparency, purpose limitation, data minimisation and accuracy can, if enforced, mitigate the risk of harmful AI systems (De Stefano and Wouters, 2022). While Article 88 of the GDPR recognises the importance of collective agreements in governing the processing of workers’ data and ARM decision-making, the interpretation of these principles in the workplace is largely untested, and several commentators contend that the GDPR does not fully address the specific circumstances of the employment relationship (see, e.g. Adams-Prassl et al., 2023; Aloisi et al., 2025; De Stefano and Wouters, 2022).
Introduced in 2024, the EU's PWD represents a significant milestone in regulating work mediated through digital labour platforms, with specific provisions on ARM, including rights to human oversight and worker representation (European Parliament and Council of the EU, 2024). However, because the Directive applies only to platform-mediated work, which accounts for a small percentage of the EU workforce, it leaves regulatory gaps for ARM in traditional workplaces. As González Vázquez et al. (2025) illustrate, the main characteristics of platform work (use of digital tools, being subject to digital monitoring and ARM) also apply to a significant portion of the European workforce who do not formally work for digital platforms.
The EU AI Act, officially adopted in 2024, is the world's first comprehensive AI law (European Union, 2024). The Act establishes a tiered compliance framework that is being phased in progressively, with most requirements taking effect by mid-2027 (EU, 2024). As a piece of horizontal legislation, the Act is intended to ensure that AI systems developed and used in the EU are trustworthy, with safeguards to protect people's fundamental rights. Its core goal is to establish a harmonised internal market for AI across the EU based on a product safety and risk-based approach (Özkiziltan and Landini, 2025). The Act establishes the AI Board as the main forum for discussion on AI with Member States; the European AI Office is charged with working closely with affected organisations; and supervision and enforcement fall within the remit of national market surveillance authorities (EU, 2024).
In the employment context, the Act classifies two types of AI systems as ‘high risk’ (i.e. those that pose a significant risk to fundamental rights): systems used in recruitment and selection (such as targeted job advertisements, application filtering and candidate evaluation) and systems used to make decisions affecting work-related relationships (such as decisions on promotion and termination, work allocation based on personal traits or behaviours, and the monitoring and evaluation of worker performance and conduct). Article 4 of the EU AI Act requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff and other persons dealing with AI systems on their behalf, including the persons targeted by such systems (European Union, 2025a).
Discussion
While the social partners welcomed the EU AI Act, a number of shortcomings have been identified (see, e.g. Aloisi et al., 2025; De Stefano and Wouters, 2022; Özkiziltan and Landini, 2025). For example, Aloisi et al. (2025) contend that the Act's technology-centred approach to ARM prioritises compliance with technical standards over the protection of workers’ rights, while De Stefano and Wouters (2022) argue that once an AI system is deemed ‘trustworthy’, employers are free to use it in the workplace. Further, Wörsdörfer (2024) cites the Act's lack of enforcement, absence of procedural rights and remedies, inadequate worker protection and institutional ambiguities among its main weaknesses. De Stefano and Taes (2023) caution that EU-level legislation to regulate AI and platform work risks being ineffective or counterproductive, warning that it could unintentionally undermine more promising and effective national regulations and trade union actions.
Özkiziltan and Landini (2025) contend that the Act's shortcomings may give rise to two key trends that could significantly influence the future of work in Europe: increasing autonomy for AI tech companies in AI system design, since providers self-assess and self-certify the compliance of high-risk systems, and an increase in the power of employers, since the Act's market-driven objectives risk undermining workers’ rights.
Crucially, the abovementioned laws intersect with other existing EU policies that will frame the future of AI in European workplaces. For instance, the European Declaration on digital rights and principles for the digital decade sets out intentions and commitments to guide policymakers on digitalisation. This Declaration, drawing on the EU's human-centric approach, calls for digital education, training and skills, fair and just working conditions, as well as freedom of choice in interactions with algorithms and AI systems, among other matters (European Union, 2023).
The 2002 Directive on informing and consulting employees provides a general framework for consultation when AI is introduced (European Parliament and Council of the EU, 2002), and the OSH Framework Directive requires employers to perform risk assessments when introducing AI into the workplace (Council of the EU, 1989). However, EU-level and national-level frameworks may need to be reviewed to ensure they are adequate for dealing with changes to work organisation when AI is introduced. The EU AI Act primarily places liability on the supplier, requiring the employer only to use AI tools according to instructions. Jarota (2023) argues that this product-based approach fails to address the employer's fundamental OSH obligations. The OSH Framework may therefore need to be strengthened, because effective legal safeguards are needed to explicitly establish the employer as responsible for assessing, controlling, reducing or eliminating AI-related risks to workers (Jarota, 2023). Furthermore, while EU non-discrimination laws are relevant (e.g. Directive 2000/78/EC against discrimination at work (Council of the EU, 2000) and Directive 2006/54/EC on gender equality in employment (European Parliament and Council of the EU, 2006)), De Stefano and Wouters (2022) point out that a lack of clarity around the point at which courts would require employers to respond to allegations of discrimination related to AI systems has led to calls for scrutiny procedures, such as certification and auditing.
In summary, while several existing EU instruments cover certain aspects relevant to the regulation of AI at work, some of these instruments may require further strengthening. In its political guidelines for the 2024–2029 mandate, the European Commission announced a forthcoming initiative on ARM and possible legislation on AI in the workplace, following consultation with social partners (European Commission, 2024). However, the Commission recently stated that experience in applying the new horizontal rules is required before the introduction of any possible new legislation on AI (European Commission, 2025a).
Conclusion
Although the EU is seen as a policy leader in regulating AI, this review has identified several shortcomings in existing policy initiatives, specifically around how EU policy addresses risks to workers. The main challenge facing the EU in establishing a regulatory framework for AI and ARM in the workplace is to strike a balance between fostering economic innovation and productivity gains and safeguarding workers from associated harms, such as work intensification, reduced autonomy, algorithmic discrimination, and physical and psychosocial risks. The use of AI and ARM is growing, and it affects the full range of employer functions, from recruitment to termination, necessitating robust governance.
A critical review of the EU's landscape on AI at work starts with its foundational commitment to a ‘human-centric’ and trustworthy approach to AI. While the GDPR offers data protection, its application to the specific dynamics of the employment relationship remains largely untested. Furthermore, despite the new PWD containing provisions on ARM, its narrow scope means that it applies only to digital platform (e.g. gig) workers. Consequently, a clear regulatory gap exists: most of the workforce are engaged under traditional employee–employer arrangements and are therefore excluded from protection under the PWD.
The EU AI Act classifies two types of AI systems as ‘high risk’: systems used in recruitment and selection and systems used to make decisions affecting work-related relationships. Nevertheless, some shortcomings have been identified. The Act is viewed by some critics as taking a ‘technology-centred approach’, prioritising technical compliance over the explicit protection of workers’ rights. There are concerns that this framework risks increasing the power of both AI technology suppliers and employers, and that it lacks sufficient mechanisms for enforcement, procedural rights and remedies for workers. The future uptake of AI in the workplace, including AI-driven management tools, will depend on factors such as cost, trust in management and, in some cases, worker pushback linked to broader concerns about the impact of digital technology and surveillance capitalism on society as a whole. This complexity demands stronger safeguards at all regulatory levels.
The suite of existing policies in this space is inadequate and may require revision to effectively manage workplace AI risks. Specifically, the OSH Framework needs strengthening, because the AI Act's product-based liability approach fails to address the employer's fundamental responsibility for assessing and mitigating AI-related risks to health and safety. Similarly, non-discrimination laws need clearer scrutiny procedures, such as certification or auditing of AI systems, to address algorithmic bias. Future governance, as indicated by the European Commission, may involve a specific initiative on ARM and possible legislation on AI in the workplace. However, the current policy intention is first to gain experience in applying the newly adopted horizontal rules of the AI Act before pursuing new, more targeted legislation.
Ultimately, policy needs to be more technology-neutral, focusing on the wider challenges of digital transformation, not just those posed by AI. Emphasis must shift to enabling workers to upgrade their cognitive skills while protecting them from AI flaws such as hallucination. This requires substantial investment in education and training to reinforce digital and non-automatable soft skills. On regulation, collective agreements are the best mechanism to address sector-specific technology use, provided they have strong institutional support.
Finally, the industrial relations research agenda must prioritise investigating AI and ARM applications that threaten job quality. Concurrently, expanding the evidence base on how social partners are influencing the design and implementation of these systems in the workplace will inform debates on achieving a human-centric and trustworthy approach to AI.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
