Abstract
In this article, we consider the legal frameworks that enable workers to influence the deployment of new workplace technologies in the United Kingdom and the future of worker voice and algorithmic management in a post-Brexit Britain. The article demonstrates how the legal mechanisms that facilitate voice at work, primarily collective bargaining via trade unions, can be leveraged to influence employers’ choices regarding algorithmic management. However, it also identifies both familiar and novel challenges in using these routes to ‘negotiate the algorithm’. The article then outlines major regulatory proposals emerging from the EU that would establish greater co-determination in this context and assesses their relevance to the UK labour market. It concludes by considering whether specific regulatory measures are necessary in the UK context to enhance the exercise of worker voice regarding the deployment of algorithmic management and close the widening gap between the position of UK and EU workers.
Introduction
The world of work is undergoing a transformation because of the impact of artificial intelligence (AI) technologies. The introduction of new technologies into the workplace has long been a cause of conflict and contestation between workers and employers because of worker concern about job destruction and degradation of working conditions (Jones, 2013). This issue has taken on a new dimension in recent years, however, as employers are increasingly using AI to undertake managerial functions and assist in governing the workplace. Technologies based on complex algorithms and machine learning are being used throughout the lifecycle of the employment relationship (TUC, 2020): from deciding who to hire, issuing instructions regarding the allocation and performance of work, right through to evaluating worker performance and triggering disciplinary processes and dismissal (see Ajunwa, 2018; Baiocco et al., 2022; Bales and Stone, 2020; De Stefano, 2019; Wood, 2021).
As with previous iterations of the debate about technology and the future of work (Lederer, 1933; Standing, 1984), machines are now undertaking tasks previously performed by human workers. The key difference is that it is managerial rather than production processes that are being either entirely or partially automated (Adams-Prassl, 2019). These practices are commonly known as ‘algorithmic management’ (Mateescu and Nguyen, 2019). This label is adopted here, albeit with the caveat that it is potentially misleading as human managers themselves sometimes rely upon algorithms in their decision-making, for example when ranking job candidates or selecting workers for redundancy. In response to the significant risks that algorithmic management poses for workers’ rights and interests, labour lawyers have argued that workers must be able to exercise voice over the deployment of these technologies (De Stefano, 2019; Rogers, 2020). This is encapsulated in the idea that ‘“negotiating the algorithm” should [. . .] become a central objective of social dialogue’ and a key focus of the labour movement (De Stefano, 2019).
Although worker voice is crucial for securing dignity and decent working conditions in the age of AI, the phenomenon of algorithmic management presents new obstacles to achieving industrial democracy. This article examines the issue of algorithmic management and worker voice in the United Kingdom. It makes three significant contributions to existing work on this topic: it identifies the conceptual and practical reasons why algorithmic management poses a serious threat to effective worker voice, assesses whether workers can leverage existing legal frameworks to ‘negotiate the algorithm’, and considers how to strengthen worker voice in the era of algorithmic management. The argument proceeds as follows. Part 2 provides an overview of algorithmic management practices and identifies the novel challenges that they present for securing voice at work. Parts 3 and 4 set out and assess the existing frameworks and mechanisms for worker voice in the United Kingdom, arguing that they provide only tokenistic opportunities for workers to have their say in the adoption and implementation of algorithmic management. In response to this deficiency, Part 5 evaluates the potential of regulatory frameworks emerging from the EU as a source of worker voice in this context and Part 6 concludes by outlining a new regulatory approach which aims to ensure that workers’ interests are adequately reflected in both the development and deployment of new workplace technologies.
Algorithmic management and worker voice
Technology-driven management strategies originated in the platform economy, particularly in the sectors of driving and delivery work. While this is where they remain concentrated, algorithmic management is now deployed in a variety of workplace settings to direct, evaluate and discipline workers (Wood, 2021). Relying on extensive monitoring and live data collection regarding task performance, companies can manage large numbers of workers with minimal investment in managerial staff. Employers can even outsource real-time evaluation of workers to customers and clients through reliance on rating and reputation systems, as a result of which workers may experience ‘continuous worry about the future of their ability to access work, ask for decent pay, and maintain reasonable working conditions’ (Wood and Lehdonvirta, 2021).
Amazon is one example of an enterprise that has integrated a variety of algorithmic management strategies into key parts of its business. ‘Associates’ in warehouses carry handheld devices that give them instructions on a task-by-task basis, including the location and speed at which they are expected to work (Delfanti, 2021; Wood, 2021). The device feeds back a variety of data which is then used to assess and rank the performance of each worker. Good metrics can secure the worker a better job, while bad metrics can lead to the termination of an individual’s employment (Briken and Taylor, 2018; Lecher, 2019). Elsewhere, delivery drivers report being dismissed on the basis of data collected about their performance via an app installed on their phones (Soper, 2021). These dismissals are notable not just because of their data-driven nature but also because they are carried out entirely by ‘bot’ without any human interaction.
While algorithmic management may offer employers efficiency and productivity gains, these techniques and tools have serious and varied negative consequences for workers. First, algorithmic management heightens the level of control that employers can exercise, as instructions can be issued automatically in real-time based on technological surveillance and evaluations of work. This accentuates workers’ existing levels of subordination (De Stefano, 2020) and their lack of ‘republican freedom’ from domination (Pettit, 1997) in the workplace, because the algorithmic decisions that control their working lives will frequently appear arbitrary and there is little opportunity to influence the set of rules that workers must live by. The enormous amount of data and monitoring that provides the infrastructure for algorithmic decision-making also presents a variety of risks to workers’ human rights (Atkinson and Collins, in press).
In addition, the use of algorithmic management tools has negative effects on working conditions and the quality of work (Parent-Rocheleau and Parker, 2021). Not only do they drive fragmentation and fissuring in the labour market by enabling employers to source and manage labour outside the traditional boundaries of the firm, but they are also associated with the deskilling of jobs and higher risks to occupational safety and health (Gregory, 2021; Lenaerts et al., 2022). In terms of ensuring a healthy and safe working environment, Todolí-Signes (2021) observes that constant monitoring can lead to stress, anxiety and burn-out, and can prompt workers to behave unnaturally when they know they are being observed. Work intensification and a reduction in worker autonomy, which reduce job satisfaction and trust in one’s employer, have both been reported in relation to algorithmic management (Todolí-Signes, 2021).
The degradation of the quality of work and working conditions precipitated by algorithmic management highlights the need to secure and facilitate worker voice over the use of AI at work. De Stefano (2019) has argued that workers’ interests must shape management strategies in this area, with trade unions and collective bargaining offering a route for ‘negotiating the algorithm’. As well as its instrumental benefits in achieving decent working conditions, the ability to influence and shape one’s own working conditions is valuable for self-realisation, as participating in decision-making develops our intellectual and moral faculties by providing a unique opportunity to form, articulate and pursue our own conceptions of the good life (Dahl, 1985: 94–101). Participating in decision-making and the creation of the rules that bind us at work also supports self-esteem and well-being, as such processes involve being ‘recognised and affirmed as equals’ by others (Christiano, 2015: 464). Mechanisms promoting worker voice, such as collective bargaining, have long played a central role in achieving dignity and decent conditions at work and preventing abuses of managerial power. Seeking to exert collective control over algorithmic management and retaining worker voice in the age of AI are therefore rightly becoming a priority for trade unions and the labour movement (on union responses, see Doellgast and Wagner, 2022).
Collective bargaining provides a flexible form of regulation that can be tailored to the context of specific firms or industries. It is more responsive and context-sensitive than the blunt instrument of employment legislation, which lays down general standards applicable to the entire labour market and is incapable of keeping pace with the rapidly changing algorithmic management practices of employers. For this reason, legislation regulating algorithmic management will always need to be accompanied by frameworks that allow workers to bargain collectively over the use of these technologies. There are, however, several difficulties when it comes to achieving effective worker voice in the context of algorithmic management. Algorithmic management contributes to the fragmentation of labour markets and the workforce by enabling companies to exercise control outside the traditional boundaries of the firm. In some cases, this fragmentation results in the denial of the employment status required for statutory labour rights and the creation of a barrier to workers engaging in bargaining or industrial action (see, for example, Deliveroo, discussed below).
In addition to the familiar problem of fissured workplaces, there are at least two new and significant hurdles to achieving effective worker voice in the context of algorithmic management. The first is the lack of transparency concerning how algorithmic management systems assign tasks, monitor and evaluate workers, and take disciplinary decisions. In many instances, employees do not even know whether they are being managed remotely by a human or by an algorithm (TUC, 2020). Furthermore, substantial technical expertise, together with access to potentially proprietary or confidential information, is necessary to understand and challenge the inner workings of these technologies.
The second factor complicating the ability of worker voice to shape the use of AI and algorithmic management in workplaces is the fact that employers will usually not design these systems themselves, but rather purchase them from a third-party supplier. This adds an additional party into the situation, one with which employees and unions have no relationship, but which ultimately controls many aspects of their working lives. The trilateral relationship that exists between workers, employers and technology companies producing and selling algorithmic management tools is a new variation on existing scenarios; for example, where the employer’s functions are split across different entities, as in agency and platform work (Prassl, 2015). Labour law has struggled to attach accountability appropriately in such situations. This division of employer functions between software developers and the user company raises urgent questions ‘about who should be responsible for managerial decisions that are taken (solely or jointly) by AI systems, and whether employers can be held accountable where they do not understand or cannot foresee the results’ (Atkinson, 2021). This triangular relationship also presents a challenge for worker voice because there is currently little scope for unions to influence the internal processes of algorithmic management tools by negotiating directly with the companies developing the software. Unless these various challenges can be overcome, algorithmic management will function as a barrier to worker voice and frustrate the goal of achieving decent work in the era of AI.
Negotiating the algorithm in the United Kingdom
Despite the importance of establishing worker voice over the use of workplace technologies, the default position in English law is that workers have no say over the systems and processes adopted by employers. The common law provides employers with a broad managerial prerogative to govern the workplace, including in respect of the technologies that they implement in production and management. This freedom to manage the workplace unilaterally is generally assumed to follow from employers’ property rights in the company, in combination with their residual common law liberty to do whatever is not legally prohibited. It is buttressed by default implied terms in all contracts of employment that require obedience and cooperation from employees, including in relation to new technologies (Cresswell v Board of Inland Revenue [1984] ICR 508).
Although employers are also subject to implied terms in the employment relationship and so must act in a manner compatible with their duties of care and trust and confidence when issuing instructions about the adoption of new workplace technologies, there are no general provisions in English law that displace the managerial prerogative in this context. In contrast to some other jurisdictions, such as France and Italy (Armaroli and Dagnino, 2019; De Stefano, 2020), employee consent is not required prior to the introduction of workplace surveillance or algorithmic management technologies and nor is authorisation by collective agreement. There are, however, some avenues open for workers to exercise some influence and voice in this context.
Collective bargaining
The primary vehicle for worker voice and participation in the United Kingdom is the ‘single channel’ of trade unions participating in collective bargaining with employers (Dukes, 2008). Although employers can be prompted or even forced to enter into negotiations under the statutory recognition procedure discussed below, voluntary agreements regarding bargaining arrangements remain the dominant and preferable route for workers seeking to ‘negotiate the algorithm’. Any such agreement will fall within the statutory definition of a collective agreement to the extent it relates to the terms and conditions of employment, how technology is used to recruit or dismiss workers, or to allocate tasks and duties (Trade Union and Labour Relations (Consolidation) Act 1992 (‘TULRCA’) section 178(2)).
Unions recognised by an employer are free to seek to add algorithmic management practices to the matters covered by their collective agreements. ‘Recognition’ of a union denotes that the employer has agreed to negotiate with that union on specified matters. Recognition may be accepted voluntarily by the employer or compelled via the statutory recognition procedure. The relevant procedure is contained in TULRCA Schedule A1 and to achieve compulsory recognition the union must show that: the employer has more than 21 employees; at least 10 per cent of the proposed bargaining unit are union members; and the majority of that unit are likely to support recognition (TULRCA Schedule A1, paragraph 36). The Central Arbitration Committee (CAC) will then declare the union recognised if a majority of the bargaining unit are already members (Schedule A1, paragraphs 22 to 23) or order a ballot, in which the union must win a majority in favour of recognition with at least 40 per cent overall support in the bargaining unit (Schedule A1, paragraph 29).
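The numeric tests in the statutory procedure can be illustrated schematically. The following sketch is purely illustrative and is not legal advice: the function name and simplified logic are my own, and the CAC also applies qualitative criteria (such as the appropriateness of the proposed bargaining unit) that are not modelled here.

```python
def recognition_outcome(employees, unit_size, members, likely_support,
                        votes_for=None, turnout=None):
    """Simplified sketch of the TULRCA Schedule A1 recognition tests."""
    # Admissibility: the small-employer exclusion (more than 21 employees)
    # and the 10 per cent membership / likely majority support thresholds.
    if employees <= 21:
        return "inadmissible: small-employer exclusion"
    if members < 0.10 * unit_size or not likely_support:
        return "inadmissible: membership/support threshold not met"
    # Recognition without a ballot where a majority of the bargaining
    # unit are already union members (Schedule A1, paragraphs 22 to 23).
    if members > unit_size / 2:
        return "recognition declared without ballot"
    # Otherwise a ballot is ordered: the union needs a majority of votes
    # cast AND at least 40 per cent of the whole unit voting in favour
    # (Schedule A1, paragraph 29).
    if votes_for is None or turnout is None:
        return "ballot required"
    if votes_for > turnout / 2 and votes_for >= 0.40 * unit_size:
        return "recognition declared after ballot"
    return "recognition refused"
```

The sketch makes the double hurdle at the ballot stage visible: in a bargaining unit of 50, a union winning 22 of 30 votes cast succeeds, whereas 15 of 30 fails both limbs.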
How might a union leverage this statutory framework to gain influence over algorithmic management strategies? Following a successful application for recognition, the union has the right to enter into collective bargaining over terms and conditions related to the core issues of pay, hours and holidays (Schedule A1, paragraphs 3 and 31), as well as a right to information that is necessary for the purpose of collective bargaining (TULRCA, section 181). The recognition framework therefore provides a right to negotiate over the use of AI and algorithmic technologies in connection with these matters, for example where they are used to determine the rate of pay, set or allocate shift patterns that determine working hours, or authorise or process requests for annual leave. Individuals working for Uber and Deliveroo, for example, have their rate of pay for each task determined by an algorithm, so a union could use the recognition procedure to get a negotiating foothold in relation to that aspect of management. That said, the issues of pay, hours and holidays have long been criticised as an ‘impoverished’ agenda for mandatory bargaining (Bogg, 2009: 284), and many areas in which employers have delegated their functions to AI will fall outside the scope of the statutory recognition and bargaining procedure. This includes the use of algorithmic management in recruitment, discipline and work allocation and evaluation. The legislation thus falls far short of conferring a general right to negotiate on the use of workplace technologies.
It is possible that a successful application for recognition under the statutory procedure might prompt employers to enter into collective bargaining over a wider range of issues than those mandated by the legislation, including the use of algorithmic management. There is some evidence that employers choose to recognise a union voluntarily after it has started an application under the statutory scheme (Gall, 2020). Refusal to enter into negotiations on working conditions and terms of employment would also amount to a ‘trade dispute’ under TULRCA section 244 and so provide grounds for industrial action. Once recognised for a limited purpose, unions might therefore use strike action, or the threat thereof, to pressure employers to enter into further negotiations or collective agreements related to algorithmic management practices.
Gaining recognition is not the only challenge for unions seeking to negotiate effectively in this area, however. The complex nature and lack of transparency over the internal workings of AI-driven technologies means that union representatives may lack the understanding necessary to engage in meaningful negotiations regarding algorithmic processes, particularly if they have not received specific training. Indeed, employers themselves will often lack knowledge of the technologies’ internal processes or, for intellectual property reasons, the ability to implement changes to these processes. But even where the internal logic of algorithmic management tools cannot be the subject of genuine bargaining, other important matters remain on the table. It remains possible for unions to negotiate with employers about the adoption and implementation of technology in the workplace; what, when and how data are gathered about staff, for what purposes data and technologies are used, and what safeguarding and oversight processes are in place. It would be a major victory, for example, if Amazon workers were able to reach a collective agreement guaranteeing they would not be disciplined on the basis of certain workplace metrics or for Deliveroo riders to be assured that turning down deliveries would not result in a reduction in work being offered or some other penalty. It is therefore possible for unions to have meaningful influence over technology in the workplace and to shape the quality and conditions of work in the age of AI, even in circumstances in which they cannot access or understand its inner workings. The obscured nature of these technologies, often described as the ‘black box problem’, is not an insurmountable barrier to achieving these important goals.
Indeed, there are several instances in which unions in the United Kingdom have successfully reached an agreement regarding algorithmic management. One example is the agreement reached between the GMB union and the company Hermes, which covers some of the algorithmic processes used by the company to manage their workforce of delivery drivers. Although it does not amount to full-scale negotiation of the algorithmic management used by the company, the agreement required the company’s automated payment system to be reprogrammed to ensure that workers receive at least the minimum wage and are automatically paid any bonuses they have earned rather than having to claim for them retrospectively. The agreement also provides unions with the opportunity to conduct health and safety audits following incidents, thereby allowing them to flag up instances of algorithmic management practices that are causing safety issues, as well as introducing a process for workers to challenge decisions made by technology (Rolf et al., 2022). Another relevant agreement has been concluded between the Communication Workers Union and the Royal Mail Group. The section titled ‘Technology’ contains safeguards for existing pay and hours if new processes are introduced and guarantees that human decision-making will continue to be central to operations. These examples demonstrate the potential of collective bargaining to regulate algorithmic management, but the extent to which existing agreements in the United Kingdom contain provisions related to the adoption and implementation of new technologies, and their effectiveness, is unclear.
Consultation
While less prominent, there are other legislative mechanisms that provide workers with information and consultation rights in the context of algorithmic management. It is worth noting at this stage that data protection law provides workers with important rights over how their data are used, including some that might be relied upon to resist algorithmic management, and that it is possible for workers to authorise trade unions to exercise and enforce these rights on their behalf (Data Protection Act 2018, section 187; GDPR, Article 80). This raises the intriguing possibility that trade unions may be able to utilise data protection law on behalf of their members to influence algorithmic management practices. The focus here, however, is primarily on the effectiveness of more traditional labour law mechanisms in securing worker voice over these matters.
Most significant in the UK context are the Information and Consultation of Employees Regulations 2004 (‘ICER’), implementing the Information and Consultation Directive 2002/14. These Regulations introduce a framework for mandatory consultation with workers over managerial decisions, although only in undertakings of 50 or more employees. Where requested by more than 2 per cent of the workforce (subject to a minimum of 15 employees), an employer must enter into negotiations with workers or their representatives on the scope and content of information and consultation rights (ICER, Regulation 7). These negotiated agreements can include consultation rights related to workplace technologies and algorithmic management. Where it is not possible to reach a bespoke agreement, the standard information and consultation provisions will apply. These require consultation on the structure and probable development of employment within the undertaking and consultation ‘with a view to reaching agreement’ on decisions likely to lead to substantial changes in work organisation or in contractual relations (ICER, Regulation 20). The regulations therefore provide a route by which workers can impose an obligation on employers to consult over the use of algorithmic systems that will impact the organisation and management of work significantly.
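The twin numeric triggers just described can be sketched as follows. This is an illustrative simplification only: the function name is my own, and the procedural detail of ICER (such as the treatment of pre-existing agreements) is not modelled.

```python
def icer_request_valid(workforce, requesters):
    """Simplified sketch of the ICER 2004 trigger conditions."""
    # The Regulations apply only to undertakings of 50 or more employees.
    if workforce < 50:
        return False
    # A valid employee request needs at least 2 per cent of the
    # workforce, subject to a floor of 15 employees.
    threshold = max(0.02 * workforce, 15)
    return requesters >= threshold
```

So, for example, in an undertaking of 500 employees the 2 per cent figure (10) is below the floor, and 15 requesters suffice; in one of 2,000 employees, 40 requesters are needed.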
Another source of consultation rights relevant to the use of algorithmic management is legislation on workplace health and safety. Where a recognised trade union exists, a safety representative must be appointed, whom employers are required to consult in order to develop and check the effectiveness of health and safety measures (Health and Safety at Work etc Act 1974, section 2(6)). If no union is recognised, employers have obligations to consult employees directly or a non-union elected representative (Health and Safety [Consultation with Employees] Regulations 1996, Regulation 17). There is a general duty to consult over the introduction of any measures that may substantially affect employee health and safety, with more specific duties to consult on the health and safety consequences of introducing new technologies (Safety Representatives and Safety Committees Regulations 1977; Health and Safety [Consultation with Employees] Regulations 1996). Given the mounting evidence of the occupational health risks created by algorithmic management, it is strongly arguable that employers will always have a duty to consult over the health and safety implications of adopting these technologies.
Despite providing some entitlements in respect of algorithmic management, however, there are significant shortcomings with these consultation frameworks as means for workers to negotiate the algorithm. First, they have largely failed to take root in the United Kingdom because of its single-channel system of industrial relations, within the framework of which unions act as the sole institutions representing workers. Although the threshold for triggering employer obligations under ICER was reduced from 10 to 2 per cent in April 2020, it is not yet clear whether this will lead to increased use of this framework. Second, the limited application of ICER to undertakings of 50 or more employees excludes the majority of employers. Finally, and most significantly, consultation and information mechanisms are a weak form of worker voice. While information and consultation rights are helpful in raising worker awareness of the algorithmic practices being deployed in the workplace, they do not provide meaningful influence over management, meaning that employers ultimately retain their unfettered prerogative to make unilateral decisions in respect of workplace technologies.
Out of the loop: inadequacy of existing frameworks for worker voice
Turning to collective bargaining as the most likely route by which workers may influence management structures, there are several major hurdles to workers successfully reaching an agreement with employers over the use of workplace technologies. The first, which will rule out union recognition and negotiation for many workers, is the low level of trade union membership in the United Kingdom. Recent OECD data show that the percentage of employed people in the UK who are trade union members is currently 23 per cent, while the percentage who enjoy (some) terms or conditions established by collective bargaining lags behind other European states at just 26 per cent (OECD/AIAS, 2021). People who work in businesses with low union support and no existing agreements in place are unlikely to be able to use the recognition and agreement framework to negotiate with their employers in the short term. While there are legal mechanisms that allow workers to influence algorithmic management structures, in practice the majority will be unable to do so.
The problem of limited collective bargaining coverage is further exacerbated by the fact that those sectors of the labour market in which algorithmic management is most developed are among those with the lowest levels of union membership and collective bargaining coverage. Workers managed in this way are also often precariously employed, for instance on zero-hours or short-term contracts or those that appear to provide for self-employment, which creates further practical difficulties for union organising (Wright, 2013). Collectivisation efforts are less likely to succeed where workers are fearful of losing their jobs or lack the common and stable working hours or communal workplaces that provide opportunities for interpersonal interactions and building worker solidarity.
The concentration of algorithmic management among precarious workforces highlights potential legal barriers to these workers exercising voice collectively. Platform or gig workers who are subject to algorithmic management may well struggle to identify a suitable ‘bargaining unit’ for the purpose of union recognition and negotiations. Bargaining units must usually have identifiable boundaries, such as working in a particular location (see R(Cable & Wireless Services UK) v CAC [2008] ICR 693), but gig work is fragmented in a variety of senses so does not easily fit this model. More fundamentally, the legal definition of a trade union in the United Kingdom is a collection of ‘workers’, and only such entities may seek recognition and negotiate through the statutory route. Access to collective bargaining and the ability to undertake lawful industrial action therefore depends on an individual’s status as either an employee or a worker. Because of their superior bargaining power, employers can attempt to engage individuals in a manner that falls outside the legal category of ‘worker’ and therefore deny them the right to be represented by a union in collective bargaining and to exercise voice over algorithmic management practices in their workplace.
The high-profile case of Deliveroo illustrates the problem precarious workers have in asserting their right to bargain collectively (R(IWGB) v Central Arbitration Committee [2021] EWCA Civ 952). The Independent Workers of Great Britain union (IWGB) applied to the Central Arbitration Committee (CAC) to seek recognition by Deliveroo for the purpose of collective bargaining. The CAC found the riders had a genuine and unconditional right to allow others to perform work using their account and thus lacked the obligation of personal service needed for worker status. The Court of Appeal upheld this conclusion, as a result of which the union was not entitled to seek recognition for collective bargaining via the statutory framework. As individuals whose working lives are under significant algorithmic control, accessing the recognition framework would have enabled the union to at least attempt to rein in and shape Deliveroo’s management strategies. Although to date this has been blocked by a legal technicality, the Supreme Court has granted permission to appeal. Interestingly, a recent voluntary agreement between Deliveroo and GMB, a different UK union, appears to confirm the self-employed status of Deliveroo riders; something that others would certainly contest (Atkinson and Dhorajiwala, 2019; Bogg, 2019).
We have seen that mechanisms for worker voice in the United Kingdom are hampered by low trade union density and legal restrictions on their scope. There are also severe limitations for workers who are successful in accessing these frameworks. If a union manages to achieve recognition, whether voluntarily or through the statutory procedure, there remains no obligation for employers to reach or even seek to reach an agreement, nor is there a referral mechanism for compulsory arbitration. Furthermore, as discussed above, many instances of algorithmic management will fall outside the scope of the mandatory topics for collective bargaining. Industrial action is the primary means by which unions can pressure employers to agree to their demands over the use of workplace technology, but UK law is notoriously restrictive in its approach to lawful industrial action (see Ford and Novitz, 2016). Unions are stymied by extensive procedural and quorum requirements, which, if not complied with, allow employers to seek an injunction or to sue for extensive damages if industrial action goes ahead.
There is a final limitation of these collective bargaining processes that is likely to bite in circumstances of algorithmic management. As has been touched upon, AI management tools will usually be purchased from a third-party company, creating a triangular relationship between the worker, their employer, and the company creating the tools that control their work activities. The recognition procedure can be used only to seek negotiations with one’s direct employer, even if, in practice, substantial aspects of one’s working conditions are determined by a different party. In circumstances in which an employer is buying in rather than developing its own technology, workers may still be able to bargain over where, when, and how algorithmic management tools are used in the workplace. But to achieve genuine influence over the algorithmic processes that are being deployed to direct and manage work, workers need an alternative way to make themselves heard by the third-party software suppliers.
The European Union and algorithmic management of work
Recognising the step-change that AI represents in the management of workers, the EU has responded with a number of proposed measures. If enacted, these would not have any legal weight in post-Brexit Britain, but they would nevertheless provide valuable guidance and inspiration for domestic legislation. In addition, there would be pressure on the United Kingdom to adopt equivalent measures because the UK–EU Trade and Cooperation Agreement requires that the United Kingdom keep pace with EU regulation where this is necessary to prevent significant barriers to trade.
At first glance, the most relevant text would be the proposed draft AI Regulation (Proposal for an Artificial Intelligence Act, COM(2021) 206 final). This draft places AI that manages work and workers in the ‘high-risk’ category (see Annex III of the draft Regulation), which entails compliance, governance and accuracy responsibilities, as well as requiring human oversight. No external control is necessary, however, as the draft Regulation relies on self-assessment by the developer, a choice that has been strongly questioned (Kelly-Lyth, 2021). The draft Regulation can also be criticised for focusing primarily on the developer of the AI system, secondarily upon the party that implements the system (that is, the employer in cases of algorithmic management), and not at all on the subjects of the system, such as the workers who are being managed by a high-risk system. It does not grant workers rights to provide input into the development or implementation of the technology or to challenge its results, and it therefore unfortunately has little to teach us in terms of giving individuals voice in an algorithmically managed workplace. A second major criticism of its current drafting is that the responsibilities outlined in the text could be seen as the maximum expected of developers and employers, rather than the minimum (De Stefano and Aloisi, 2021). This would cast serious doubt over collective agreements or legislation that give additional rights to unions during implementation or otherwise regulate the use of technology to manage workers.
More promising is the proposed Directive on platform work, which is tailored more specifically to an area of burgeoning algorithmic management (Proposal for a Directive of the European Parliament and of the Council on improving working conditions in platform work, COM(2021) 762 final). Given the obscured nature of algorithmic management, one crucial aspect of the proposed Directive for trade unions or workers seeking to shape technological processes is Article 6, which would oblige employers to inform staff about automated monitoring or decision-making systems that are in place. This information must include the categories of monitoring, the types of decisions being made by technology, the parameters of decision-making, and the ‘grounds’ for major decisions, such as to terminate, restrict or suspend a worker from the platform. While this has obvious benefits for individual staff members in understanding their own treatment, it also provides workers with important information for collective bargaining, as knowledge of algorithmic management is a precondition of effective negotiations over its use. The draft also includes a right to explanation and review of an automated decision and obligations to monitor automated systems. Although the Directive would apply to algorithmic management only in the context of platform work, these rights, importantly, would apply to all digital platform workers, irrespective of their employment status. This significant move recognises that individuals performing platform work deserve these protections regardless of the precise shape of their relationship.
Disappointingly, the Directive contains no mention of the potential role that trade unions could play in monitoring automated systems, but Article 9 does require that employers consult worker representatives where the introduction of, or substantial changes to, monitoring systems are likely and entitles these representatives to be assisted by an expert. Additionally, a different form of support is given to unions representing platform workers, which is designed to help overcome the challenges of organising a remote workforce. Article 15 places platforms under an obligation to ensure that workers can contact each other and their representatives through the platform’s infrastructure or other effective means and requires states to legislate to prevent digital platforms from accessing or monitoring these communications. The establishment of a confidential method of communication via the app through which workers offer their services would be extremely useful to trade union organisers and help overcome a key barrier to union organising that is created by the nature of platform work.
If enacted, the Directive would establish a weighty set of new rights which the United Kingdom would do well to adopt. Its fundamental shortcoming, however, is that the scope of these rights is limited to digital platform workers. This is unjustifiable given that an Amazon worker managed by an automated handheld device and subject to extensive monitoring and algorithmic decision-making is not, we would argue, in a substantially different position to an Uber driver subject to an app’s direction and control. Rights to information, to receive an explanation and to benefit from monitoring obligations, union communications, and consultation in relation to algorithmic management strategies should therefore not be confined to digital platform workers. These rights are needed by all workers in response to the spread of algorithmic management practices.
Despite its flaws, the proposed Directive on platform work would be a significant step forward in facilitating voice for algorithmically managed workers and improving their working conditions. We therefore argue that the United Kingdom should adopt this measure as the starting point and baseline for its future regulation of algorithmic management. This is highly unlikely in the current political and regulatory climate, however. The United Kingdom in the post-Brexit period is a highly hostile environment for workers. Among other things, the Conservative government has changed the law to permit agency staff to replace workers taking industrial action (see the Conduct of Employment Agencies and Employment Businesses (Amendment) Regulations 2022) and has promised further legal restrictions on trade unions. Indeed, the government’s attitude towards EU regulation is reflected in the recently introduced Retained EU Law (Revocation and Reform) Bill, which, if enacted, would repeal all secondary legislation implementing EU law unless positively retained by the end of 2023. This drastic measure threatens to remove a broad range of employment rights derived from EU law, including the consultation rights discussed above.
Pending a change in government, the United Kingdom is also unlikely to adopt any EU regulation on algorithmic management, given its current hands-off and deregulatory approach to AI and data rights. For instance, both the National AI Strategy (Office for Artificial Intelligence et al., 2021) and the government’s AI Regulation Policy Paper (DCMS and DBEIS, 2022) reject the need for general legislation governing AI in favour of continuing its sector-led and soft-law approach. Furthermore, domestic data protection law is undergoing deregulatory reforms aimed at ‘reducing the burdens on business’ and increasing productivity and job creation (DCMS, 2022).
The future of voice and algorithmic management in the United Kingdom
It has so far been argued that domestic legal frameworks fall short of enabling workers to fully ‘negotiate the algorithm’ and that the current government is unlikely to adopt any future EU-led regulation of AI and algorithmic management. The looming divergence between the United Kingdom and the EU on these issues may well trigger the dispute resolution process under the EU–UK Trade and Cooperation Agreement and potentially lead to rebalancing measures. Under Article 411 of the agreement, consultation and arbitration processes can apply where the United Kingdom’s failure to keep pace with EU regulation amounts to a ‘significant divergence’ that has ‘material impacts on trade or investment’. It seems plausible that these conditions will be met if the United Kingdom fails to implement the proposed AI Act and the Directive on platform work or equivalent measures. Indeed, the United Kingdom’s deregulatory approach is expressly intended to gain an economic advantage by attracting AI companies and investment to the United Kingdom. It is doubtful, however, that the threat of arbitration and potential rebalancing measures will be sufficient to prompt any change in direction.
If the United Kingdom were to strike out on its own rather than adopt the EU measures discussed above, what regulatory approach should it adopt to ensure that workers can exercise voice and have their interests taken into account in the age of AI-driven workplaces? Without providing a detailed account of specific reforms, it is possible to identify three broad strategies that would be necessary to overcome the challenges posed by algorithmic management and which should be pursued as part of any adequate regulatory response.
The first element is the establishment of mechanisms that provide workers with a greater degree of information about what technologies are being used in the workplace and how. Workers are too often unaware of the algorithmic tools being used to perform managerial processes and functions, making it impossible for them to form and express views on these matters. Individuals do have rights under data protection law to be informed of, and provided with, the personal data held by their employers (GDPR Articles 13 to 15). However, the informational asymmetry that currently exists would be better resolved by introducing a collective dimension and extending employers’ obligations to consult on the introduction of new workplace technologies wherever they will have a significant impact on managerial processes or working conditions. Additionally, and drawing from existing health and safety law, these new duties should be coupled with the creation of mandatory workplace representatives for the purposes of consultation about workplace technologies. The UK trade union Prospect has suggested that local union branches appoint a ‘technology representative’ and seek to establish dedicated Technology Forums to discuss these issues with employers (Prospect, 2021), and the union Unite advocates the creation of ‘new technology officers’. We suggest that these promising initiatives should be formalised and underpinned by law.
While undoubtedly important, information and consultation processes provide workers with little scope to influence the adoption and implementation of algorithmic management practices. They must therefore be coupled with frameworks that enable workers and trade unions to make themselves heard and have their interests taken into account in this context. Part of the solution here must be to strengthen the role and rights of trade unions in representing their members on all workplace issues, including algorithmic management (see for example Ewing et al., 2016). Another, more targeted way of providing worker voice and influence over algorithmic management would be to augment workers’ and unions’ role in the preparation and auditing of impact assessments that are required by data protection law. These assessments must consider the risks to individuals’ rights and identify measures to address these risks (GDPR, Article 35), but there is no requirement that they be made public or shared with unions. Giving unions a role in auditing and monitoring any impact assessments would be a concrete way to enhance workers’ influence over algorithmic management.
More novel from a labour law perspective is the need to establish a mechanism enabling workers to exercise voice over algorithmic management technologies at an earlier stage, when they are being developed and ‘trained’ by third-party companies. This is necessary because of the division of responsibility for workplace decisions between employers and developers that is inherent in algorithmic management. Rather than the legal dispersal of responsibility seen in outsourcing or agency work, algorithmic management diffuses responsibility for management decisions across employers, developers and software intermediaries (Adams-Prassl, 2019). Although unthinkable in the United Kingdom’s current political and regulatory climate, from a labour protection point of view the best solution to this would be to impose licensing requirements on management technologies. Independent expertise could be used to verify that the software is compliant with local labour laws, and unions could be granted rights to oversee testing and trials before programs are permitted to be used freely (De Stefano, 2020).
Although not a solution to workers’ inability to exercise voice and influence companies developing algorithmic management tools, an alternative way of incentivising tech companies to take workers’ interests into account would be to impose joint liability on them for any breaches of employment law that arise from the use of algorithmic management technologies that they develop or market. This is analogous to arguments in favour of joint liability in the context of other triangular working arrangements in which employers’ functions are spread across multiple agents, such as in the case of agency workers (Davidov, 2004). A joint liability model is not alien to the regulation of technology. Under Article 82 GDPR, a person who has suffered damage as a result of an infringement of the Regulation has the right to compensation from the data controller or the data processor. Where more than one party is responsible for damage caused by processing, each controller or processor is held liable for the entire damage, to ensure effective compensation (Article 82(4)). The only way to avoid liability is to prove no responsibility for the event that gave rise to the damage. A similar model should be adopted to regulate and attribute responsibility between employers who implement management packages and their developers. If there is found to be a breach of a worker’s labour rights – for example, because the manner in which their contract was terminated was unfair or they were not paid the minimum wage – liability would be imposed on both parties. The only way to avoid that liability would be to demonstrate that one actor took all reasonable steps to prevent the breach from occurring and held no responsibility for the breach. Joint liability would incentivise developers to design packages that fit within the jurisdictional context that they will be used in, rather than considering purely how work can be optimised and efficiencies made in an acontextual bubble. It would be difficult for a developer to avoid liability, for example, if it could not show an understanding of the employer’s obligations in a particular jurisdiction and how it sought to meet those responsibilities in the management package.
Finally, although the focus here is on frameworks securing worker voice over algorithmic management, an adequate regulatory response to algorithmic management must also include other forms of regulation that run parallel with and support these mechanisms. These will be particularly important in the United Kingdom, where the low union density means that collective bargaining will not be effective in many workplaces and sectors. Even without this problem, however, it would remain important for legislation to set minimum general standards of decent work in the age of AI. This means ensuring that the law provides protection against biased or discriminatory algorithms, as well as providing redress against decisions and dismissals that are unfair or infringe workers’ human rights. In addition, there will also need to be prohibitions on certain uses of technology in the workplace and categories of algorithmic management where the impact on workers may be so severe that they must be regarded as ‘automatically unfair uses of technology’, analogous to automatically unfair categories of dismissal that cannot ever be justified. Leading candidates for such ‘automatically unfair’ uses of technology that should be banned in workplaces include particularly intrusive surveillance tools such as automated facial recognition, biometric and emotional tracking devices, and any requirement for employees to have technological implants.
Conclusion
Algorithmic management presents new challenges for worker voice and participation in managerial processes. It has been argued here that a multi-pronged approach is necessary to achieve the goals of effective voice and decent work in the age of AI. Such an approach would compensate for the informational and power asymmetries that are accentuated by algorithmic management and facilitate input by workers and trade unions throughout the lifecycle of these technologies: from their design, development and testing through to their adoption and implementation. Much remains to be done, however, to identify and design effective regulatory frameworks to counter the threat posed by algorithmic management to the future of decent work.
Footnotes
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
