Abstract
Employers increasingly use digital technologies in the workplace to capture and analyze worker data, electronically monitor their workers, and manage them using algorithms. In this article, the authors analyze employers’ use of data-driven systems in a diverse set of industries and identify a range of potential harms to workers, including bias and discrimination, de-skilling, unsafe work speeds, and loss of autonomy and dignity. In light of the current absence of regulation or oversight, the authors argue that workers deserve a robust set of 21st-century labor standards regarding digital technologies. They lay out a detailed public policy framework that establishes worker rights and employer responsibilities to ensure that the data-driven workplace benefits, rather than harms, workers.
Across the country, employers are increasingly using data and algorithms in ways that can have profound consequences for wages, working conditions, race and gender equity, and worker power. How employers use digital workplace technologies is often not obvious or even visible to workers or policymakers. For example, hiring software scores job applicants based on the tone of voice and word choices captured during video interviews. Employers are using algorithms to predict whether workers will quit, become pregnant, or try to organize a union, affecting decisions about job assignment and promotion. Call center technologies are analyzing customer calls and nudging workers in real time to adjust their behavior. On-demand grocery platforms are monitoring workers and calculating metrics on their speed as they fill shopping lists.
In these and many other examples, employers’ business operations and decisions are informed by near-constant collection and analysis of worker data. The COVID-19 pandemic exacerbated this trend, with workers experiencing more invasive forms of monitoring, both inside the workplace (such as tracking social distancing behaviors) and in remote workers’ homes (such as keystroke tracking). In particular, Amazon’s warehouse and delivery workers bore the brunt of skyrocketing demand for delivered goods, with constant surveillance and productivity tracking software pushing the pace of work to an alarming rate and putting workers’ health at risk.
Unfortunately, the data-driven workplace is currently operating in a regulatory vacuum in which workers lack protection from the impacts of data-driven technologies. As a result, workers have little say about what data are collected on them, how employers combine those data with algorithms to make decisions about them, and how these systems affect their jobs and livelihoods. The lack of regulation creates strong incentives for employers to use digital technologies at will, in ways that can directly or indirectly harm workers. Of particular concern is that workers of color, women, and immigrants can face direct discrimination via systemic biases embedded in these technologies and are also most likely to work in occupations at the front lines of experimentation with artificial intelligence.
Without a robust public policy response, it is not difficult to imagine a future in which workers labor in digital sweatshops, micro-managed with no autonomy and under constant pressure to do more; this is already the reality for some workers. But the United States is years behind the European Union and other countries in regulating data-driven technologies. To the extent that the policy discussion has started, it largely focuses on consumers and their data privacy rights and very rarely includes workers. The discussion of technology rights needs to extend into the workplace and explicitly confront the fundamental imbalance in power between workers and the firms they work for.
In this article, we argue for a set of new, 21st-century labor standards to establish worker rights and employer responsibilities for the data-driven workplace. In particular, we propose a comprehensive set of policy principles that can help build a robust regulation regime to ensure that new digital technologies do not harm workers. The principles lay out a vision for labor standards that give workers rights with respect to their data; hold employers responsible for harms caused by their systems; establish guardrails on how employers monitor workers and use algorithms; ensure workers’ rights to organize around technology; guard against discrimination; and establish a strong enforcement regime for worker recourse. We view these rights and protections as the regulatory bedrock upon which to build an economy in which workers are full participants in technological change. Without these rights, the default mode of technology development and deployment will bend toward employers, shifting risk to workers and exacerbating inequality in our society.
Applications and Harms of Digital Workplace Technologies
Digital Technologies in the Workplace
The revolution in big data and artificial intelligence of the past two decades has yielded an array of new tools employers can use to capture and analyze worker data, electronically monitor their workers, and manage them using algorithms (see Kellogg, Valentine, and Christin 2020; Bailey 2022). To be clear, data analytics applied to work processes is not new; for example, Taylorism and scientific management formed the linchpin of mass industrialization (Cappelli 2020). But today, we are seeing employers develop new business models and methods of worker control and productivity management using much more powerful digital systems that have the potential to substantially affect worker outcomes (Zuboff 2019).
We are at just the beginning of both the development and the adoption of these digital technologies, which means the empirical footprint for conducting research is still thin. Moreover, the lack of regulatory oversight and disclosure requirements means that the use of these systems is frequently hidden from workers, policymakers, and researchers (Citron and Pasquale 2014). Nevertheless, we are already able to identify key workplace technologies, common uses by employers, and actual and potential effects on workers (Adler-Bell and Miller 2018; Bogen and Rieke 2018; Kresge 2020; Nguyen 2021).
Data-driven technologies can range from the mundane, such as payroll processing systems and résumé-scanning technologies that identify keywords, to the incredibly complex, such as computer vision detection of human activities and natural language processing of worker conversations (NASEM 2017). Here, we focus on three major applications of digital technologies in the workplace that are currently the center of public policy debates and legislative proposals in the United States: worker data collection, electronic monitoring, and algorithmic management. Although we discuss each in turn, in practice, multiple applications are often integrated into a single technological system.
To start, employers can collect and then analyze an extensive array of data about their workers (Ball 2010; Kresge 2020). Some of it is gathered in the workplace, such as computer activity, location in the building, customer ratings, and smartphone app interactions. Other data, such as social media activity and credit reports, are bought from third parties. With the advent of wearable sensors, employers have partnered with technology vendors and wellness programs to collect more personal biometric and health and wellness data (Ajunwa, Crawford, and Schultz 2017; Hull and Pasquale 2018). Methods of data collection range from active strategies, such as directly surveying workers (and customers), to passive strategies, such as microphones embedded in worker badges. Employers may collect worker data themselves, and they may also contract with an emerging ecosystem of businesses engaged in collecting, processing, and selling worker data.
The growth of people analytics within the human resources (HR) field—using worker data to generate statistical insights to inform managerial decisions—reflects employers’ increasing focus on worker data collection (Bodie, Cherry, McCormick, and Tang 2017; Giermindl et al. 2021). Garr and Mehrotra (2020) identified more than 120 people-analytics vendors as of 2020 that specialize in collecting, aggregating, and analyzing worker data to generate insights about workers. Many vendors integrate and analyze data from technology companies that offer services for core HR functions (e.g., employee surveys, performance evaluation, attendance tracking, and personality and skills assessments), work technologies (e.g., email, calendars, and customer relations management systems), job applicant tracking systems, training platforms, and others.
Electronic monitoring is a distinct form of data collection that entails extensive and often continuous, real-time monitoring of worker behaviors and actions. While not new, electronic monitoring has become more common with the advent of passive data collection technologies such as sensors embedded in workplace equipment and wearables (e.g., wristbands) that can capture a wide range of data on worker location, activities, and interactions with co-workers (Ajunwa et al. 2017; Collier 2018). Likewise, systems that log keystrokes and capture screenshots enable employers to monitor computer and internet activity. Employers also use GPS technologies embedded in vehicles or in workers’ personal smartphones to track their locations while out in the field. More recently, sophisticated monitoring systems based on advances in computer vision are being used to analyze, in real time, video captured by workplace cameras (Matsakis 2019; Yuganthini et al. 2021).
The home health care industry offers a vivid example of workplace monitoring and task recording technology. Since 2016, states have been required to implement a system of electronic visit verification (EVV) for home care services reimbursed under Medicaid (Mateescu 2021; Litwin 2022). EVV implementation varies widely across states and in its degree of invasiveness for workers. In California, the home care worker is required to enter only relevant visit data into an online portal. Other states issue handheld devices, which the worker uses to clock in and out and to record service data. Some states require workers to install an app on their smartphones that tracks their location in real time. In the most invasive version of EVV, states may also opt to include biometric recognition systems, such as facial or voice recognition, to verify the identity of the home care worker or recipient (Metcalf 2018).
Similarly, in the call center industry, basic audio recordings of calls are increasingly being replaced by much more advanced monitoring and performance management systems (Doellgast 2022). One call center technology vendor, Cogito, offers a system that analyzes conversations between call center employees and customers and provides real-time behavioral guidance to workers, coaching them to express more empathy, for example (Simonite 2018; Dzieza 2020). A supervisor dashboard provides a “customer experience score” based on the worker’s performance metrics such as call efficiency and sales conversions.
Finally, employers use algorithmic management for functions such as workforce scheduling, coordination, and direction of worker activities (Lee, Kusbit, Metsky, and Dabbish 2015; Rosenblat and Stark 2016; Jarrahi et al. 2021). In the context of computers, an algorithm is a set of rules in programming code for solving a problem or performing a task based on input data (Cormen, Leiserson, Rivest, and Stein 2009). The basic version of an algorithm resembles a recipe: The algorithm is simply a set of instructions written by the programmer for how a computer should transform data into an output. But recent advancements in artificial intelligence research have resulted in much more complex algorithms that enable computer systems to learn, reason, and interact with humans in their environment (Rahwan et al. 2019). Crucially, these complex algorithms—also referred to as machine learning, deep learning, or neural networks—often become opaque to human programmers (let alone workers or policymakers) as the algorithms adapt to new data (Burrell 2016; Dourish 2016).
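The recipe analogy can be made concrete in a few lines of code. The following is a purely hypothetical sketch (the function name, inputs, and thresholds are invented for illustration, not drawn from any real system): a basic, rule-based algorithm whose instructions are fixed by a programmer and simply transform input data about a worker into a managerial output.

```python
# Hypothetical, recipe-style workplace algorithm: fixed rules, written by a
# programmer, that transform input data into an output. The thresholds here
# stand in for targets an employer would choose.
def performance_flag(scans_per_hour: float, error_rate: float) -> str:
    """Map two productivity inputs to a managerial output string."""
    if scans_per_hour < 120:   # illustrative pace threshold
        return "below pace target"
    if error_rate > 0.02:      # illustrative accuracy threshold
        return "accuracy review"
    return "meets targets"
```

Unlike this transparent recipe, the machine-learning systems described above infer their rules from data, which is precisely why their behavior can become opaque even to their own developers.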
In workplace applications, algorithms transform input data into outputs that can take the form of everything from promotion recommendations and instructions for delivery drivers, to chatbots and semi-autonomous service robots (NASEM 2017). Probably the best-known example of algorithmic management is scheduling optimization systems. These systems draw on a variety of data to predict customer demand, make decisions about the most efficient workforce schedule, and generate worker schedules that can adjust in real time as new data become available. Some systems, such as Percolata, monitor and measure in-store customer traffic and worker activities (Tanaka et al. 2016). The Percolata system then estimates sales productivity scores for each worker and creates schedules based on those scores. Note that scheduling optimization systems can be programmed to incorporate worker preferences or to prevent erratic schedules.
Another example can be found in the building services industry, which is increasingly adopting workforce management systems that rely on mobile apps to manage and track frontline workers. In the janitorial services sector, these systems allow workers to view pay stubs, clock in and out for shifts, and communicate with supervisors. Some systems include GPS to track workers’ presence on a job site and detect rule violations such as late check-ins. More advanced systems rely on algorithms to optimize cleaning routes and assign job tasks to workers (e.g., Janitorial Manager 2022). In the building security industry, complex algorithms analyze data collected through closed-circuit television (CCTV) video cameras and building sensors and automate decisions about when to deploy frontline security guards (Lasky 2019).
Potential Harms to Workers
Although much of the policy discussion about data-driven technologies is currently focused on privacy concerns, the potential harms of new technologies extend well beyond privacy (Giermindl et al. 2021; Slaughter, Kopec, and Batal 2021). In the workplace, researchers are only beginning to identify the full range of actual and possible negative impacts on workers from the diverse set of data-driven technologies being introduced by employers. Some of these harms stem from technology design decisions, but often, they derive from employers’ decisions about how to use the technology—such as when and why to use electronic monitoring, which management decisions to automate, or which productivity benchmarks to rate workers against (Cappelli 2020).
To be fair, data-driven technologies can be used to benefit workers. For example, pushed by unions, the hotel industry has begun to introduce location-identifying “panic buttons” that can be activated by house cleaners to protect them from sexual assault and harassment (Eidelson 2017). Some safety monitoring systems in the construction industry track workers as they walk through job sites and predict in real time the risk of being hit by heavy machinery. If the system determines that an accident is likely, it will alert the worker through vibrations on a wristband and disable the machinery (Oliver 2020). Continuing to develop these types of beneficial applications of data-driven technologies is an important project for unions, employers, and technology developers.
In this article, however, our focus is on identifying the baseline laws and regulations that are needed to prevent negative effects on workers. The following is an initial set of harms identified by researchers and worker advocates and reflected in proposed legislation around workplace technologies.
Work Intensification and Health and Safety Harms
One of the key applications of data-driven technologies is to monitor and manage worker productivity, which is not harmful in and of itself. But when an employer uses electronic monitoring and algorithms to minutely track and relentlessly push workers to achieve greater productivity, negative effects can quickly make themselves felt (Schaupp 2021). In warehouses and distribution centers, handheld or wearable product barcode scanners enable firms to track workers’ scan rates and errors and to send them performance notifications to increase their pace or accuracy (Gutelius and Theodore 2022). When combined with managerial systems that rely on extensive data analytics on worker performance to penalize or fire workers, these systems can often drive workers to increase their pace of work (Dzieza 2020; also see Carré and Tilly 2022).
Work intensification can have direct impacts on workers’ physical health and safety, as evidenced in the high injury rates that have been documented in Amazon’s warehouses (Ockenfels-Martinez and Boparai 2021). More generally, multiple studies have documented the negative, often stress-related health effects of intense levels of electronic monitoring (Ravid et al. 2022; also see O’Brady and Doellgast 2021) and have linked job-related stress to ulcers, cardiovascular disorders, and other negative physical health consequences (Nieuwenhuijsen, Bruinvels, and Frings-Dresen 2010).
Another illustration comes from the public sector, where efforts to streamline and improve access to governmental services can also lead to work intensification and burnout. For example, to keep up with the growing volume of benefit applications, some government agencies have turned to chatbots that use natural language processing technology to answer simple questions or to help people navigate applications (Condon 2019). Other systems automatically process and review digital benefit program applications (Chaney 2020). Given the large volume of work, these systems may not reduce jobs, but instead result in workers handling more complex calls, which has the potential to lead to work intensification and burnout.
De-skilling and Job Loss
Employers can use data-driven technologies to routinize jobs and break them into discrete simplified tasks, accompanied by measuring and monitoring of performance. While the employer’s main goal may be to increase efficiency, the result for workers can be job de-skilling, a reduced scope of work, and increased repetition (Schaupp 2021; also see Levy and Barocas 2018). Algorithmic systems designed to manage workers can separate work tasks, shifting decision-making to programmers and computers while leaving task execution to frontline workers—effectively turning them into human machines that implement tasks dictated by a machine (Cappelli 2020; Jarrahi et al. 2021).
For example, in warehouses, voice-directed picking systems include headsets that give workers step-by-step instructions on how to navigate the facility and which items to pick, and the system requires verbal confirmation of task completion (Gautié, Jaehrling, and Perez 2021). Home care workers have had similar experiences with EVV systems; researchers report that EVV has routinized and de-skilled work tasks and enabled employers to shift work to contingent workers (Mateescu 2021). In the transportation industry, truck and delivery drivers are subject to extensive electronic monitoring through sensors in trucks, dashcams, on-board recorders, and fleet management systems. These systems enable managers to exert fine-grained control over workers by setting precise routes for drivers to follow, as well as setting metrics to evaluate driver performance and challenge workers’ accounts of driving conditions (Levy 2015; Holland et al. 2017).
In some cases, efforts to automate as many work tasks as possible can lead to repetitive jobs focused solely on supporting and training technical systems (Jarrahi et al. 2021). For example, hospitals are increasingly using semi-autonomous robots that transport materials or clean floors, relying on sensors and algorithms to navigate their physical environment (Elish 2016). Workers who previously performed those tasks are relegated to supporting the robots to enable them to function in the complex hospital environment. Task standardization can also lead to partial or wholesale job automation, since the task data gathered in real time from workers can be used to train robots or algorithms to eventually take over (Schaupp 2021). For example, algorithms used in self-driving trucks have been trained using data from hours of monitoring truck drivers (Plus 2021; Viscelli 2022).
Bias and Discrimination
To date, the best-documented harm to workers from data-driven technologies is bias and discrimination based on race, gender, age, disability, and other categories, especially in hiring software used by employers to partially or wholly automate the recruitment, screening, and evaluation of job candidates (Barocas and Selbst 2016; Kim 2017; Bogen and Rieke 2018; Kim and Bodie 2021). The classic scenario is a hiring algorithm that is trained to look for job candidate characteristics that match a company’s current workforce, thereby, intentionally or not, replicating the demographics—often white and male—of that workforce (Ajunwa 2020). Moreover, vendors of hiring technology often integrate pre-employment background screening into their systems to conduct criminal background checks, mine job candidates’ social media accounts, and generate automated scores or recommendations about job candidates (Gilman 2020). Such background check reports are often plagued with errors, which researchers estimate harm thousands of job candidates each year (Nelson 2019).
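The mechanism behind the classic scenario can be sketched in a few lines. In this hypothetical illustration (the function, features, and data are invented, not taken from any real hiring product), a screener that scores candidates by their resemblance to the incumbent workforce necessarily favors whoever looks like that workforce, so any feature correlated with demographics becomes a channel for replicating past hiring patterns.

```python
# Hypothetical sketch: scoring candidates by resemblance to incumbents.
# Features correlated with demographics (school, zip code, hobbies) become
# proxies through which the incumbent workforce's makeup is reproduced.
def similarity_score(candidate: dict, incumbents: list) -> float:
    """Average, over the candidate's features, of the share of
    incumbents who share that feature value (0.0 to 1.0)."""
    return sum(
        sum(1 for inc in incumbents if inc.get(key) == value) / len(incumbents)
        for key, value in candidate.items()
    ) / len(candidate)
```

Even though no protected attribute appears anywhere in the code, ranking candidates by this score systematically disadvantages those who differ from the existing workforce on correlated features.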
Data-driven technologies can also facilitate discrimination by translating racially or otherwise biased customer ratings into worker performance metrics (Rosenblat, Barocas, Levy, and Hwang 2016). Unfortunately, service workers in multiple industries, such as retail, call centers, and ride-share platforms, are increasingly evaluated with customer ratings (Rosenblat and Stark 2016; Wang 2016; Stark and Levy 2018). In the restaurant industry, for example, self-ordering tablets have become ubiquitous and often prompt customers to fill out a satisfaction survey, which can then be converted into a score that employers use to evaluate workers (O’Donovan 2018). Similarly, in the home care industry, marketplace apps enable clients to find and select service providers from a list of worker profiles that display workers’ performance metrics, including customer ratings (Ticona, Mateescu, and Rosenblat 2018). These ratings have a significant impact on which workers will be featured in customers’ searches, and therefore on their likelihood of finding work (Rosenblat et al. 2016).
Finally, women and workers of color may be more likely to experience harm from data-driven technologies because they are more likely to work in low-wage sectors where experimentation with invasive monitoring or algorithmic management appears to be more common (e.g., Vargas 2017). To date, the role of occupational segregation in exacerbating race and gender inequality in exposure to harmful technologies has received less attention, and it represents an important topic for future research.
Contingent Work
As new technologies enable remote monitoring and management of workers, the ability for employers to outsource work to subcontractors, staffing agencies, or platform-based work increases—and with it, the likelihood of workers being misclassified as independent contractors. Outsourcing allows employers to avoid the costs of employing workers directly, such as providing workers’ compensation coverage or health insurance benefits (Weil 2019; Rogers 2020). Meanwhile, workers who depend on platform-based income are excluded from workplace protections and bear the brunt of job insecurity. And even for traditional W-2 workers, new technologies such as automated scheduling software can result in highly variable, unpredictable, on-call schedules for workers (Gleason and Lambert 2014; Kantor 2014).
One of the most substantial technological changes in the grocery industry over the past several years has been the growth of order fulfillment and food delivery services, often provided through on-demand labor platforms that enable high levels of managerial control over worker behavior and performance (Griesbach, Reich, Elliott-Negri, and Milkman 2019; Benner and Mason 2022; Carré and Tilly 2022). For example, Instacart tracks and generates metrics on workers’ accuracy, speed in fulfilling orders, degree to which they follow scripted language in chat conversations with customers, as well as their customer ratings. Workers are penalized for not meeting those metrics, which can result in firing or removal from the platform (Bhuiyan 2020). Another grocery delivery app, Shipt, translates performance metrics into an “effort-based” pay algorithm that obscures how pay is calculated and has been shown to distribute pay inequitably among workers (Lyons 2020).
Similarly, in the transportation industry, ride-share platform companies such as Uber and Lyft rely on extensive monitoring through their apps, enabling the companies to collect data on worker behaviors and to manage workers (typically misclassified as independent contractors) from afar (Rosenblat and Stark 2016). The platforms set the price of the service, receive a percentage of the transaction, and can penalize drivers for canceling or declining dispatches or for poor customer ratings. The impact of this model on wages can be substantial: Reich (2020) estimated that ride-share drivers’ earnings would be 30% higher if they were classified as employees rather than independent contractors.
Suppression of the Right to Organize
Researchers have not yet been able to systematically investigate whether or how employers are using digital technologies to suppress worker organizing. But we do have several documented examples of employers using surveillance technologies to identify workers who are trying to organize a union, as well as predictive algorithms that data-mine social media to identify workers who might be likely to try to organize one (Vogel 2021). For example, Berfield (2015) described how Walmart monitored workers’ social media accounts to identify organizing efforts. Similarly, companies that design hiring systems can incorporate methods to screen out workers who are likely to be sympathetic to unions (Kessler 2020). An example is HireRight, a technology company popular in the retail industry that partnered with a vendor specializing in mining social media data to predict the likelihood of a given candidate becoming a whistleblower (HireRight 2020). Similarly, Upturn recently conducted an analysis of personality assessment tests in automated hiring systems and identified several questions that could effectively screen out pro-union applicants (Rieke, Janardan, Hsu, and Duarte 2021; also see Sullivan 2009). These types of attempts to identify organizing activity are, in and of themselves, an intrusion on the right to organize, but especially so when employers then take steps to stop the organizing by firing or otherwise intimidating workers (Garden 2018).
Loss of Privacy
Workers have significant privacy concerns in their workplaces. Electronic monitoring, for example, can easily stray outside of the workplace through systems that scan social media activity or apps downloaded on workers’ phones that track GPS location data regardless of whether they are on the job (Ajunwa et al. 2017). The risk is that this type of intrusive surveillance uncovers information about workers (e.g., their religion or sexuality) that is intensely private and not relevant to work performance. Such intrusions are especially likely for the growing number of people who are working remotely from their homes, given the broad data capture that is enabled by activity tracking software or video cameras (Ball 2021).
These concerns are particularly visible in the call center industry. For example, the global company Teleperformance uses webcams with a computer vision system that monitors call center workers; if the system detects a work rule violation (such as non-work use of a mobile phone), it can send real-time notifications to a manager (Walker 2021). We previously discussed the stress-related health effects of this intense level of electronic monitoring, but in addition, these systems are conducting often-invasive monitoring of home environments and family members who share the remote worker’s space—a common complaint reported by workers about these systems (Solon 2021).
Another example comes from the construction industry, which is increasingly adopting workforce management systems that rely on geofencing and geolocation technologies. Geofencing software works by setting a virtual boundary around an area using GPS coordinates and detects when a mobile device crosses that boundary. These highly invasive systems operate through apps installed on workers’ mobile phones and can automatically clock workers in and out as they enter and exit the job site, gathering their location histories along the way. Managers can access a dashboard with real-time tracking data and receive alerts, such as when a worker clocks in while outside a designated job site (Maria and Burger 2016). GPS-based monitoring systems can easily extend employers’ ability to monitor workers well beyond the workplace and work activities.
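The boundary check at the heart of such software reduces to a small distance computation. The following sketch is illustrative only (real products add location smoothing, event logging, and reporting, and the coordinates and radius here are arbitrary): it tests whether a device’s GPS fix falls within a circular geofence around a job-site center.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius in meters

def inside_geofence(lat: float, lon: float,
                    site_lat: float, site_lon: float,
                    radius_m: float) -> bool:
    """True if the device fix (lat, lon) lies within radius_m of the
    job-site center. Uses an equirectangular approximation, which is
    adequate at job-site scale (tens to hundreds of meters)."""
    x = math.radians(lon - site_lon) * math.cos(math.radians((lat + site_lat) / 2))
    y = math.radians(lat - site_lat)
    return math.hypot(x, y) * EARTH_RADIUS_M <= radius_m
```

The simplicity of the check is the point: once an app reports continuous GPS fixes, nothing in the technology itself limits the evaluation to work hours or work locations; that boundary is a policy choice, not a technical one.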
Loss of Autonomy and Dignity
Finally, workers stand to lose their autonomy and dignity when employers use data-driven technologies to micromanage and monitor every activity and remove all room for discretion on the job. Although not as immediate as some of the harms discussed above, the danger of dehumanization at work in the era of artificial intelligence is already being reported by workers (Milner and Traub 2021). A visceral example is the potential public humiliation from having one’s productivity score compared to that of other workers on leaderboards (Lopez 2011). But, ultimately, the concern about loss of autonomy is about lost opportunities. Workers want and deserve to have agency in troubleshooting, innovating best practices, and learning new skills; the quashing of that very human desire is part of what is at stake in the debate about new technology.
For example, in the hospitality industry, hotels are increasingly adopting service optimization systems. When guests check out of their room or request services, the system automatically delegates the task to a worker based on their proximity or workload. Through a smartphone or tablet, workers receive notifications and an ordered task list, which can change throughout the day. These systems can lead to incoherent task prioritization, unrealistic productivity expectations, work intensification, and importantly, a significant loss in autonomy and agency for the workers (Reyes 2018).
Similarly, some government agencies are using technologies that automate decision-making for social services, such as identifying priorities for child protective services (Hurley 2018). Scholars and advocates are concerned about the potential for algorithmic harms against the public, especially in communities of color (Eubanks 2018). But a growing body of research also points to potential risks that these systems can pose for workers, such as loss of discretion in decision-making and being held responsible for negative outcomes for clients (Keddell 2019).
Unfair treatment deserves special mention in our discussion of harms to dignity. We previously mentioned bias and discrimination, but new technologies are vulnerable to other forms of unfair outcomes. An important example is error in technological systems; when these systems fail (as they invariably do), workers may be unjustly blamed and held accountable (Elish 2016). More generally, we are only beginning to understand the potential for algorithms to generate unfair rankings or predictions about workers, or for monitoring systems to gather incomplete or misleading information about worker behavior. In an at-will system of employment law, workers currently have few avenues of recourse when employers make consequential decisions, such as hiring or firing, based on such systems (Gordon 2021).
Downstream Effects on Wages and Economic Mobility
Beyond the immediate harms that are emerging from data-driven technologies lies the potential for downstream impacts on workers’ wages and economic mobility (Acemoglu and Restrepo 2019). An obvious example is the loss in wages when a job candidate is unfairly disqualified by an automated hiring system. Wage theft is another example, as when intense productivity quotas discourage workers from taking the paid rest breaks to which they are legally entitled (Tippett, Alexander, and Eigen 2017). But other effects on wages can be more indirect. For example, when a job is de-skilled and routinized by advanced technologies, it is effectively turned into a low-wage, dead-end job (also potentially contributing to greater income polarization if the number of high-tech jobs is growing at the same time). Similarly, an algorithmic management system may make recommendations to an employer about promotions in ways that hurt the long-term career mobility of a worker. Data-driven technologies can also indirectly serve as a gatekeeper to the labor market, if qualified workers have limited technological literacy or lack access to broadband internet (Gonzales 2016).
Many other scenarios exist; the task ahead for researchers will be to analyze both immediate and longer-term harms to workers, as firms’ technology adoption accelerates in the coming years and as policymakers increasingly require rigorous documentation to guide their efforts to protect workers.
A Framework for Worker Technology Rights
The Regulatory Vacuum
The emerging suite of data-driven technologies in the workplace—and the early signs of potential harms to worker welfare—raise critical questions. Will these technologies be used to benefit and empower workers, help them thrive in their jobs, and bring greater equity to the workplace? Or will they be used to de-skill workers, extract ever more labor, increase race and gender inequality, and suppress the right to organize? Who is going to be at the table when these decisions are made, and, in particular, what role will workers play? In other words, who is going to govern technology? And what values will we as a society choose to prioritize in that governance?
The cornerstone of governing workplace technologies will be laws and regulations, as well as collective bargaining agreements in unionized workplaces. But currently in the United States, employers are introducing untested data-driven technologies with almost no regulation or oversight. Workers largely do not have the right to know what data are being gathered on them or whether the data are being shared with others. They do not have the right to review or correct the data. Employers are not required to notify workers about any electronic monitoring or about the algorithms on which they base decisions, and workers do not have the right to challenge those decisions.
The United States lags significantly behind the European Union and other countries in regulating data-driven technologies. For example, the European Union was a leader in adopting the General Data Protection Regulation (GDPR), a wide-ranging data protection law regulating the collection and processing of data from individuals (including workers) that has become the global model for many other countries (European Parliament and Council of the European Union 2016). The European Union is also in the process of finalizing a comprehensive artificial intelligence law that will be the first of its kind (European Commission 2021) and is considering a directive on platform work that includes regulation of algorithmic management. 4 In the United States, by contrast, only a few narrow data privacy laws have been passed at the state level, all focused on consumers. And although multiple privacy bills have recently been introduced at the federal level, the timeline to passage remains highly uncertain (Klosowski 2021).
Meanwhile, legal analyses of existing employment and labor laws have concluded they are wholly inadequate to the task of protecting workers in the data-driven workplace (e.g., Barocas and Selbst 2016; Ajunwa et al. 2017; Kim 2017; Bales and Stone 2020; Rogers 2020). In some cases, new laws will need to be written from scratch, for example, to establish a general right to worker privacy or to establish guardrails on the use of algorithms (Wachter and Mittelstadt 2019). 5 Similarly, employers’ electronic monitoring of workers is largely unregulated in federal law. Some states have scattered privacy protections, but these are typically focused on specific types of data (e.g., biometrics) or simply institute a weak notice and consent model (e.g., when employers monitor worker communications). In other cases, existing laws need substantial updating for the data-driven workplace, which is the case for anti-discrimination laws if they are to meet the challenge of addressing discriminatory harms stemming from algorithmic hiring and promotion tools (Kim and Bodie 2021). Similarly, our health and safety laws do not have sufficient standards to protect workers from the psychological stress, repetitive motion, and fatigue-related injuries that can result from productivity monitoring systems (Scherer and Brown 2021).
Toward a Policy Framework: Principles
For the majority of workers who are not members of unions, the profound asymmetry of power in the US workplace means they have little to no say over the policies and decisions that affect them in their day-to-day work lives. In particular, notions of consent to new technologies or the ability to find better conditions elsewhere are not meaningful or available to low-wage workers, women, and workers of color, who all face a labor market that is often dominated by employers competing on the basis of cutting labor costs. Employment and labor laws have long attempted to balance this asymmetry of power by instituting baseline labor standards and giving workers a mechanism for voice; those laws need to be strengthened and updated for the 21st-century workplace and its technologies. Technology is not inherently good or bad, but neither is it neutral; the role of workplace regulation is to ensure that technologies serve and respond to workers’ interests and to prevent negative impacts.
In what follows, we outline a set of nine policy principles that can help build a robust regulation regime establishing worker rights and employer responsibilities for the data-driven workplace. 6 They include regulations of the technologies themselves as well as rules about when, how, and for what purpose employers can use them in the workplace. We argue that new labor standards for digital technologies should first and foremost be embedded in employment and labor laws. Consumer-focused laws are insufficient for fully protecting workers because they are largely focused on privacy—and as described above, workers’ concerns about new technologies extend far beyond privacy to include impacts on wages, health and safety, working conditions, job stability, and race and gender equity.
1. Goals and Scope
The rapid pace of innovation in the use of data collection, electronic monitoring, and algorithms affects every stage of the employment life cycle and requires broad, ambitious standards set in law. Full coverage of both workers and employers should be the governing principle, as should attention to the full range of potential harms for workers. Specifically:

Public policy should establish new rights and protections to ensure worker dignity and welfare in the use of data-driven technologies in the workplace. These standards should give workers agency over new technologies, promote health and safety, protect the right to organize, and guard against discrimination and other negative impacts.

All workers deserve protection. New rights and protections should cover all workers, including employees, independent contractors, job applicants, and remote workers. Representatives from unions or other worker organizations should be able to access these rights and protections on behalf of workers.

All employers should be held to these standards. Employers’ obligations should apply to their labor subcontractors, as well as to vendors that provide technology or technology services.

All employment-related decisions that are made or assisted by data-driven technologies should be regulated. Employer decisions based on digital technologies should be regulated whenever they impact workers, including effects on earnings, benefits, and hours; race and gender equity; hiring, firing, promotion, discipline, and performance evaluation; job assignments, job content, and productivity requirements; workplace health and safety; and the right to organize.
2. Disclosure
Full disclosure and transparency are prerequisites for effective regulation. Currently, however, the biggest obstacle to regulating data-driven technologies is that their use is largely hidden from both policymakers and workers, in what has been called the “black box” problem of digital technologies (Citron and Pasquale 2014). Without disclosure, job applicants will not know why a hiring algorithm rejected their résumé, truck drivers will not know when they are being tracked by GPS, and workers will not realize their health plan data are being sold. Therefore:

Employers should provide notice to workers in a clear and accessible way regarding all data-driven technologies in the workplace. Notices should include an understandable description of the technology, the types of data being collected, and the rights and protections available to workers. Employers should also be required to file notices with the relevant regulatory agencies (e.g., those enforcing wage and hour, health and safety, and anti-discrimination laws).

Employers should notify workers when they use electronic monitoring. This notification should include a description of which activities will be monitored, the method of monitoring, the data that will be gathered, and the purpose for monitoring and why it is necessary. Notice should document how employment-related decisions could be affected.

Employers should notify workers when they use algorithms that might affect workers’ jobs or working conditions. This notification should include an accessible description of the algorithm, its purpose, the data it draws on, the type of outputs it generates, and how the employer will use those outputs in its decision-making.
3. Worker Data
Employers can collect or buy vast amounts of data on their workers, and they can share it or sell it without restriction. But, just like consumers, workers deserve legal standards reining in employers’ collection and use of their data:

Employers should collect worker data only when it is necessary and essential for workers to do their jobs. Employers should minimize their collection of worker data, defined broadly to include personal identity information, biometric and health information, any data related to workplace activities (including productivity data and algorithmic inferences), and online information including social media activity. Unlimited collection of worker data unnecessarily exposes workers to risk, including data breaches and employers’ misuse of personal information.

Workers should have the right to access, correct, and download their data. Workers should receive all relevant information regarding their data, including why and how it was collected, if it was inferred about the worker, and whether it was used to inform an employment-related decision. Employers should be responsible for timely correction of any inaccurate data.

Worker data should be safeguarded and protected from misuse. In particular, employers should not be allowed to sell or license worker data to third parties under any circumstances—otherwise the incentives to violate worker privacy by selling worker data for monetary gain are too high. Individual workers’ biometric and other health data should never be shared unless required by law.
4. Use of Electronic Monitoring
Electronic monitoring is a highly invasive technology because it allows for real-time and continuous capture of worker activities and behavior. As a result, the potential for misuse of electronic monitoring by employers is significant—such as using biased or incomplete monitoring evidence to discipline someone. Therefore:

Employers should use electronic monitoring only for narrow purposes that do not harm workers. Electronic monitoring should be used only if strictly necessary to enable core business tasks, to protect the safety of workers, or when needed to comply with legal obligations. Monitoring should affect the smallest number of workers possible, should collect the least amount of data necessary, and should be the least invasive means for accomplishing its purpose (Bottomley 2020). Productivity monitoring in particular should be subject to higher scrutiny and reviewed by regulatory agencies overseeing workplace health and safety to ensure it is not used to increase work speeds to dangerous levels.

Employers should respect workers’ privacy in using electronic monitoring. Intrusive surveillance in the workplace can capture information about workers that is private and not relevant to performance. Workers should not be monitored in the break room, in sensitive areas such as the restroom, or when off duty. Any GPS or other tracking devices should be disabled when the worker is off the job.

Electronic monitoring should not use high-risk technologies such as facial recognition. Some new monitoring technologies are too risky to introduce in the workplace; for example, facial-recognition systems have been documented to have high error rates and racial bias (Buolamwini and Gebru 2018). Employers should be prohibited from incorporating unproven, questionable, or particularly invasive technologies into their electronic monitoring systems.

Electronic monitoring should not be used as a substitute for human decision-making. Even in the best cases, electronic monitoring systems can capture only a partial picture of a given event or set of actions; in the worst cases, that picture is misleading or wrong. Employers should therefore be prohibited from relying exclusively or even mainly on data from electronic monitoring when making consequential decisions such as hiring, firing, discipline, or promotion. Instead, employers should be required to conduct independent, human-driven assessments of workers based on other information sources. Workers should be given full documentation when an employer makes a consequential decision informed by electronic monitoring. Workers should also be able to challenge that decision.
5. Use of Algorithms
The explosion in algorithmic management tools creates significant risk for workers; many of these technologies are opaque, untested, and being used by employers with little attention to or understanding of their potential harms for workers. At the same time, the stakes for workers are high when decisions such as hiring and firing are being made about their work lives. Therefore:

Employers should not use algorithms that harm workers’ health, safety, and well-being. Employers should be responsible for ensuring that any employment-related decisions assisted by an algorithm are fair, reasonable, and do not harm workers. Productivity algorithms in particular should be subject to higher scrutiny and reviewed for potential harms by regulatory agencies that oversee workplace health and safety.

Employers should not use algorithms to make irrelevant or unfair predictions about workers. The marketplace has seen a spate of “snake oil” algorithms making what turn out to be questionable predictions about workers (Narayanan 2019). Employers should be prohibited from making predictions or inferences about a worker’s traits and behaviors that are unrelated to their job responsibilities. Similarly, employers should not be able to use algorithms to predict or make judgments about a worker’s emotion, personality, or health.

Employers should not use high-risk algorithmic technologies such as facial recognition or expression analysis. Employers should be prohibited from using algorithms that incorporate unproven, questionable, or particularly invasive technologies.

Algorithms should not be used as a substitute for human decision-making. The growing complexity of algorithmic systems means that even their developers may not understand how they arrive at conclusions—let alone the employers deploying these systems (Burrell 2016). Employers should therefore be prohibited from relying exclusively, or even mainly, on algorithms when making consequential decisions such as hiring, firing, discipline, or promotion. Instead, humans should have a substantial and meaningful role in the decision, drawing in other sources of information. Workers should be given full documentation when an employer makes a consequential decision assisted by an algorithm. Workers should also be able to challenge that decision.
6. Discrimination
Growing evidence suggests that data-driven technologies carry significant risks of replicating and exacerbating discrimination against workers on the basis of race, gender, age, disability, and other characteristics. The “black box” nature of many of these technologies—and their use for consequential decisions such as hiring and promotion—means that regulatory scrutiny needs to be especially high. The following is adapted from the Leadership Conference on Civil and Human Rights (2020), expanded to the full range of workplace applications:

Data-driven technologies should not discriminate against workers based on protected characteristics. Policymakers should make clear that anti-discrimination laws apply to all workplace data-driven technologies. In particular, the use of data-driven technologies with a disparate impact should trigger the same level of scrutiny as any other discriminatory employment practice.

Removing protected characteristics from data-driven technologies should not give employers a free pass. The fact that an employer does not use protected characteristics such as race or gender in its algorithm or data system does not mean that the technology cannot have a disparate impact. Employers should still be required to test for disparate impacts and mitigate any harms.

Policymakers should update existing regulations on worker assessment tools. Data-driven technologies in worker assessment tools should measure only traits that have a logical and explainable relationship to the job at hand. They should not use mere correlation to make judgments, inferences, or predictions about a worker’s or job applicant’s ability to perform the job.
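The disparate-impact testing described above can be made concrete. One widely used screening heuristic (offered here as an illustration, not a standard endorsed in this article) is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if any group’s selection rate falls below 80% of the highest group’s rate, the tool warrants closer scrutiny. A minimal sketch in Python, using hypothetical group names and counts:

```python
# Illustrative four-fifths (80%) rule check for adverse impact in an
# automated hiring tool's outcomes. Group labels and counts are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (number selected, number of applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Return the set of groups whose selection rate falls below `threshold`
    times the highest group's rate (a common adverse-impact screen)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g for g, r in rates.items() if r < threshold * top}

# Hypothetical audit of an algorithmic screening tool's pass-through rates
audit = {
    "group_a": (60, 100),  # 60% selected
    "group_b": (45, 100),  # 45% selected; 0.45 / 0.60 = 0.75 < 0.8, flagged
}
print(four_fifths_flags(audit))  # -> {'group_b'}
```

A flag from a screen like this is a trigger for investigation and mitigation, not proof of discrimination; the article’s broader point is that employers should be obligated to run such tests and act on the results.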
7. Organizing and Bargaining
Across the country, especially in low-wage industries, workers are increasingly voicing their frustration with excessive monitoring and algorithmic management in their workplaces. They should be able to organize around these issues without retaliation, and, when represented by unions, be able to bargain over them. Specifically:

Labor organizations should have the right to bargain over employers’ use of data-driven technologies. Federal labor law requires employers to bargain with worker representatives over the terms and conditions of employment. Data collection, electronic monitoring, and algorithmic management all affect the terms and conditions of employment. Unions should have access to the information necessary to fully understand the nature, scope, and effects of data-driven technologies used by the employer, and the employer should be required to bargain in good faith over them (Bodie et al. 2017; Rogers 2020).

Even when they are not represented by a union, workers should have the right to organize around the use of data-driven technologies in their workplace. When workers protest a company’s collection of their data, question the decisions made about them by algorithms, or seek to learn more about data practices, labor laws should be understood to protect this collective activity.

Employers should not use digital technologies to identify, monitor, or punish workers for organizing. Monitoring workers who are engaging in organizing activities has long been held to violate the law for its chilling effects. Employers should not engage in surveillance of workers when they are meeting with union representatives or discussing workplace problems. Efforts to screen workers using electronic monitoring or predictive algorithms for their sympathy with unions should also be recognized as illegal.
8. Impact Assessments
The novel and inscrutable nature of many data-driven technologies means that their impacts on workers are not self-evident. But waiting to discover harms after an algorithm or data system has already been implemented is not fair to workers. These technologies should be thoroughly vetted and made safe for the workplace before they are introduced. Specifically:

Data-driven technologies should be continuously evaluated and harms mitigated. Employers should be required to audit their technologies by conducting rigorous impact assessments, both prior to implementation and throughout the life cycle of the technology (Reisman et al. 2018). They should be required to address any risks that are identified and be held legally liable for any harms caused by their technologies. Employers should also be required to submit impact assessments to the relevant regulatory agencies, which should have the right to halt the use of harmful systems.

Impact assessments should evaluate the full range of potential harms to workers. These include discrimination, harms to mental and physical health and safety, loss of privacy, and negative economic impacts.

Workers should have a role in impact assessments and have the ability to challenge them. Workers have significant and useful knowledge about a company’s production processes and how technology actually works on the ground. They (and their unions) should be consulted in all stages of an impact assessment and be able to review and give feedback. They should also be able to dispute the final assessment with the relevant regulatory agencies.
9. Enforcement
Enforcement is the lifeblood of laws and regulations; without it, the promise of legal rights is hollow. The asymmetry of power and information between workers and employers in the use of data-driven technologies is pronounced, and the incentives for employers to misuse opaque technologies are strong. Therefore:

Regulatory agencies should play a robust role in enforcing workplace technology standards. Workers should be able to submit complaints about employer non-compliance to the relevant regulatory agencies. In turn, those agencies should respond to each complaint, apply penalties when warranted, and initiate workplace-wide investigations when needed. When technologies are found to harm workers, agencies should have the authority to require that employers mitigate the harms or to halt the use of systems that cannot be made safe.

Regulatory agencies should have the authority to establish additional rules and standards to respond to rapid developments in existing and new technologies introduced in the workplace.

Workers should have a private right of action to sue employers for any violations of their technology rights and protections. Employers should also be prohibited from retaliating against workers for enforcing their rights.
Conclusion
We have argued that employers’ growing use of data-driven technologies in the workplace poses significant risks to workers and requires the creation of a new set of labor standards in employment and labor laws. These new standards must be bold, comprehensive, and continuously updated to respond to the rapidly changing terrain of workplace technologies and the potential harms workers face from them. 7
Legal rights and protections, however, will not be enough to ensure that technology benefits, rather than harms, workers. For example, workers should receive the training needed to grow with their jobs and participate fully in technological change. Government staff need the skills and adequate resources to provide oversight and enforcement. Public R & D funding should be leveraged and increased to incentivize the development of technology that works alongside and supports workers. The public sector itself must become a model for accountable technology adoption (Ada Lovelace Institute, AI Now, and Open Government Partnership 2021). And the United States must build a robust governance regime to regulate the designers, developers, and producers of artificial intelligence and other new technologies (Negrón 2021).
Ultimately, workers should fully participate in decisions on which technologies are developed, how they are used in the workplace, and how the resulting productivity gains are shared. This participation need not and should not be anti-innovation, because workers have a wealth of knowledge and experience to bring to the table. Dehumanization and automation are not the only path. With strong worker protections in place, new technology can be put in the service of creating a vibrant and productive economy built on living-wage jobs, safe workplaces, and race and gender equity.
Footnotes
Acknowledgements
We are grateful to the workers, unions, and other worker organizations who shared their experiences with us, and to the participants of a working group in California that contributed significant expertise to the policy principles presented here.
This article is part of an ongoing ILR Review Special Series on Novel Technologies at Work and part of the Review’s Policy Paper Series.
The research in this article was supported by grants from The James Irvine Foundation and the Ford Foundation. This article is based on the authors’ longer policy report, “Data and Algorithms at Work: The Case for Worker Technology Rights,” UC Berkeley Labor Center (November 2021).
For information regarding the data and/or computer programs used for this study, please address correspondence to
1
Throughout this article we use the terms “digital technologies” and “data-driven technologies” interchangeably when referencing the wide range of technologies that gather, process, analyze, and transform data into outputs such as rankings, predictions, decisions, and machine-based actions.
2
In this section, we synthesize recent findings in the research literature on technological change in the workplace, as well as our own ongoing research between 2018 and 2022. During this time, we conducted extensive secondary research on emerging workplace technologies, including analysis of technology vendor materials and patents. We also conducted dozens of interviews with technology and labor experts, covering current and emerging technologies in a wide range of industries, as well as several focus groups with workers. During the course of our policy discussions with unions and worker centers, we identified numerous technologies profiled in this article. Finally, we conducted participant observation in several dozen conferences and events with technologists, worker and social justice advocates, and academics.
3
Furthermore, given the well-documented racial bias in the criminal justice system, even accurate background checks can perpetuate racial discrimination and labor market exclusion (Alexander 2010).
4
5
6
We developed these principles through extensive policy research, analyzing and drawing on proposals and policy concepts developed by lawyers, academics, and worker advocates in the United States, Europe, and elsewhere (e.g., European Parliament and Council of the European Union 2016; Ajunwa et al. 2017; Alder-Bell and Miller 2018; Reisman, Schultz, Crawford, and Whittaker 2018; Georgetown Law Center on Privacy & Technology 2019; ACLU 2020; Milner and Traub 2021; Slaughter et al. 2021; UNI Global, n.d.). We also received significant input and feedback from worker advocates, privacy experts, and employment and labor law scholars, in part through an informal working group convened by the UC Berkeley Labor Center from 2019 to 2022.
