Abstract
The European Commission proposed a Directive on Platform Work at the end of 2021. While much attention has been placed on its effort to address misclassification of the employed as self-employed, it also contains ambitious provisions for the regulation of the algorithmic management prevalent on these platforms. Overall, these provisions are well-drafted, yet they require extra scrutiny in light of the fierce lobbying and resistance they will likely encounter in the legislative process, in implementation and in enforcement. In this article, we place the proposal in its sociotechnical context, drawing upon wide cross-disciplinary scholarship to identify a range of tensions, potential misinterpretations, and perversions that should be pre-empted and guarded against at the earliest possible stage. These include improvements to ex ante and ex post algorithmic transparency; identifying and strengthening the standard against which human reviewers assess algorithmic decisions; anticipating challenges of representation and organising in complex platform contexts; creating realistic ambitions for digital worker communication channels; and accountably monitoring and evaluating impacts on workers while limiting data collection. We encourage legislators and regulators at both European and national levels to act to fortify these provisions in the negotiation of the Directive, its potential transposition, and in its enforcement.
Introduction
On 9 December 2021, the European Commission published a proposal for a Directive on improving working conditions in platform work—hereinafter the ‘Platform Work Directive Proposal’ or ‘PWD Proposal’ 1 —following from earlier commitments to improve the labour conditions of platform workers in the von der Leyen Commission Political Guidelines. 2 This article attempts to critique and suggest improvements to its provisions by analysing them in light of a cross-disciplinary range of literature concerning platforms, labour, digital technologies, and their regulation.
Beyond misclassification
A fundamental driver of platform work quality deficits is employment misclassification. Many persons performing platform work are required, in the terms and conditions to which they must agree before beginning work, to be classified as self-employed, not employees. Nevertheless, their work is often directly or indirectly managed in ways that many analysts have argued, and some Member States have judged, to be consistent not with self-employment but with an employment relationship. 3 However, the task of addressing this problem at EU level is complicated by the fact that the conditions determining the employment status of a working person vary between Member States, and by the political challenges of reaching consensus on a unified, wider definition of employee that would incorporate vulnerable self-employed individuals. 4 Seemingly recognising that misclassification lies at the core of platform work quality challenges, the Commission proposes not to unify the definition of employment, but to empower those performing platform work with a rebuttable presumption of employment status. 5 Without changing national employment definitions, this initiative thus takes particular aim at the difficulties of obtaining information on and demonstrating work organisation in the platform economy. 6
However, at least some persons performing platform work are likely to be truly self-employed, and digital labour platforms’ procedures and characteristics drive work quality deficits even for those individuals. 7 Addressing the misclassification problem does not remove all the problems and imbalances in platform work. These include:
The Commission, aware of these issues and of their significance for persons performing platform work irrespective of their employment status, set out provisions to address them in Chapter III of the PWD Proposal, under the heading ‘algorithmic management.’ It is those provisions that this article focuses on analysing.
On the whole, these provisions are well-considered. The main broad criticism that can be, and has been, levelled against them is that they risk giving algorithmic management the blessing of legislation without sufficient debate around whether or in what form it is or should be permissible at all. 14 However, in other ways, the Commission's drafting clearly borrows from, refines, and in most cases substantially improves previous rounds of EU regulation of digital technologies.
Despite these improvements, the devil remains in the detail. Below, we consider how the rights the Directive proposes to provide persons performing platform work with respect to algorithmic management may not be fully realised in practice, given the existing landscape of platform design, technological characteristics, and existing regulatory frameworks and challenges. In four thematic categories drawn from the proposal—transparency; human oversight; information and consultation; communication and monitoring—we critique the provisions and illustrate how the proposed Directive or its national transpositions could reduce the scope for platforms to engage in strategic misinterpretations or circumventions of the law, and improve the odds of achieving its core policy aims.
Transparency
The PWD Proposal contains a range of transparency measures throughout the text, which can broadly be split into ex ante and ex post measures according to whether they apply before or after the implementation of monitoring systems or the taking of automated or semi-automated decisions.
Ex ante measures
The proposal requires digital labour platforms to, by the first working day, provide workers with a concise, clearly written, transparent, intelligible, and easily accessible document of information concerning automated monitoring systems and automated decision-making or support systems. 15
Automated monitoring
Regarding automated monitoring systems, these need only be declared, alongside ‘the categories of actions monitored, supervised or evaluated by such systems, including evaluation by the recipient of the service’. Such a provision appears only to provide rights beyond existing data protection law if a conservative view of the current legislation is taken. Data protection law requires prior notification of the purposes of data processing, 16 and broad transparency around the categories of personal data being processed. However, as the GDPR only explicitly requires categories to be provided ex ante ‘where personal data have not been obtained from the data subject’, 17 the PWD Proposal's requirement to notify individuals of categories (of actions rather than personal data) could be seen to strengthen the law. However, this would require a very limited reading of the GDPR requirements, which the European Data Protection Board claims only omit notification of categories where the data subject would have been aware of them already because of an explicit act of collection, 18 as opposed to much technological monitoring, which is passive and hard to spot or exert agency over. 19 Similarly, it could be argued that the PWD Proposal would require the declaration of automated systems with a wider scope than the GDPR, which only explicitly requires the declaration of ‘the existence of automated decision-making, including profiling’ falling within the Article 22 conditions of decisions or measures that are significant and based solely on automated processing. 20 However, this again would be a weak reading of the GDPR requirements, as the disclosure of such systems (as opposed to the elaboration of their logic) seems required to discharge the obligations of transparency around processing purposes and/or categories of data.
Consequently, while the PWD Proposal may give a firmer basis for data protection enforcement, it adds little to a robust reading of existing provisions. 21
Automated decision-making
Regarding automated decision-making systems, however, the proposal additionally requires the ‘main parameters’ of the system to be declared, together with their ‘importance’ and the ways the platform worker's personal data influence the decisions, as well as the ‘grounds’ for a subset of especially significant decisions including refusal of remuneration, termination, or other similarly significant decisions. 22
This provision extends upon previous algorithmic transparency measures, particularly those in the GDPR, in several ways. Firstly, it explicitly includes decision-support systems. This is in contrast to the GDPR, where the automated decision measures are understood by regulators only to encompass systems where human oversight is merely a ‘token gesture’. 23 The extent to which a system is just supporting a decision is hard for a data subject to know or prove, and a legal test requiring a sole basis for a decision raises significant, difficult-to-resolve practical issues in situations where decisions have multiple stages, or inconsistent or varied oversight. 24 Secondly, while in data protection law there is a test to see, in effect, whether a decision significantly affects an individual, in the PWD Proposal the test is whether it significantly affects that individual's working conditions, which is arguably easier to demonstrate and anticipate than are effects on persons themselves, which can vary across individuals. 25
Main parameters: legislative backdrop
The term ‘main parameters’ (with regard to algorithms) is rapidly growing in use within the EU legislator's toolkit, most recently and relevantly in three pieces of legislation related to digital markets. In both the Platform to Business Regulation (P2B Regulation) and the Omnibus Directive (an update to EU consumer protection law), certain traders and search engines are subject to transparency requirements regarding the main parameters determining ranking, with the notion of ‘parameters’ defined in recitals as ‘general criteria, processes, specific signals incorporated into algorithms or other adjustment or demotion mechanisms used in connection with the ranking’. 26 In the Digital Services Act, online platforms presenting advertising must provide meaningful information about the main parameters used to choose adverts, and those using recommender systems must set out the main parameters and any options available for users to change them in clear and intelligible language, including the most significant determining criteria and the reasons for the relative importance of these parameters. 27 Very large online platforms must further indicate in an ad repository any parameters intended to include or exclude any intended groups of recipients from seeing advertisements. 28
The European Commission has drafted guidelines to assist providers with the interpretation of the ranking transparency obligations in the P2B Regulation. 29 This interpretation guide is specific to ranking and search engine systems—so cannot be directly analogised to the situation of algorithmic decision-making and support in the platform work sector—and is not legally binding itself, but does indicate the Commission's thinking as to the meaning of this term. The guidance encourages providers to highlight parameters that may be ‘unexpected’ or ‘useful’, 30 although notes that at a general level, the main parameters should highlight the most important aspects, rather than focus on what users can influence. 31 The guidance provides few insights into how to explain systems that are based on user data and which profile users in complex ways. It provides an example of a system that uses 10,000 variables to characterise business users and influence the ranking system; however, it also states that providers should give ‘factual insights’ into the impact of personalisation in general rather than ‘overburdening or confusing’ users by listing variables. 32 The guidance is unclear as to what these ‘factual insights’ would be. Relatedly, the guidance contains a single, particularly unclear and largely unhelpful paragraph on machine learning, characterising it both as a parameter itself in ranking, and as a method by which other parameters are changed. 33 It is unfortunate, therefore, that this guidance provides little practical help to understand ‘main parameters’ in the platform work context.
The limits of explanation through parameters
Explaining algorithmic systems has occupied a great deal of scholarly and practitioner attention in recent years. Much of the technical literature on explanations of algorithmic systems has focused on ex post explanations of specific model outputs. These aim to explain a particular output of an algorithmic system in relation to a specific case (a ‘local’ explanation), rather than the system's general behaviour (a ‘global’ explanation). Local explanations might help explain why this particular worker was evaluated in a particular way given their unique circumstances, whereas global explanations explain general patterns across multiple different inputs and outputs. 34
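To make the distinction concrete, the sketch below implements a deliberately simple, hypothetical linear scoring model (the feature names and weights are invented for illustration and do not reflect any real platform's system). Its global explanation ranks the model's parameters by importance for all workers alike, while its local explanation decomposes one worker's score into per-feature contributions.

```python
# Hypothetical linear worker-evaluation model. Feature names and
# weights are invented purely to illustrate explanation types.
weights = {"on_time_rate": 0.7, "cancellations": -0.6, "customer_stars": 0.4}

def score(worker):
    """Overall evaluation: a weighted sum of the worker's features."""
    return sum(weights[f] * worker[f] for f in weights)

def global_explanation():
    """'Main parameters': features ranked by coefficient magnitude.
    The same for every worker; describes the system's general behaviour."""
    return sorted(weights, key=lambda f: abs(weights[f]), reverse=True)

def local_explanation(worker):
    """Per-feature contributions to this particular worker's score:
    why this worker received this evaluation, given their circumstances."""
    return {f: round(weights[f] * worker[f], 2) for f in weights}

worker = {"on_time_rate": 0.9, "cancellations": 0.2, "customer_stars": 0.5}
print(global_explanation())
# → ['on_time_rate', 'cancellations', 'customer_stars']
print(local_explanation(worker))
# → {'on_time_rate': 0.63, 'cancellations': -0.12, 'customer_stars': 0.2}
```

Note that the global explanation here contains no information about any individual, while the local explanation is meaningless without the specific worker's data—which is why the two serve different transparency functions.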
The types of information required in Article 6(2) of the PWD Proposal seem to imply global rather than local explanation. They are to be provided on the first working day and are generic to all workers, so cannot easily be provided in advance of a specific decision moment. 35
Faithful global ‘main parameter’-style explanations cannot, however, always be obtained for every model or system.
Firstly, such explanations typically require constraints on just how complex a model can be. 36 Simple models can be easy to explain. For example, regression analysis is a form of machine learning used by many social scientific disciplines specifically to explain phenomena by looking at the size, direction and significance of the model parameters the regression process calculated. 37 However, the equivalent model parameters in a neural network, such as synapse weights and biases, make no sense on their own, as each is only one part of the multiple layers of transformation data go through before reaching a result or a classification. The synergistic and non-linear relationships that result can mean that simple parameter explanations are misleading. 38 In some cases, this problem may be avoided by constraining a model to minimise these types of relationships, an approach which does not necessarily come at a huge cost to model performance. There are theoretical reasons to believe that simpler but similarly well-performing models often exist and are overlooked, although finding them can be computationally intensive. 39 However, in some fields, it will be difficult to avoid systems sufficiently complex to resist a useful characterisation through ‘main parameters’.
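The contrast can be sketched in code. In the hypothetical toy models below (all weights invented for illustration), the linear model's coefficients can be read off directly as ‘main parameters’, each with a size and direction of effect. In even a tiny two-layer network, by contrast, every input passes through shared non-linear hidden units, so no individual weight describes a feature's overall effect, and that effect can change sign depending on the rest of the input.

```python
import math

# Linear model: each coefficient directly states a feature's size and
# direction of effect (hypothetical features, for illustration only).
linear_weights = {"on_time_rate": 0.7, "cancellations": -0.6}

def linear_score(x):
    return sum(linear_weights[f] * x[f] for f in linear_weights)

# Tiny neural network over the same two inputs: two tanh hidden units.
# An individual weight (e.g. W1[0][1] == 0.9) describes only one step
# of a layered, non-linear transformation, not a feature's overall
# effect on the final score.
W1 = [[0.8, 0.9], [-0.5, 0.3]]  # input -> hidden weights
W2 = [1.1, -0.7]                # hidden -> output weights

def net_score(x):
    inputs = [x["on_time_rate"], x["cancellations"]]
    hidden = [math.tanh(sum(w * v for w, v in zip(row, inputs))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

def cancellation_effect(o, c, eps=1e-4):
    """Finite-difference effect of 'cancellations' on the network score
    at a given point; unlike a linear coefficient, it varies by input."""
    return (net_score({"on_time_rate": o, "cancellations": c + eps})
            - net_score({"on_time_rate": o, "cancellations": c})) / eps

print(cancellation_effect(0.0, 0.0) > 0)   # True: positive effect here...
print(cancellation_effect(2.0, 3.5) < 0)   # True: ...negative effect there
```

No single ‘main parameter’ for cancellations can faithfully summarise a system whose effect for that feature is positive for some workers and negative for others.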
Secondly, faithful global explanations are not explanations at all if they are not understandable, and this typically requires the input data itself to have some human-comprehensible meaning. There is no use saying that a certain piece of data has a positive or negative effect on an assigned propensity for classification if there is no real social understanding of what that piece of data represents. This is an issue in abstract data such as voice, language, or typing patterns, where each part of the dataset has no social meaning—much like explaining an MP3 file. Contemporary uses of neural networks and deep learning, which mainly provide benefits over simpler predictive systems in problems with many variables, suffer from model complexity and data opacity simultaneously.
Insofar as automated monitoring systems may fall under high-risk AI systems for the purposes of the proposed AI Act, they may have interpretability standards placed on them. However, whether these standards place meaningful constraints on system complexity or input data opacity will depend heavily on the standardisation bodies CEN and CENELEC, who are likely to lead on the substantive standards in this instrument. 40 Furthermore, if the Commission attempts to mandate only a single standard for all AI systems and sectors, it will likely fail to provide much clarity, as it makes little sense to talk about explainability or interpretability in a domain-agnostic way. 41
Consequently, it seems unlikely that Article 6(2) will reliably provide useful information to those subject to it without explicit limitations to model and input data complexity. Legislators should consider whether digital labour platforms should be obliged to limit the complexity of their models to allow such faithful information to be communicated in advance.
Ex post transparency measures
While Article 6 contains the main transparency-related provisions in the PWD Proposal, additional transparency measures are included in Article 8. Following the taking of a decision, platform workers are entitled to ‘access to a contact person designated by the digital labour platform to discuss and to clarify the facts, circumstances and reasons having led to the decision’, 42 as well as a ‘written statement of the reasons for any decision taken’ that falls into certain categories, such as dismissal or refusal of remuneration, or has similar effects to them. 43 While ‘written’ may give the sense of human involvement, it seems plausible that such a statement could be pro forma—or even created by a text generation system.
Given that the human review provisions, described further below, are designed to ensure human contact in an individual case, a simple way to strengthen this provision may be to ensure that written statements must be specifically tailored to individual circumstances. In this case, the ‘local’ explanation type discussed above may form part of the background information used by the designated human ‘contact person’ who can explain both the algorithm's behaviour and the broader socio-technical circumstances leading to the decision. The role of this in broader human oversight is discussed in more detail later in the article.
Furthermore, there are a range of ex post monitoring requirements. Member States must require digital labour platforms which are employers to declare work to the relevant national labour authorities and share data with them. 44 This data must include at least the number of individuals performing platform work and their contractual and employment status, and any general terms and conditions that apply to these individuals. 45 This data must also be shared with worker representatives, and both representatives and authorities can ask for clarifications, to which the digital labour platforms must provide a ‘substantiated reply’. 46
Disclosure obligations could also be introduced for specific further categories of data. These may include the number of algorithmically facilitated significant decisions of different types made, such as termination of accounts, and the number of human reviews undertaken and their results (see further section 3, Human Oversight, below). Textual inspiration could be drawn from the currently proposed Pay Transparency Directive. 47 Aspects of these disclosure obligations could be subject to change through implementing decisions, in consultation with relevant social partners and worker representatives.
Human oversight
There are many reasons for different types of human oversight of algorithmic decisions. Some commentators make the case for more algorithmically supported, rather than fully automated, management decisions. 48 Arguments for this typically rely on functional grounds—that human involvement can augment the quality of machine decisions, 49 keep up with changing phenomena, 50 or maintain critical reflection on moral shifts and difficult decisions. 51 Some go further and believe that human review might highlight systemic bias and discrimination, but this should be treated with caution, as evidence indicates ‘humans in the loop’ can both exacerbate and ameliorate these concerns. 52 The PWD Proposal does not require that certain types of decisions never be made using automated systems, instead focusing on the impacts of both automated and semi-automated systems. 53 Regarding automated systems, such monitoring and evaluation includes an obligation to ‘evaluate the risks of automated monitoring and decision-making systems to the safety and health of platform workers, in particular as regards possible risks of work-related accidents, psychosocial and ergonomic risks’, and to assess and introduce ‘appropriate’ measures.
Alongside functional reasons for limiting full automation, there have long been arguments for rights to the review of automated decisions. This category of right has a long history in European law, with perhaps the first iteration found in one of the earliest data protection laws, in France in 1978. 54 Functional arguments exist for such review rights, such as ensuring individuals have recourse against erratic systems. However, ex post human review rights are typically justified by dignitarian arguments, which identify intrinsic issues with automated treatment, characterising it as dehumanising. 55 The PWD Proposal offers automated decision subjects a right to review and contest significant algorithmically made or supported decisions that affect their working conditions. 56 These rights are in addition, and complementary, to the explanation rights discussed above, and could potentially lead to either a rectified decision or compensation.
Strengthening substantive review
Algorithmic contestation rights can be broken down into procedural and substantive dimensions. 57 The PWD Proposal has quite detailed procedural processes for reviewing algorithmic decisions. For decisions significantly affecting working conditions, workers should be able to access a contact person with sufficient competence, training, and authority who must discuss and clarify the facts, circumstances, and reasons that led to a given decision. For a further subset of algorithmically made or supported significant decisions—those equivalent to decisions to terminate or restrict a worker's account, determine their contractual status, or refuse remuneration—the employer must provide a written statement of the reasons for the decision. Where workers are unsatisfied with such reasons, or believe their rights are otherwise infringed, they can request a review. A ‘substantiated reply’ to such a request must be provided within either one or two weeks, depending on the business size, and—where rights are infringed—workers must have a decision rectified, or compensation awarded, ‘without delay’.
Human reviewers are often proposed as a backstop to ensure that flawed automated decisions can be corrected. Yet substituting a human for a machine in a review process does not ameliorate concerns around the consequences of a decision without a substantive standard of review. This involves humans rethinking a decision—humans who, unlike machines, can legitimately interpret open-textured concepts, consider unanticipated circumstances, or choose factors to weigh and the weights to give them. 58 Article 22 of the GDPR lacks such a standard, which is unsurprising given the difficulty in imposing a single standard on the enormous variety of activities that could be within its material scope, given the omnibus nature of the data protection regime.
The substantive element of the review in the PWD Proposal is more elaborate than the GDPR's. Decisions are required to be overturned or compensated where they ‘[infringe] the platform worker's rights’, ‘such as labour rights or the right to non-discrimination’. 59 The standard of review then is defined in terms of the rules that already govern the type of decision being made, effectively turning a review into a new point of enforcement for existing rules. 60 This provision serves both to create accountability around the functioning of algorithmic systems that could affect thousands of workers, as well as to support individual justice around specific decisions. Review rights emphasise an individual remedy. Instead of aiming to improve perceptions of algorithmic procedural justice in all cases, 61 they aim to support individuals in securing alternative outcomes in specific cases of (perceived) injustice. 62
However, the reliance on (typically) national law to set the substantive standard of review for algorithmic decisions highlights a tension at the heart of the PWD Proposal. The algorithmic management section recognises the need to protect both those with employment relationships and those without. 63 Yet without a baseline standard of review, the analysis of automated decisions will often rest on national labour standards around those decisions, which can differ depending on a worker's status. For example, the personal scope of unfair dismissal legislation differs widely across the EU, hinging on different concepts like ‘employee’, ‘employment contract’ or ‘employment relationship’. 64 Workers lacking such statuses will get procedural protections—for example, to ask for reasons—but only a limited substantive test against which such a decision can be challenged.
Yet if the PWD Proposal were to introduce a new baseline substantive test applying to all significant platform work decisions made or supported by algorithms, it would likely be seen as an attempt to harmonise European labour law more broadly through the back door. Such a move is likely to sound the death knell for the proposal in Council. 65 Even for major decisions such as dismissal of an individual in an employment relationship, standards of review in Europe do not in general impinge significantly on managerial prerogative to question whether decisions were economically necessary or feasible, and so adding a higher standard such as proportionality or good faith would be a radical legal transformation. 66 Furthermore, changing the standards applicable to algorithmic decisions, without standardising non-algorithmic analogues, requires justification. In data protection law, there has been some justified judicial unease around interpretations that would lead to decisions receiving a higher substantive standard of review simply because they have been typed into an electronic document, recorded in a database, or sent in an email. 67
Consequently, any substantive changes to standards of review introduced by the Proposal should be focused on ameliorating concerns that arise predominantly because of the algorithmic nature of the decision. It would be inconsistent for a different standard to apply simply because the decision is taken or supported by an algorithm—as if the ‘human’ quality excuses some amount of substantive injustice. Instead, we can focus on how the use of algorithmic systems might end up obstructing the appropriate application of existing standards: for example, due to difficulties in applying the law to opaque systems, or indeed a platform's strategic use of that opacity to deliberately misrepresent, misdirect, or obstruct. To do this does not just require an unspecified human review, but instead requires a type of review which pushes algorithmically made or supported decisions to have the qualities of the genuine, manual decisions we can more easily apply existing rights and obligations to. Human reviews would thus serve to bring algorithmic decisions back into the realm of human understanding, rendering them potentially subject to the same level of scrutiny as familiar human decisions. How might this work?
Algorithmic qualities of automated decisions
Algorithmic decisions are commonly based on variables that have no obvious social meaning, and relationships that cannot be effectively put into words. For example, an audio recognition algorithm takes as input a huge array of features of the signal that are designed to help computers distinguish between sounds, but which have no meaning to humans. 68 It further combines these in manners that are not designed to be expressed in words, such as through multiple layers of a neural network.
Human decisions are not transparent, either. Even where humans explain their decisions in words, this may not correspond to the psychological process that led to the decision. However, for the purposes of the law, humans generally must put their decisions into human terms. We do not dissect brains to find ‘true’ reasons, nor do we generally allow assurances of experience or expertise to substitute for requirements of reasoning. Unfair dismissal law in the UK, for example, requires employers to provide a clear reason based on facts and beliefs held at the time the decision was made—even where the employer had a mixture of reasons, and has not mentioned all of them at the time of dismissal. 69
However, as it stands, algorithms risk being both the mechanism of decision-making and the believed empirical basis for such a decision. This risks circular logic from which there is little recourse: that the decision logic is acceptable because it is designed to produce the most ‘correct’ decisions; and that the decision is acceptable because it is grounded in the input data. Faced with that situation, the human reviewer can do little but look for obvious bugs or errors, or point at what is available.
The improvement to the PWD Proposal suggested here is to separate these out.
A review of the decision should involve
This restatement should be combined with a further duty upon the employer to
This altered approach has clear, albeit modest, benefits. It prevents workers being given final, significant decisions which give deference to unfathomable algorithmic reasoning, which will typically fail to respect the individual, and gives a right for a final decision to be made on the basis of human reasoning. The need for a substantive restatement of a decision may make digital labour platforms less likely to see human review as a formality which itself can be passed out to precarious workers, such as through the emerging ‘human in the loop as a service’ industry, 71 as businesses would be liable for both the content and reasoning of new decisions. It differs from a rationality test, which could be discharged by noting that reliance on a certain algorithmic method or approach was a de facto industry standard—or at least not unique. 72 The proposed approach does not guarantee a dignity-preserving outcome—a worker could still consider such a decision to be based on arbitrary characteristics irrelevant to performance at work 73 or to lack causal justification, 74 or the observed data might not capture underlying concepts. 75 These would remain domains for labour law, and the tests relevant to the type of decision in question. However, this approach at least sets the stage for these tests to be carried out, in a way which is politically feasible within this piece of legislation and this level of integration of European labour law.
Information and consultation
The PWD Proposal adapts the requirements of the Information and Consultation Directive 76 to the particulars of digital labour platforms. In doing so, it makes some optimistic assumptions about the type of representation that may emerge, particularly around platforms where organisation may be difficult due to the business model or its cross-border and multilingual nature. We consider these issues in turn below.
Formal information and consultation
Article 9 sets out the main requirements regarding information and consultation. It requires that Member States ‘ensure information and consultation of platform workers’ representatives or, where there are no such representatives, of the platform workers concerned by digital labour platforms, on decisions likely to lead to the introduction of or substantial changes in the use of automated monitoring and decision-making systems.’ 77 Algorithmic management therefore gets heightened treatment compared to other workplace choices, with the terms ‘information’ and ‘consultation’ aligned with the Information and Consultation Directive. 78 Minimal procedural standards and protection for employee representatives and information under this Directive also apply to consultation under the PWD Proposal. 79
These provisions apply only to ‘platform workers’—that is, those persons performing platform work who have employment contracts or employment relationships. 80 They do not apply to ‘persons performing platform work’ who are not ‘platform workers’: legitimately self-employed persons performing platform work. These individuals are expected to be supported by relevant rights under the Platform to Business Regulation, 81 at least insofar as such individuals are ‘business users’ within the meaning of that Regulation. 82 The Commission has also announced Guidelines clarifying that persons performing platform work can be exempted from EU antitrust enforcement for the purposes of collective bargaining. 83 The possibility for certain self-employed persons to bargain collectively is a relatively novel development that may eventually create the policy context for information and consultation rights for specific groups of self-employed persons, who have typically remained out of the scope of such rights. 84 As the Commission recently noted, ‘[d]igital labour platforms are usually able to unilaterally impose the terms and conditions of the relationship, without previously informing or consulting solo self-employed persons [performing work on the platform].’ 85
Furthermore, the Commission relies on the data protection treaty basis to extend rights such as transparency and human oversight to self-employed persons in the PWD Proposal; it is unclear whether this basis would stretch to rights pertaining to working conditions, such as information and consultation rights relating to persons performing platform work, and further unclear upon which basis the Commission would be able to rely to do this. 86
Consultation without representatives
The PWD acknowledges two scenarios for the fulfilment of digital labour platforms’ information and consultation obligations: either with platform workers’ representatives, or ‘where there are no such representatives, of the platform workers concerned’. 87 The digital labour platform's obligations to inform and consult are the same in either scenario. The proposed Directive largely relies on the provisions of the Information and Consultation Directive to specify the rules platforms must follow to meet those obligations. 88 Yet while that Directive gives wide discretion for Member States to ‘determine the practical arrangements for exercising the right to information and consultation,’ 89 the primary negotiating counterparty for the employer foreseen in that Directive is the employees’ representative. There is little guidance indicating how the process of information and consultation might proceed without representatives. 90
This lack of guidance leaves room for platform operators to bias information and consultation procedures in the absence of worker representatives, who could otherwise help to ensure both their fairness and their methodological integrity. For example, influential Uber consultations in the United States, which made heavy use of surveys, suffered from serious methodological flaws, including poorly designed survey questions, which significantly biased consultation results. 91 These risks are particularly acute given many platforms’ relative lack of experience with worker consultation.
Although the Information and Consultation Directive requires that ‘the timing, method and content [of consultation] are appropriate,’ 92 the lack of well-known ‘best practices’ for consultation in digital labour platforms, especially when there are no formal representatives, may pose a challenge to effective consultation even when platforms undertake to fulfil their consultation obligations in good faith. It is unclear what ‘appropriate’ methods could or should be. This is further complicated by the remote nature of most work organisation in platform work, even where the work itself is performed at a specific location—and effective practices for digital consultation procedures are just beginning to be worked out. 93
Consultation across languages and borders
The remote nature of platform work organisation also provides for the possibility that workers may be in many different Member States. Considered from an economic perspective, this is one of the advantages of the platform business model: one platform, at least in theory, can match clients and workers in many countries with roughly the same administrative costs as if it were limited only to one country. However, in the context of information and consultation, this creates at least two challenges.
The first is the challenge of language. Generally, consultation in many languages arises in the context of very large multinational corporations with the resources to provide translation and interpreting services for worker representatives when appropriate: for example, at the level of European Works Councils. However, digital labour platforms may not have the resources or competence to facilitate high-quality multilingual consultation processes. Recognising the cost challenges that multilingualism poses for platforms, the Digital Services Act obliges only very large online platforms to provide terms and conditions in all languages of the Member States in which they operate, and even these largest platforms need not communicate with users or regulators in the languages of the markets in which they operate. 94
The second is the challenge of diverse laws and customs for worker representation and information and consultation. If a platform has a significant number of workers in each of several Member States, workers in different Member States may prefer to make use of the representative structures and information and consultation procedures customary to their own countries; indeed, because platform workers will typically be employees in national law, digital labour platforms will likely be required to comply with different requirements for worker representation and information and consultation in different Member States. This represents a tension between a lack of European-level legal harmonisation and the platform's desire for de facto harmonisation in order to operate at scale and provide what they understand to be their core business proposition: to provide and maintain a single set of technical features and ‘workflows’ for all users, to the extent possible, irrespective of their location. It is widely believed that it is exactly this ‘scalability,’ enabled by the ostensible universality of a single technical design, that creates the cost advantage for the platform business model over more ‘traditional’ business models. 95
The costs of high-quality consultation recognising diverse languages and procedural norms seem likely to incentivise platforms to seek legal or technical ‘workarounds’.
Adaptive consultation
From a policy perspective, it would ideally be desirable to establish concrete minimum standards for information and consultation where no representatives exist, across languages and across borders. Such standards could reduce the risk of low-quality procedures that effectively bias outcomes, for example by choosing or briefing particular influential workers or experts, through ‘agenda-setting’ power (e.g., choosing or selectively framing options in worker surveys), 96 or simply through exclusion of significant groups of workers.
Yet participation of platform workers in the design of platform work remains a very new field of activity, with relatively limited research on it as a proportion of research on digital labour platforms generally. 97 Solid evidence-based standards may simply be out of reach in the short term. 98 This places platform work consultation in a different situation from that assumed by traditional regulation, in which the regulated entity knows the risks it faces, their costs, and approaches to mitigate them, while the regulator does not. 99 Reflexive mechanisms for regulating in the presence of uncertainty, such as planned adaptive regulation, are frequently used in areas such as health, safety, or environmental law, 100 and proposed in technological areas such as robotics or connected devices. 101 They may hold lessons here, but it is worth noting that these approaches typically involve the regulation of hazards, while quality of work and industrial relations should be treated differently than risks to be managed. Consequently, such adaptive regulatory mechanisms are likely to need heightened accountability to ensure that knowledge creation on effective consultation is both occurring and aligned with the interests of workers.
Such mechanisms supporting alignment generally fall under discussions of meta-regulation. Situations where meta-regulation is needed tend to be characterised by both high uncertainty and entrenched non-compliance, with regulators needing to motivate regulatees to engage in meaningful experimentation, have the capacity to independently analyse and scrutinise regulatees’ reported progress, and have a stable enough political environment to engage in incremental learning. 102 However, the PWD Proposal has little to say about the capacity of regulators, with obligations on cooperation and information sharing but no clear institutions proposed through which to do so. 103
Several approaches inspired by meta-regulatory practices can be envisaged that may improve the proposal in this respect. Digital labour platforms could be obliged to have regard to the state of the art when designing information and consultation practices, with a view to avoiding bias or other low-quality processes. Publication requirements by digital labour platforms detailing the choice of topics and methods for information and consultation, and the reasoning behind those choices, may facilitate learning and improvement in this area. Giving workers or representatives explicit rights to file objections to both platforms and relevant authorities regarding potentially deficient, biased, or otherwise inappropriate information or consultation artifacts (e.g., informational text) or processes, and where filed to the platform to receive a timely and substantive response, may further facilitate improvement and accountability. Member States could be directed to ensure that the relevant authorities—typically labour ministries or inspectorates—have the right to direct platforms to change their consultation procedures and information artifacts if they fail to meet expectations. Such expectations could be set through shared research programmes on the state of the art, guidelines set collaboratively by Member States, specifically mandated transnational studies, or implementing acts the PWD Proposal could empower the Commission to pass. Further to these aims, the Commission should explicitly study the areas we have highlighted above around borders, languages, and representation without established worker representatives.
Communication and monitoring
A novel element of the PWD Proposal is the obligation on digital labour platforms to design in, through the platform's digital infrastructure or similarly effective means, the possibility for persons performing platform work to contact and communicate with each other and with representatives. 104
The way in which communication is facilitated by digital labour platforms is likely to influence how and if the information and consultation discussed above is realised in practice. Communication is key for platform workers to make sense of information received from digital labour platforms and to collectively form opinions related to it. 105 Even location-based platform workers are not typically co-located, posing challenges to typical communication. 106 The importance of the specific technical characteristics and administration procedures of tools for supporting platform workers is well-documented in the technical literature. 107 Research on online forums in diverse domains, including education 108 and political deliberation, 109 shows that the design of these communication channels plays a significant role in shaping the outcomes achieved by participants using them. In the context of platform work, if communication channels provided by digital labour platforms impede effective deliberation—including by being difficult to use, poorly organised, technically unreliable, or otherwise—then it will be difficult for workers to consistently ‘make collective sense’ of information they are receiving and to effectively be consulted, 110 regardless of whether the platform engages in good faith in consultation efforts. Furthermore, for self-employed persons performing platform work who do not benefit from the Article 9 information and consultation provisions, the communication channels provided by the digital labour platform may provide the primary or only practical possibility for informal collective discussion of working conditions, platform design, or operating procedures, or for organising in the ways foreseen by the Commission in its Collective Agreement Communication.
Even in the absence of formal information and consultation, such informal discussion can lead to improvement in working conditions, for example by highlighting technical problems experienced by many of those performing platform work, or by identifying potential improvements to platform procedures. Indeed, some platforms already facilitate such discussions for exactly this reason; in such cases, informal discussion among those performing platform work can be seen by the platform as a sort of additional informal and ongoing layer of quality control and testing for their platform design, technology, and procedures. 111 Care must be taken, however, that such feedback processes, intrinsically linked to agile software development in which users are constant subjects of experimentation, 112 are not considered substitutes for ex ante consultation processes.
Challenges of civic technologies
The requirements set out in Article 15—channels through which workers can communicate with one another and with representatives, and which are not monitored by the platform—seem like the most fertile ground for ‘collective sensemaking’ of data provided by the platform, and subsequent opinion formation amongst persons performing platform work. This is especially important in the absence of formal representatives, either where they do not exist or, in the case of truly self-employed persons performing platform work, when there is no provision for them.
The literature on civic technologies—technologies supporting collaboration, coordination, deliberation, resource gathering, and sharing—is arguably the closest to what the provisions in the PWD Proposal seek to create. These technologies are not straightforward to create or maintain, and require significant time, intent, and resources, with dedicated maintenance and support staff that have dual roles in policy and technology. 113 Currently, workers use a variety of communication platforms, such as Facebook, to discuss and collaborate, and such organising is somewhat mobile: workers can move elsewhere if the tools do not meet their needs. It already seems challenging for a digital labour platform to itself create a robust, useful, and trusted deliberative platform. These challenges are heightened in the adversarial situations that can characterise some employer-worker relationships, such as the platform worker strikes seen across Latin America during the Covid-19 pandemic. 114 Creating rich deliberative technologies, particularly at scale, requires active effort beyond just opening a communication channel and letting it thrive, and close connection with designers—who in the PWD Proposal are forbidden from processing information about communications while reflecting on and improving the communications system.
How the communications obligations will play out in practice remains to be seen. But there are ways to strengthen the legal texts to make sure that a dead communications platform does not hinder organising. As mentioned, workers typically choose methods of organising off-platform, which may be accessible through links or requests to join. The PWD Proposal could state that a necessary feature of a communications platform would be to allow users to easily find links posted by workers directing them to other platforms that are run, circulated, and approved by significant numbers of workers. Such provisions could be further amended via guidance or implementing legislation. In effect, the communications obligations should at minimum function as a gateway to unite workers and give them channels of communication, even if they are not the home of deliberation on the platform.
Confidentiality of communications
Platforms are forbidden from accessing the contents of communications in the systems they have established, as well as from processing data relating to private communications of workers more generally. 115 This initially seems like a strong and sensible proposal designed to prevent employer interference in worker organising. Employers are already regularly criticised for the ways they navigate the sharing of confidential information in information and consultation procedures with workers, and it is likely some will be tempted to monitor devices and sharing to identify leakers. 116 However, the strong blanket prohibition creates some difficult tensions.
Communication mechanisms typically require collection of some personal data. At the most confidential end, we can consider communication apps such as Signal or ProtonMail, instant messenger and email services respectively, which heavily encrypt and minimise personal data collection. These are amongst the most confidential of widely available communication tools, yet both services still require the collection of some metadata. Other mainstream encrypted or partially encrypted communications systems, such as iMessage, WhatsApp or Telegram, collect significantly more metadata. In sum, ensuring that no metadata is collected remains difficult for mainstream instant messaging services. 117 Furthermore, there are technical challenges which make it difficult for encrypted instant messenger services to encrypt communication in very large group chats where individuals wish to broadcast to a group, such as may be required on large digital labour platforms. While there are regular developments in protocols and standards, these remain open challenges. 118 Implementing emerging protocols securely would certainly challenge smaller digital labour platforms, and highly secured instant messengers still bring usability challenges. 119
Furthermore, removing or minimising the collection of information within workplace communication tools raises a range of challenges in specific situations. What happens if a worker is being harassed by a colleague online and an investigation is required? What happens if accountability is needed for a decision within a hierarchical organisation? What happens if a platform worker discusses personal data of a client with another worker, and that platform receives a subject access request to disclose it? At some point, a digital labour platform may have to collect information that relates to private discussions. 120 This can be made difficult if messages sit on workers’ devices, particularly if those devices are not subject to invasive policies such as mobile device management which require users to surrender control of their devices. 121 This may not even be possible if they answer to more than one digital labour platform.
The challenge around communications systems and data workers may hold about clients is tricky. An amended PWD Proposal may need to prohibit data processing for certain purposes, such as the monitoring or evaluation of employees or their activities, whilst permitting it for other cases, such as ensuring the security of the system, or investigating genuine issues of harassment or other activity. Yet worker perception of employee surveillance may matter just as much as actual employer data practices. 122 This seems particularly likely for digital labour platforms, whose tendencies towards algorithmic management themselves motivated this provision.
Awareness and monitoring
Finally, several provisions of the PWD Proposal require platforms to understand platform workers’ context and the impact work has upon them. These provisions run into practical tensions relating to methodology and data collection, particularly in the context of today's digital economy.
Monitoring impact
Complementing the ex post human oversight or review discussed above, the proposal also creates ex ante and continuous obligations on employers to ‘regularly monitor and evaluate the impact of individual decisions taken or supported by automated monitoring and decision-making systems’. 123 Such mechanisms are complementary to the damage-control approaches of ex post review of algorithmic management. 124 Elsewhere, the proposal highlights that automated monitoring and decision-making systems must not place undue pressure on platform workers or place their physical or mental health at risk. 125
A tension exists regarding data collection and the monitoring of impact. The type of impacts the proposal is concerned with include psychosocial risks. It is not clear that a digital labour platform, particularly one prohibited from processing data on the psychological state of workers, their communications or their broader lives, 126 will be able to assess such impact. That is not to say that the firm should have the right to monitor these characteristics. Instead, it seems appropriate that an exemption is possible within the prohibitions to facilitate data collection according to a monitoring and evaluation strategy that is co-designed by workers and employers. Participatory design with workers on driving platforms has highlighted the desires of drivers to be able to use data in ways they wish relating to their own work to maintain financial, physical, and psychological well-being. 127 This work further highlights the importance of qualitative, rather than automatically measured quantitative data, in understanding these issues.
Furthermore, the PWD Proposal does not propose documentation or publication requirements on the monitoring and evaluation it outlines. Different levels of such visibility could be envisaged, such as reporting to employees and to the general public at different levels of granularity. While it is likely that such platforms will already have to complete a data protection impact assessment for algorithmic management systems, 128 there is no obligation to publish this document. 129 The proposal could both synthesise the DPIA requirements with the monitoring and evaluation requirements and ensure they complement each other, as well as outline procedural obligations on timing, participation, and publication.
Online tracking and prosumer platforms
The prohibition on collecting personal data when individuals are not offering or performing platform work does not have exceptions. 130 Yet even this prohibition itself requires an understanding of when offering or performing work begins, ends, or is paused, which can be difficult for tasks which are longer, less delineated, or occur in the context of multi-homing (the use of more than one platform simultaneously).
This tension is particularly stark, given the ubiquitous nature of data collection in the contemporary digital economy. Many digital labour platforms also run large online tracking infrastructure. For example, Amazon runs embedded infrastructure with tracking potential in approximately 18% of mobile applications. 131 Workers using its Mechanical Turk platform can log in using the same accounts they may use for other Amazon services, which in turn can be associated with tracking cookies and similar technologies. When individuals are tracked by this infrastructure, this is clearly an example of a digital labour platform collecting data when individuals are not performing platform work, which may violate Article 6(5)(d).
Relatedly, individuals may interact with such platforms as both a consumer and a provider of services facilitated by digital labour, and thereby become subject to the informational infrastructures on the consumer side of the equation. Platforms have simultaneously banned individuals from both selling and consuming services, indicating how reputation systems on both are not currently firewalled. 132 More broadly, the distinction between producer and consumer has long been collapsing online, and marketers have been considering how to actively transform consumers into providers in platform contexts. 133
Reconfiguring the prohibition
The exemption-free prohibition on collection is unclear and perhaps unworkable, given the blurred lines described above. In practice, it risks forcing regulators and courts to create interpretative loopholes or enforcement gaps which may undermine the legitimacy and credibility of this effort. The cleanest option to deal with the challenges around online tracking and prosumer trends is likely to silo this information, preventing the processing of personal data across the contexts in which a company may operate, and establishing a requirement for organisational firewalls. 134 While allowing those performing platform work to take action against surveillance giants’ broader business models may be an interesting litigation strategy, policymaking to tackle online tracking should be designed explicitly with those aims in mind. The PWD Proposal may wish to go further and push back against prosumer blurring by requiring accounts to be separate, although this is likely to meet resistance from those who believe such directions hold innovative and desirable business models.
Concluding remarks
In the last decade, EU technology regulation has set high global standards, which have propagated beyond its borders (the so-called ‘Brussels Effect’). 135 Meanwhile, corporate lobbying of EU legislation has dramatically increased during the same time period. 136 Attempts to regulate platform work are, and will be, no exception. 137 In this article we have sought to articulate the ways in which the fundamentally sound algorithmic management provisions, even if fully implemented and robustly enforced, might nevertheless be conveniently (mis)interpreted by platforms to undermine the original aims of the proposed Platform Work Directive. Technical reasons may be offered to argue for the necessity of algorithmic management practices which run counter to those original aims, while other technical reasons may be offered purporting to speak to the impossibility of adhering to them. Similarly, service designs which undermine those goals may be justified on grounds of user needs, with the interests of workers and customers played off against each other in ways that ultimately serve those of the platform. Yet we believe none of the requirements of the PWD present insurmountable technical challenges, nor do they run counter to the interests of users (whether platform workers or customers). By anticipating and confronting these possible convenient misinterpretations well before they are encountered in enforcement action years from now, EU legislators and regulators can better ensure that the PWD successfully protects platform workers in practice.
Footnotes
Acknowledgement
Thanks to the participants of the Regulating Algorithmic Management workshop at Magdalen College, Oxford in July 2022 for comments on an early presentation, and to Virginia Mantouvalou and Hugh Collins for comments and discussion around parallels with unfair dismissal.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: MSS is funded by the European Research Council (iManage, no 947806); MV receives funding from the Fondation Botnar.
