Abstract
The potential for algorithms to discriminate is now well-documented, and algorithmic management tools are no exception. Scholars have been quick to point to gaps in the equality law framework, but existing European law is remarkably robust. Where gaps do exist, they largely predate algorithmic decision-making. Careful judicial reasoning can resolve what appear to be novel legal issues; and policymakers should seek to reinforce European equality law, rather than reform it. This article disentangles some of the knottiest questions on the application of the prohibition on direct and indirect discrimination to algorithmic management, from how the law should deal with arguments that algorithms are ‘more accurate’ or ‘less biased’ than human decision-makers, to the attribution of liability in the employment context. By identifying possible routes for judicial resolution, the article demonstrates the adaptable nature of existing legal obligations. The duty to make reasonable accommodations in the disability context is also examined, and options for combining top-level and individualised adjustments are explored. The article concludes by turning to enforceability. Algorithmic discrimination gives rise to a concerning paradox: on the one hand, automating previously human decision-making processes can render discriminatory criteria more traceable and outcomes more quantifiable. On the other hand, algorithmic decision-making processes are rarely transparent, and scholars consistently point to algorithmic opacity as the key barrier to litigation and enforcement action. Judicial and legislative routes to greater transparency are explored.
Introduction
Biases in automated decision-making systems (ADMS) are now well-documented, 1 and algorithmic management tools are no exception. The debate on how to deal with this phenomenon has developed at speed in the legal and policy literature. A report for Equinet in 2020 noted that the European debate was at an ‘embryonic’ stage, 2 but a report for the European Commission in the same year found that a ‘majority of countries’ in the Union had witnessed ‘at least some degree of public discussion on issues of AI and discriminatory risks’. 3 While the discussion has since matured, the question of how to deal with algorithmic discrimination remains an open one. Legislators in the US and EU have tabled proposals which would tackle the issue tangentially, by imposing new requirements with a view to reducing the risk of bias. 4 Neither proposal would alter the framework of discrimination law itself. In the UK, a 2018 proposal to deal with discriminatory automated decision-making in the workplace was resisted by the Government on the basis that ‘the Equality Act [2010] already protects workers against direct or indirect discrimination by computer or algorithm-based decisions.’ 5
The employment sector has long been a key forum for the development of equality law and has emerged as a focal point for academic discussion on algorithmic discrimination. 6 This contribution adopts the same focus to assess the capacity of EU law to respond to algorithmic discrimination: do gaps in the framework exist, and if so, how best should they be filled? Section 2 argues that the concepts of direct and indirect discrimination are already apt to cover many cases of algorithmic bias and explains how some apparently novel doctrinal questions are in fact examples of how algorithmic discrimination ‘shines a new light on…“traditional” problems…of EU gender equality and non-discrimination law’. 7 Legislative proposals to deal with algorithmic discrimination should therefore aim to reinforce, rather than reform, discrimination law. While the rules on direct and indirect discrimination are not specific to any protected characteristic, employers bear a particular duty with respect to persons with disabilities: the duty to make reasonable accommodations. Section 2 goes on to consider the extent to which that duty can be met in a context of algorithmic decision-making. Section 3 lays out a plan for reinforcement of existing law: the obstacles which information asymmetries pose to litigation must be a critical target for any legislative proposal. Algorithmic impact assessments are proposed as a legislative reform which would enable greater transparency.
The robustness of existing law
Computer science scholarship on the unequal effects of ADMS 8 quickly inspired incisive legal scholarship. 9 EU law provides that direct discrimination occurs where an outcome is ‘on grounds of’ a protected characteristic, while indirect discrimination can arise when an employer applies a facially neutral provision, criterion or practice (PCP) that puts people with a protected characteristic at a ‘particular disadvantage’. 10 Direct discrimination is outright prohibited in most employment cases, whereas unlawful indirect discrimination does not arise if the use of the PCP can be objectively justified. 11
Indirect discrimination
The use of an algorithm can easily be understood as a provision, criterion, or practice (PCP), 12 and if a job applicant or worker can show that the deployment puts their protected group at a disadvantage—and that they have been, or would be, disadvantaged by its use—then they can make out a prima facie case of indirect discrimination, thus shifting the burden to the employer to justify the use. To do so, the employer will first need to identify a legitimate aim which the ADMS has been deployed to achieve. This initial step may well prove uncomplicated. If human decision-making has been automated purely for cost-saving purposes, that is unlikely to be adequate. 13 However, Hacker points out that the ‘predictive task of the algorithmic decision-making process itself will often furnish a legitimate aim’. 14 An employer might argue, for example, that the algorithm is being deployed to measure employee success.
If the aim is accepted as legitimate, the focus turns to proportionality. The algorithm must be appropriate for achieving the aim, in that it is a suitable means of achieving the stated aim; and its use must be necessary, in the sense that it is the least discriminatory option available and strikes an acceptable balance between the rights and interests of employer and employee. 15
One promising point should be drawn from this brief summary: where a less biased but equally accurate algorithmic system can reasonably be obtained and deployed, then the employer should take that step. 16 The obligation to use the least biased option arguably also means that employers should ensure that reasonable debiasing efforts have been made. 17 On the other hand, scholars have suggested that the nature of algorithmic decision-making means that the proportionality test will be routinely satisfied for indirect discrimination purposes: 18 a concerning possibility which requires consideration of two questions. First, will courts inevitably hold that predictive algorithms are more ‘accurate’ than humans, and therefore an appropriate and necessary means of achieving a legitimate aim? Second, is use of a biased ADMS permitted—or even required—if it replaces a more discriminatory human?
More effective than the human alternative
Hacker notes ‘that there must not exist similarly effective but less discriminating ways of achieving the same result’, but argues that ‘to the extent that the classifier does possess significant predictive accuracy, its effectiveness will likely surpass any alternative ways…based on human decision making.’ 19 He suggests that the entity employing the algorithm will therefore ‘likely be able to convincingly claim that there is no similarly effective alternative to machine learning’. 20
Assuming that algorithms with high ‘accuracy’ scores are almost always ‘effective’ or ‘appropriate’ is problematic. Say, for example, an employer uses a machine learning model to give job applicants a ‘suitability score’. The algorithm is trained on previously successful applications, and the features it identifies as being correlated with successful performance include speaking slowly, 21 using active verbs, 22 and playing certain sports. 23 Although the employer has not analysed why these traits are correlated with success, it continues to use the algorithm because its predictive accuracy is high—in that those who score well go on to receive positive (human) managerial assessments and are quickly promoted.
If it turns out that female applicants speak more quickly, use more tentative language, and do not play the relevant sports, will a rejected female applicant succeed in a case for indirect discrimination? The employer's aim of recruiting competent workers seems to be legitimate, and deploying the algorithm appears to be an effective means of achieving that aim. On the other hand, one might ask what the algorithm's ‘predictive accuracy’ really tells us. The ‘suitability scores’ might be correlated with speedy promotion, but the promotion itself occurs in an unequal world: the algorithm has learned the traits of those who do well at the company, but the human managers could be implicitly biased in favour of men. In other words, the algorithm has been fed data from an unequal world and is accurate at predicting unequal outcomes in that world. 24
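To make this mechanism concrete, the following minimal sketch (entirely synthetic data and hypothetical feature names, offered only as an illustration of the dynamic described above, not any real hiring system) shows how a model trained on outcomes produced by biased evaluators can appear predictive against those very outcomes while systematically assigning lower ‘suitability scores’ to women.

```python
# Minimal, hypothetical sketch: a model trained on outcomes shaped by biased
# human evaluators reproduces the bias while appearing 'accurate'.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicants (illustrative assumption: a binary gender variable).
is_woman = rng.integers(0, 2, size=n)

# Proxy features that, in this toy world, are more common among men.
speaks_slowly = rng.random(n) < np.where(is_woman == 1, 0.3, 0.6)
plays_sport = rng.random(n) < np.where(is_woman == 1, 0.2, 0.5)

# Label: 'promoted quickly', produced by implicitly biased managers.
# Underlying competence is identical across groups; men receive a bias bonus.
competence = rng.normal(0, 1, size=n)
promoted = competence + 1.0 * (1 - is_woman) + rng.normal(0, 0.5, size=n) > 0.5

X = np.column_stack([speaks_slowly, plays_sport]).astype(float)
model = LogisticRegression().fit(X, promoted)

# 'Accuracy' here is measured against labels generated in an unequal world.
print("accuracy against biased labels:", round(model.score(X, promoted), 2))

suitability = model.predict_proba(X)[:, 1]
print("mean score, men:  ", round(suitability[is_woman == 0].mean(), 2))
print("mean score, women:", round(suitability[is_woman == 1].mean(), 2))
```

The point of the sketch is simply that the benchmark against which ‘accuracy’ is measured is itself the product of biased decisions, so a good score tells us little about whether the tool is an appropriate means of achieving the stated aim.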
Deeming such predictions ‘appropriate’ does not accord with the ‘fundamental aim’ of EU non-discrimination law, which Wachter, Mittelstadt and Russell describe as being ‘not only to prevent ongoing discrimination but also to change society, policies, and practices to…achieve substantive rather than just formal equality’. 25 In this way, ‘EU non-discrimination law…aims to systematically erode inequalities over time’. 26 An algorithm may be accurate in predicting the status quo, but if the status quo is not neutral, then does this really count as ‘accuracy’? 27
The legal framework provides scope for courts to answer that question in the negative. For example, where an employer uses an algorithmic system which predicts ‘culture fit’, 28 the court could take a step back in the analysis to hold that this is simple maintenance of an unequal status quo, which is not a legitimate aim. 29 Alternatively, rather than accepting predictive ‘accuracy’ as an indicator of appropriateness, the courts could require employers to explain why the features identified by the algorithm are appropriate for achieving the given aim. 30
On this latter point, Hacker suggests that the decision-maker is ‘not under an obligation to specifically explain the factors that led to the algorithmic output’, and there does not appear to be a need to explain why the factors are relevant because the focus is on the ‘effectiveness of the differentiation practice’. 31 This may be the approach in existing case law, but it is not required by the legislation. Discussing substantively similar US law, 32 Selmi argues that unexplained predictive accuracy should not be enough: where an employer's justification for algorithmic disparate impact centres on the algorithm's ability to identify characteristics correlated with job success, ‘it is difficult to see a [US] court accepting such an explanation’ without more, ‘particularly when the algorithm may well have been created on biased data’. 33 European case law could similarly evolve to reject the ‘trust us’ defence 34 in relation to algorithmic ‘accuracy’, and take a more stringent approach to ‘appropriateness’.
Less biased than the human alternative
If an aim is legitimate and the algorithm is deemed ‘appropriate’ in meeting the aim, then the court will ask whether the algorithmic option is less biased than the human alternative. Although algorithmic bias is now a well-studied problem, algorithmic systems may still be less biased than the humans they replace, 35 and employers may well adopt such tools with a view to fostering diversity and equality. 36
Imagine, then, two employers: A and B. Both are aware of discrepancies in outcomes between protected groups in the managerial evaluation system, 37 and both are considering whether to adopt an algorithmic management tool to measure employee performance in an objective manner. Both weigh up the evidence about the diversity-enhancing potential of people analytics tools against the evidence of potential discriminatory outcomes. On balance, employer A is concerned about its managers’ implicit biases and decides to replace managerial assessments with an algorithmic tool to score employee performance. The scores will be considered alongside sales metrics when making promotion decisions. Meanwhile, employer B is concerned by reports of algorithmic bias and decides to continue relying on human manager reviews. If the tool which A has begun to deploy has poorer scores for women on average, but nonetheless exhibits less bias than the human managers at either firm, how would an indirect discrimination claimant fare against each employer?
First, consider a woman working at employer A. She argues that the use of the tool to inform promotions is a practice which puts women at a particular disadvantage. The employer does not deny that the tool produces lower scores on average for women, but argues that its use is objectively justified: the tool was (i) adopted to achieve the legitimate aim of objectively evaluating workers; (ii) is at least as accurate as the human evaluators, in that those scored highly by the algorithmic tool go on to perform well in their new roles; 38 and (iii) is the least biased means of achieving the aim. Any indirect discrimination is outweighed by the importance of the aim. 39
As explored above, the employer's justification may fail at stage (ii)—but if the appropriateness of the tool is accepted, attention turns to stage (iii): whether its use is the least biased means reasonably available.
Hacker suggests that if the employer is using the least biased algorithm available, then the proportionality analysis should ‘turn on whether the algorithmic decision-making procedure at least significantly and verifiably reduces bias vis-à-vis other types of (non-algorithmic) decision-making procedures’. 40 If it does, then ‘algorithmic discrimination based on biased training data should…be considered proportionate’. 41 That indeed seems to be the most likely conclusion: following this analysis, employer A—who chose the least biased option available—can justify its use of the tool. 42
Let us now consider an equivalently situated woman working at employer B. The employer's practice in this case is the use of human managerial evaluations for making promotion decisions. Female employees at the firm receive poorer evaluations on average, and the claimant has not been promoted in several years despite doing well on all other measures. 43 The employer argues that selecting the best employees for promotion is a legitimate aim, and that using (human) managerial evaluations is a proportionate means of achieving that aim. In this case, the result will turn on how the court or tribunal assesses proportionality on the facts. The use of managerial assessments may be appropriate, but is it necessary if deployment of the algorithm is a reasonably practical and equally effective alternative? If the employer recognised the human bias and dismissed the algorithmic alternative while taking no further action, then the court may find that it has failed the objective justification test. If so, one might conclude that the employer is effectively required to automate the process.
In one sense, this outcome is logical: where reasonably practicable, employers should use the least biased means available for reaching their legitimate goals. On the other hand, the conclusion seems jarring when one considers broader equality goals. 44 Accepting and potentially even (indirectly) mandating the use of a biased algorithmic system seems to entail an entrenchment, rather than an erosion, of inequality. On closer examination, however, one finds that the employer's failure to use the algorithm will usually only be part of the analysis. In reality, other options are available. The employers could mandate implicit bias training for all managers, for example, or adjust the weighting given to the evaluations in decision-making processes. While employer B might be found to be indirectly discriminating if it does nothing at all about its biased human evaluations, it is unrealistic to imagine that a court would ‘require’ it to use a specific algorithmic tool. Similarly, employer A would be well-advised to consider implicit bias training for its human managers as an alternative to deploying a biased algorithmic tool; and even if its use of the tool is justified on one occasion, the objective justification regime is not static: a practice which starts off being justified may cease to be so. 45 In this sense, EU law's broader substantive equality goals are not lost.
Direct discrimination
The above analysis shows how the indirect discrimination framework is apt to deal with many cases of algorithmic discrimination, despite initial doctrinal concerns. At the same time, indirect discrimination should not be used as a ‘conceptual “refuge” to capture the discriminatory wrongs of algorithms’. 46 Although many European scholars have regarded direct discrimination as comparatively ‘less important’, 47 the regime has an important role to play. 48 Where direct discrimination arises, the practical consequences are significant: while the possibility of justification (and its accompanying complexity) is a crucial component of the indirect discrimination regime, direct discrimination is often straightforwardly unlawful. 49
There are broadly two types of direct discrimination case which appear in the existing European case law, both of which can be mapped onto examples of algorithmic discrimination. 50 The first occurs where a criterion used is inherently discriminatory. The CJEU held in 1988 that pregnancy is so closely related to sex, for example, that differential treatment on the basis of pregnancy constitutes sex discrimination even if there was no intention to discriminate against women. 51 If an algorithm generates a lower score when presented with the name of a girls’ school rather than a boys’ school, its deployment (it is submitted) will similarly constitute direct discrimination. 52
The second type of direct discrimination is subjective. This applies where a protected characteristic affects the decision-maker's decision, whether consciously or not. 53 Elsewhere, Adams-Prassl, Binns and I argue that where an algorithm exhibits traits that are equivalent to human subjective bias, 54 the law must respond in kind. In other words, where an algorithm learns to prefer the traits of a particular group because that group is overrepresented in the training data due to previously biased human decisions, this is equivalent to implicit bias, and must be direct discrimination.
This latter form of discrimination gives rise to a difficult doctrinal question: where does the line fall between direct and indirect discrimination? At section 2.1.1 above, it was suggested that the use of an algorithm which identifies as positive traits slow speech, use of active verbs, and playing of certain sports might not be an appropriate means of achieving a legitimate aim if those traits simply map onto a particular protected characteristic. However, if the algorithm has learned to replicate previous recruiters’ biases, then we might instead conclude that this is subjective direct discrimination in an automated form—and the justificatory stage (with its questions of appropriateness) will not even be reached.
Drawing the boundary
Drawing the boundary is difficult because while indirect discrimination looks at the effects of a decision-making process, direct discrimination focuses on reasons. 55 Rationales for human decisions are often difficult to divine: in the absence of a smoking gun, it will be difficult to prove discrimination on the part of any manager, even if a particular ethnic group consistently performs more poorly in that manager's assessment. 56 Humans can rationalise their decisions ex post facto. 57 By contrast, the true reasons for algorithmic outputs are often identifiable—and even where an algorithm operates as a black box, its training data can be examined in a way that would never be possible for humans. While automated decision-making therefore provides a promising opportunity to uncover direct discrimination, it also means that the question of whether a decision has been made ‘on grounds of’ a protected characteristic is likely to become more complex.
Complexities arise even in relation to inherently discriminatory proxies. Which algorithmically identified features will be deemed ‘inherently’ discriminatory? Gerards and Xenidis note that while a feature such as pregnancy is clearly inherently related to sex, the Court's jurisprudence has not determined whether ‘a nearly 100% overlap is required, or [whether] it would be enough to show statistically that a certain variable…has a 90% or 80% overlap with a given protected ground’. 58
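As a rough illustration of how such an ‘overlap’ might be quantified in practice (the figures and the feature are invented, and nothing here reflects a threshold endorsed by the Court), one could simply ask what proportion of the people disadvantaged by a given feature belong to the protected group:

```python
# Hypothetical sketch: measuring how closely a facially neutral feature
# overlaps with a protected characteristic in a given applicant pool.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

is_woman = rng.integers(0, 2, size=n).astype(bool)

# A feature the model penalises, e.g. attendance at a particular school;
# in this invented pool it is far more common among women.
has_feature = np.where(is_woman, rng.random(n) < 0.85, rng.random(n) < 0.05)

overlap = is_woman[has_feature].mean()
print(f"Share of feature-bearers who are women: {overlap:.0%}")
```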
Similarly, in the context of ‘subjective’ algorithmic discrimination, where the model has apparently learned to replicate implicit bias, it will be necessary to assess whether the features are indeed the result of automated implicit bias or just the ‘ground truth’. 59 For example, slow speech and active verbs might be genuine indicators of strong sales metrics, but if the employer's clients prefer men and men use slow speech and active verbs, then the sales success prediction risks collapsing into gender prediction. If the algorithm is effectively predicting who the clients will like, and the clients like men, then using that algorithm for promotion decisions could constitute direct discrimination. 60 How does one distinguish between merit and societal preference in a systemically unequal world? 61
Finally, existing interpretations of direct discrimination law do not seem to capture all cases in which equivalently situated individuals are treated differently because of an interaction with a protected characteristic: some algorithms simply work less well for certain protected groups because those groups are underrepresented in the training data. If a facial recognition algorithm is trained on photographs which disproportionately show white men, and is consequently worse at recognising black women, 62 then it seems artificial to deny that impacted black women are being disadvantaged on grounds of race and sex where that algorithm is applied in a real-world context. 63
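The underlying pattern is a simple per-group error-rate disparity, of the kind sketched below (the counts are invented for illustration and do not reproduce any particular study):

```python
# Hypothetical sketch: per-group error rates for a recognition system
# trained on unrepresentative data. Counts are invented.
results = {  # group: (misrecognised, total)
    "white men":   (20, 1000),
    "black women": (210, 1000),
}
for group, (errors, total) in results.items():
    print(f"{group}: error rate {errors / total:.1%}")
```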
While these conundrums are far from straightforward, they do not challenge the legislative framework of discrimination law in significantly new ways. The ‘100 percent overlap’ question which Gerards and Xenidis raise in relation to inherently discriminatory proxies has already been a vexed one, even in non-ADMS cases; 64 and tracking the upstream ‘reasons’ for a decision has always been complex—the difficulty of uncovering true reasons in human cases has simply obscured the challenge. In other words, although ‘algorithmic discrimination blurs the boundaries between the doctrines of direct and indirect discrimination’, 65 such blurriness has always existed. 66 Deeper concerns about the continued relevance of the distinction are not unique to algorithmic discrimination.
Attributing accountability
The fact that case law has centred on direct discrimination as a harm perpetuated by biased humans also gives rise to a second doctrinal conundrum: the attribution of accountability where bias becomes automated. Liability for AI-driven harms has already been the subject of policy discussion at EU level, 67 and scholars have begun to question the extent to which ‘current regimes of liability…are adequate to tackle algorithmic discrimination’. 68
Gerards and Xenidis highlight the ‘variety of different players…involved in the stages of algorithmic decision making’, and suggest that:
[I]f at some point a discriminatory outcome is detected (for instance, because an algorithm systematically suggests that men should be promoted to a certain position rather than women), it may be very difficult for the victim of discrimination…to know whom to hold responsible, liable and/or accountable for that discriminatory outcome…(the developers, the sellers or the end user (…the HR service) of the algorithm). 69
Case law on direct discrimination does more frequently pinpoint a human wrongdoer. ‘Most employers are corporate and most acts of discrimination are done by individual employees or agents’, suggested the UK Employment Appeal Tribunal in one case, before promptly turning to the scope of employer liability for discriminatory human decisions (viz. vicarious liability). 71 Under certain legal regimes, the need for a culpable individual may pose serious challenges: direct discrimination under Australian law requires a ‘person’ to engage in discriminatory treatment, and it has been suggested that this requirement cannot be satisfied where an ADMS makes a decision without any input from a natural person. 72 By contrast, the EU discrimination directives adopt a passive voice: the ‘principle of equal treatment’ means that there ‘shall be no direct or indirect discrimination’ when certain employment-related decisions are made. 73 While Union-level law does not specify ‘who should be held liable for discriminatory behaviour’, 74 Member State legislation generally prohibits discrimination in decisions such as promotion or pay, and it seems unlikely that liability can be avoided where such discrimination is automated. 75 Where an employer's paper-based process involves deducting five points from any individual who attended a girls’ school, for example, the employer is liable for direct discrimination, regardless of the scorer's mental state. 76 In other words, primary liability is directly imposed on the employer. Where an employer directly discriminates via an algorithm rather than via a human, the analysis should be no different. 77
A straightforward approach, holding employers liable where regulated decisions are affected by automated direct discrimination, neatly sidesteps questions about whether liability is affected by the employer's relationship to the software (which might be developed in-house, purchased or licensed) 78 and whether liability should continue to attach if the employer has taken reasonable steps to prevent the discrimination. 79 The analysis is simplified: if a job applicant or worker has been denied a relevant opportunity by the employer on grounds of a protected characteristic (including where a decision-supporting algorithm has effectively decided on grounds of the characteristic), the employer is liable for direct discrimination. Where no direct discrimination has occurred, but the algorithm disadvantages members of the claimant's protected group, the primary question will be whether the employer can justify its use (within the indirect discrimination framework). The analysis thus promptly returns to the fundamental questions set by the discrimination framework. 80
Disability discrimination
The law on direct and indirect discrimination applies broadly to protected characteristics, 81 and the challenges set out above are likely to be encountered in a wide range of cases. Alongside those provisions, EU equality law also imposes a further duty on employers which requires consideration in the context of algorithmic decision-making: the duty to make reasonable accommodations for persons with disabilities. 82 Employers must take ‘appropriate measures’ in individual cases to enable persons with disabilities to access, participate, and advance in employment, 83 and this duty applies unless the measures would ‘impose a disproportionate burden on the employer’. 84
The significance of this duty in the context of algorithmic decision-making remains underexplored in the literature. 85 On the one hand, the individualised nature of the duty responds to the heterogeneous nature of disability. On the other hand, underrepresentation in datasets can disadvantage outliers, which is concerning where ‘most social structures and institutions treat disabled people…as “atypical” or “abnormal”’. 86 In this sense, the ‘heterogeneity of disability’ is ‘incongruous with the current paradigm of machine learning’, 87 in which predictive algorithms learn and apply patterns. This subsection explores how the application of the reasonable accommodations duty in a context of algorithmic decision-making may require system-level design changes as well as individualised adjustments on a case-by-case basis.
Disability is not defined in the Employment Equality Directive and was initially conceptualised by the CJEU as ‘a limitation which results in particular from physical, mental or psychological impairments and which hinders the participation of the person concerned in professional life’. 88 This approach was based on the medical model of disability, 89 which focuses on disability as an impairment which is independent of any external or environmental element. 90 The United Nations Convention on the Rights of Persons with Disabilities (CRPD), concluded some years after the Employment Equality Directive, marked a shift towards the social model, which looks beyond the individual to understand disability as being ‘based on the interactional relationship between people with impairments and the wider environment’. 91 The EU became a party to the Convention in 2010, and is bound by the CRPD obligations to the extent of its competences. 92 The definitional shift quickly fed into the EU acquis via the Court, with the CJEU drawing closely on the CRPD to reframe its definition of disability as a limitation which ‘in interaction with various barriers may hinder the full and effective participation of the person concerned in professional life on an equal basis with other workers’. 93
That disability is to be understood contextually is important from the perspective of algorithmic discrimination. Previous studies have identified system-level algorithmic discrimination against persons with disabilities, such as higher rates of facial expression misinterpretation, poorer performance of speech recognition tools, and the labelling of texts which mentioned disability as ‘toxic’ by natural language processing tools. 94 Addressing these impacts is critical, but system-level mitigations which seek to identify and reduce disparities across defined protected groups are not sufficient in the context of disability as currently conceptualised by the Court, for two reasons. 95
First, every disability is unique, and an individual ‘may fare poorly on an assessment because of a disability…regardless of how well other individuals with disabilities fare on the assessment’. 96 Second, under the social model, the definition of disability is itself contextual: if disability is seen as a product of a disabling environment, then a conception of disability discrimination which ‘abstract[s] away environmental and social context’ will ‘artificially limit[] categories of disability’ and ‘unintentionally…reinforce both the form and content of the medical model’. 97 In other words, system-level impacts may not be identifiable if it is the manner and context of the deployment which is ‘disabling’. The US Equal Employment Opportunity Commission (EEOC) points out, for example, that a tool which predicts performance in ‘typical working conditions’ might not accurately predict ‘whether the individual still would experience those same difficulties under modified working conditions’, such as in a setting with reduced sound. 98 Impacts for persons with disabilities can thus arise both from the algorithmic management system itself and from the mode of its deployment, and it will be impossible to assess those impacts comprehensively at the system level.
Where possible, individualised reasonable accommodations should include the deployment of in-built accessibility functionalities. In other words, the system-level design should be amenable to individual-level response. The EEOC suggests that in the context of algorithmic hiring software, for example, reasonable accommodations might include ‘extended time or an alternative version of the test, including one that is compatible with accessible technology (like a screen-reader)’. 99 The creation of such functionalities requires consideration at the design stage, and the reasonable accommodations duty can only impose an obligation to procure or develop algorithmic management software with accessibility functions if it can apply anticipatorily.
The UN Special Rapporteur on the rights of persons with disabilities does suggest that the reasonable accommodation obligation ‘may have an anticipatory dimension, in the sense that one should not have to wait for persons with disabilities to present themselves before considering what reasonable accommodation might be warranted’. 100 The accepted position under EU law, however, is that the duty is only understood to be ‘activated’ once the employer has knowledge of a particular person's disability. 101
At the same time, system-level design remains relevant when assessing employer liability. The use of a tool without accessibility functionalities might constitute indirect discrimination if its use unjustifiably disadvantages persons with disabilities and there are equally effective alternatives reasonably available; 102 and employers should bear this in mind when selecting or developing algorithmic tools. Moreover, although in-built accessibility functions will go some way to securing disability equality, they will not be sufficient for all individuals. Reasonable accommodation ‘might mean providing alternative…tools to accommodate applicants with disabilities’. 103
In short, although the pattern-identifying nature of machine learning algorithms poses concerns in the context of disability, the EU legal framework already recognises the need for employers to respond to the uniqueness, fluidity, and context-specificity of disability through the reasonable accommodations obligation. This is not to say that the law as it stands is sufficient to ensure equality: reasonable accommodations are contingent on disclosure; assessing the proportionality of accommodation is often difficult; and ‘opting out’ of an unsuitable assessment may not be a panacea if the alternative process is in practice inequivalent. 104 However, these issues—while pressing—are again not novel to the algorithmic management context.
Getting cases to court
A review of the doctrinal challenges across the equality framework reveals that the EU framework is remarkably robust. Where doctrinal issues arise, they are generally either long-standing unresolved questions, or stem from judicial interpretations of discrimination as a human phenomenon. Other challenges highlighted by scholars—such as the personal and sectoral scope of EU equality law—are not particular to algorithmic discrimination, but arise across the board. 105 In short, ‘[m]any of the issues relating to the discriminatory potential of algorithms are not unique to algorithms but are problems that have been a staple of antidiscrimination law’. 106 The fact that no new legislation has been adopted at Member State level may well reflect a general consensus that existing legislation ‘has a sufficiently wide material scope to cover most examples of algorithmic discrimination’. 107
If the law on the books is adequate, the focus turns to enforcement. Getting cases of algorithmic discrimination before courts is critical for two reasons. First, and most obviously, if equality law is not enforceable, then fundamental rights are not protected. Second, while the doctrinal issues examined above do not pose a novel threat to the framework of EU equality law itself, they do require judicial consideration—and case law will only be updated if cases are brought.
Algorithmic discrimination gives rise to a concerning paradox when it comes to enforceability. On the one hand, automating previously human decision-making processes can render discriminatory criteria more traceable and outcomes more quantifiable. The shift towards algorithmic management should therefore hold significant potential for addressing persistent inequalities in the labour market, including those which stem from implicit bias. 108 On the other hand, algorithmic decision-making processes are rarely transparent, and scholars consistently point to algorithmic opacity as the key barrier to litigation and enforcement action. 109 The ‘prevailing non-discrimination model’ in EU law is an ‘adjudicative adversarial system based on individual litigation’, which means that the ‘burden of uncovering discrimination…and bringing a case to court lies with the victim’. 110 Proving discrimination without access to information is nearly impossible, and cases on algorithmic discrimination remain rare. 111
In technical terms, some machine learning algorithms do operate in a ‘black-box’ manner, such that their functioning is impenetrable even to the developers. While technical opacity is often cited as a novel and intractable issue, the true prevalence of black-box algorithms in the employment context remains unclear. 112 Moreover, technical opacity does not render the law ineffective. 113 Knowledge of the algorithm's inner workings is not a prerequisite to bringing a successful case: if a claimant can show that an algorithmic system is putting persons with their protected characteristic at a disadvantage, then they have a prima facie case of indirect discrimination; 114 and if they can show that a similarly situated individual without their protected characteristic would have received more favourable treatment, for example through a counterfactual simulation, then the two facts together should be sufficient to bring a direct discrimination claim. 115 In cases of indirect discrimination, if the employer is unable to explain the algorithm's workings, then a court could find that objective justification is impossible. 116 In cases of direct discrimination, the employer's inability to provide a non-discriminatory explanation for the outcome will result in a finding in favour of the claimant. 117
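The ‘counterfactual simulation’ mentioned above can be as simple as submitting two otherwise identical profiles to the scoring system and comparing the outputs. A rough sketch follows; the scoring function, field names, and school names are placeholders standing in for whatever system an employer actually deploys.

```python
# Hypothetical sketch of a counterfactual test: vary only a protected-linked
# field and compare the scores the system returns.
from typing import Callable, Dict


def counterfactual_gap(score: Callable[[Dict], float], profile: Dict,
                       field: str, alternative: str) -> float:
    """Score difference when a single protected-linked field is changed."""
    return score(profile) - score({**profile, field: alternative})


# Placeholder for the employer's (possibly black-box) scoring system.
def score(profile: Dict) -> float:
    return 0.7 if profile["school"] == "Example Boys' School" else 0.5


applicant = {"experience_years": 6, "school": "Example Girls' School"}
gap = counterfactual_gap(score, applicant, "school", "Example Boys' School")
print(f"Penalty associated with the girls' school: {-gap:.2f}")
```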
The problem is therefore not that technical opacity is an insurmountable challenge, but that potential claimants do not have access to the evidence necessary to bring a case. The burden is initially on the claimant to present facts from which discrimination may be presumed, 118 and EU law does not directly impose any disclosure obligations on employers. 119
Judicial provision of transparency
Once again, there is scope for judicial reinterpretation to go some way towards addressing this issue. Gerards and Xenidis suggest that the principle of effectiveness in EU law could provide a basis for national courts to ‘consider the refusal to disclose information [about algorithmic performance] as contributing to the establishment of a prima facie case of discrimination, thus shifting the burden of proof to the respondent.’ 120 The transparency issue is not new: the absence of any right to information has long created problems for claimants in discrimination cases, and one might ask whether the jurisprudence on this point will change in response to automation. 121 The harm of opacity is, however, pronounced in the algorithmic context, as discrimination is easier to detect in technical terms but less likely to be perceived from the perspective of the victim. 122
One avenue to enhanced transparency could be through a ‘joint reading’ of EU non-discrimination law and data protection law. 123 Grozdanovski argues in favour of such a reading, relying on Article 22 of the General Data Protection Regulation (GDPR), which provides for safeguards against solely automated decision-making with significant effect. Where an employer decides that making a significant decision—such as the denial of a bonus payment or selection for redundancy—on a solely automated basis is necessary for the performance of the employment contract, 124 then certain ‘suitable safeguards’ must be provided, including a data subject right to ‘obtain human intervention’ and ‘contest the decision’. 125
In many cases, however, employment decisions will not be solely automated: managers may consider algorithmically generated scores as one factor within a broader evaluation, for example, and Article 22 will not apply in such cases. Moreover, although transparency is a principle of data protection law and the GDPR has provided new access rights, data protection rights are highly individualised and will not furnish data subjects with the comparative information necessary to ground a discrimination claim. 126 Class action lawsuits based on mass data access requests might appear to provide an alternative in theory, but face various barriers in practice. 127 In short, while stronger judicially created access rights for individual claimants would be helpful, they would not be a panacea.
Legislative reform
Access to information and evidence is critical to ensure that litigation at the individual and collective level can take place, and that public bodies can be empowered to act. 128
Expanded rights for potential claimants to access information about the systems used to evaluate them would go some way towards meeting that need.
In short, the legislator should facilitate enforcement of existing equality law by putting information into the prospective litigator's hands: that is, by requiring the publication of information on algorithmic management tools in a specified format and to a specified standard. No current proposal under EU law meets that imperative. The proposed AI Act classifies algorithmic management systems as high risk and requires that certain documentation be created about the functioning and performance of such systems, 131 but does not furnish job applicants or workers with access to this documentation. 132 The proposed Platform Work Directive is stronger, providing platform workers (but not job applicants) with access to information about algorithmic management systems, including the ‘categories of decisions that are taken or supported’ by such systems and the ‘main parameters’ that the systems ‘take into account’. 133 The proposal does not contain any specific provisions on information about equality outcomes, however, and the absence of standards means that the information might not be sufficient for claimants to ground a case of discrimination.
One of the most promising legislative innovations in the context of algorithmic equality is the algorithmic impact assessment. 134 Employers could be required to publish impact assessments which provide the information necessary to identify and (where relevant) ground a claim of direct or indirect discrimination. The data protection impact assessment (DPIA) obligation—which is imposed by the GDPR and applies to many algorithmic management systems—provides a solid foundation for such a duty, but the DPIA is subject to no transparency requirements and does not set any concrete standard or format for evaluating rights impacts. 135 Meanwhile, the Platform Work Directive would require digital labour platforms to ‘monitor and evaluate’ the impact of algorithmic management systems, but the draft Directive does not particularise the format and publication of the results. 136
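For illustration, the kind of group-level outcome statistics that a published impact assessment could be required to contain might look like the sketch below; the data, field names, and the disparity ratio used are assumptions made for the purpose of the example rather than any legislated standard or EU legal test.

```python
# Hypothetical sketch: group-level selection statistics of the sort an
# algorithmic impact assessment could be required to publish.
from collections import Counter

# (group, selected) records from a screening stage; data are invented.
records = ([("men", True)] * 180 + [("men", False)] * 320 +
           [("women", True)] * 90 + [("women", False)] * 410)

selected = Counter(group for group, ok in records if ok)
totals = Counter(group for group, _ in records)
rates = {group: selected[group] / totals[group] for group in totals}

for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.1%}")

# A simple disparity ratio, sometimes used as a screening heuristic.
print(f"selection-rate ratio: {min(rates.values()) / max(rates.values()):.2f}")
```

Publishing figures of this kind in a specified format would give prospective claimants the comparative evidence that individual data access rights cannot provide.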
Given that algorithmic discrimination does not pose any insurmountable challenges to the statutory framework of EU equality law, legislative reforms should be aimed at securing effective enforcement. Mapping out the requirements for an adequate impact assessment must therefore be a priority for policymakers. Elsewhere in this issue, Anna Thomas and I propose an algorithmic management impact assessment obligation which would build in transparency via stakeholder consultation and publication of redacted assessments. 137
Conclusion
Reference was made at the start of this article to a proposal which would have amended UK non-discrimination law—the framework of which largely reflects EU equality law—to respond to algorithmic discrimination. In response to the Government's assertion that the law ‘already protects workers against direct or indirect discrimination by computer or algorithm-based decisions’, 138 the Opposition contended that the law was not translating into practice: the Minister's description of the labour market was described as ‘Panglossian’. 139 While the amendment proponent's critique took aim at the ‘sharp[ness]’ of the law, 140 the real problem is its enforceability. While ADMS brings new opportunities to uncover and challenge unlawful discrimination, man-made opacity is stymying successful litigation. It is this latter problem on which legislators should fix their attention. While the material scope of the Directives is flexible enough to allow for updating via careful judicial interpretation, a strong policy response is required to buttress these provisions with enhanced transparency. Such transparency should be achieved through the mandating of algorithmic impact assessments in the employment context.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: I acknowledge funding from the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement No 947806). I am grateful to Jeremias Adams-Prassl, Halefom Abraha, Michael ‘Six’ Silberman, Johanna Wenckebach, Daniel Pérez del Prado, Tobias Müllensiefen, Dan Calacci, Simona de Heer, Lukas Hondrich, and Krishna Gummadi for discussions which informed this work.
