Abstract
Algorithmic recruitment systems are emerging in the EU job market. Such systems could technically rely on AI and automated decision-making, but it is unclear whether this is lawful. In addition to other rules, the ambiguously worded GDPR Article 22 regulates automated decision-making. It remains unresolved whether the main rule in GDPR Article 22(1) grants applicants a right not to be subject to automated decisions or prohibits employers from making automated decisions. Further, it appears undetermined what counts as automated decision-making under GDPR Article 22(1) and whether the GDPR Article 22(2) exceptions to the main rule apply in a recruitment context. This article examines the legal boundaries set by GDPR Article 22(1) and (2) on the use of automated decision-making in algorithmic recruitment systems. The aim is to clarify whether employers in the EU are allowed to use algorithmic recruitment systems with automated decision-making capabilities. The examination indicates that, even if deemed a prohibition, GDPR Article 22 does not completely disallow such systems. Instead, the analysis suggests that automated decision-making could be allowed for recruitment under the contractual necessity exception of Article 22(2)(a), for instance, where it would be impossible to go through an abundance of applications by hand in a reasonable time and manner. However, the explicit consent exception of Article 22(2)(c) would apply in only an extremely limited number of recruitment cases, if ever. Consequently, it seems that, regardless of the rather strict legal boundaries, algorithmic recruitment systems could utilise automated decision-making in certain limited cases and after diligent assessments. Automated decision-making could be worthwhile, for example, in mass-scale recruitment processes which could not reasonably be handled without automation.
Introduction
Artificial intelligence (AI) 1 solutions and other algorithmic systems have gained ground in numerous fields, including recruitment. 2 Typically, algorithmic recruitment 3 systems rely on complex machine learning algorithms. 4 These systems can assist human recruiters in various phases of the recruitment process or automate some parts of it completely. Algorithmic recruitment systems promise to accelerate recruitment processes, improve their accuracy and quality, reduce human recruiters’ workload and lower costs. 5 Employers could benefit from these systems, for instance, in the screening phase of the recruitment process, where diverse algorithms could evaluate applicants based on their resumés, applications or specific tests. 6 Algorithmic recruitment systems could easily rank applicants and exclude a majority of the applications without human involvement. 7
Hence, it is not surprising that algorithmic recruitment systems also involve notable risks. 8 For instance, applicants’ rights to data protection, privacy and non-discrimination may be at risk when these systems process vast amounts of applicants’ personal data and possibly make automated decisions concerning them. On a societal level, algorithmic recruitment systems may perpetuate existing inequalities. 9 Recent incidents of problematic recruitment algorithms 10 have shown that the risks are not merely theoretical. Fortunately, regulation exists to address these risks.
Algorithmic recruitment systems are already subject to numerous regulations issued at EU level, and with the European Commission's recent proposal for an Artificial Intelligence Act (hereinafter ‘the AI Act’), 11 regulation appears to be increasing. In addition to the non-discrimination Directives 12 and varying national employment legislation, the General Data Protection Regulation 13 (hereinafter ‘the GDPR’) constrains the use of algorithmic recruitment systems in the EU. In particular, GDPR Article 22, which regulates automated individual decision-making, is currently one of the most important provisions constraining the adoption of AI and algorithms using personal data. 14 According to GDPR Article 22(1), the data subject 15 has ‘the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’. GDPR Article 22(2) allows exceptions to this general rule, stating that it does not apply if the decision is (a) necessary for entering into or the performance of a contract, (b) authorised by Union or Member State law, or (c) based on the data subject's explicit consent, while GDPR Article 22(3) requires additional safeguards to be implemented where automated decision-making is exceptionally used. Finally, GDPR Article 22(4) restricts automated decisions based on special category data. Apart from a few recent studies, the possible constraints GDPR Article 22 imposes on the use of automated decision-making in recruitment have not, as yet, been the subject of detailed research. 16
In prior automated decision-making research, the recruitment context has been noted only briefly, often as an example. 17 While general interpretations of automated decision-making provide answers to recruitment-specific questions, the special characteristics of algorithmic recruitment, such as the imbalance of power, must not be overlooked. 18 Recruitment decisions potentially have a significant and long-lasting impact on applicants’ financial situation, sense of purpose, housing possibilities and quality of life, all of which tend to emphasise the risks of automated decision-making in recruitment. 19 These factors necessitate somewhat higher protection in algorithmic recruitment. Therefore, the legal boundaries of automated decision-making in recruitment should be examined to help European actors navigate the increasing number of algorithmic recruitment systems.
This article aims to clarify whether employers 20 in the EU are allowed to use algorithmic recruitment systems with automated decision-making capabilities. The analysis focuses on the legal boundaries set by GDPR Article 22(1), the main rule, and Article 22(2), the exceptions to it, as these are the first hurdles employers should contemplate. Since automated decision-making seems particularly promising in the screening phase of recruitment, 21 a private employer's algorithmic screening tool that ranks applicants is used as an example to test the legal boundaries throughout the article. The further safeguards required by GDPR Article 22(3) and the limitations set out in Article 22(4) may also considerably confine the usability of automated decision-making in algorithmic recruitment. 22 However, these requirements need to be considered only if automated decision-making is first deemed lawful under the general rule and its exceptions. Hence, these secondary requirements are excluded from the scope of this article.
The article analyses first, in Section 2, whether GDPR Article 22(1) grants applicants the right not to be subject to automated decisions or prohibits employers from making automated decisions. Thereafter, Section 3 assesses what kind of processing constitutes automated decision-making in algorithmic recruitment. Section 4 analyses the potential exceptions under which employers could justify the use of automated decision-making in algorithmic recruitment. The interpretations chosen in response to all these questions significantly affect employers’ possibilities to use automated decision-making in recruitment. Finally, Section 5 concludes the examination and claims that GDPR Article 22(1) and (2) do not completely disallow the use of automated decision-making in recruitment. However, automated decision-making seems to be lawful only in exceptional cases.
Automated decision-making in recruitment: Restricted if applicants so wish or generally prohibited?
GDPR Article 22 regulates automated individual decision-making. GDPR Article 22(1) states that ‘the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’, which seems to be open to various interpretations. 23 In algorithmic recruitment this could be read either as constituting an applicant's enforceable right not to be subject to automated decisions 24 or a prohibition which prevents employers from making automated decisions in general. 25 The chosen interpretation affects employers’ possibilities to utilise automated decision-making in recruitment as well as applicants’ protection when automated recruitment decisions are made.
Applicant's right not to be subject to automated decisions
As several scholars have remarked, GDPR Article 22(1) could be considered an enforceable right of the data subject not to be subject to automated decisions. 26 If this interpretation is adopted, employers may, in general, use algorithmic recruitment systems that resort to automated decision-making. However, if some applicants exercise the right not to be subject to automated decisions, the employers must consider whether they can continue the automated decision-making. This may be possible if one of the exceptions listed in GDPR Article 22(2) applies and the employers implement the additional safeguards required by GDPR Article 22(3) or the legislation referred to in Article 22(2)(b). If no exception applies, human recruiters must make decisions regarding the applicants who have exercised the right. Nevertheless, automated decision-making concerning the other applicants can continue.
This interpretation finds support in various arguments presented, among others, by Mendoza, Bygrave and Tosoni. Firstly, if interpreted literally, GDPR Article 22(1) seems to create a right. It explicitly refers to ‘a right’, whereas some other provisions, such as GDPR Article 22(4) 27 and Article 11(1) of the Law Enforcement Directive (hereinafter ‘the LED’), 28 are more clearly termed prohibitions. 29 Allegedly, Article 22(1) would have been worded unambiguously as a prohibition if it had been intended as such. 30
Secondly, internal contextual reading of the GDPR could, to some extent, support this interpretation. For instance, Article 22 is placed in Chapter III titled ‘Rights of the data subject’, and other provisions, such as Article 12(2), refer to it regarding ‘exercise of data subject rights’. 31 However, this does not necessarily denote that Article 22 was intended as a right. Article 22(3) safeguards are phrased as data subject rights, which could explain the placement and references. 32 Moreover, the rights framing of Chapter III should not be overemphasised as other provisions in the chapter could also be read as establishing duties on controllers rather than conferring rights to data subjects. 33 Further, the title of Article 22 does not frame the provision as conferring a right, as do many other provisions of Chapter III. 34 In addition, this interpretation could render the consent derogation in Article 22(2)(c) meaningless since the data subject cannot object and explicitly consent to the same automated decision-making. 35
Thirdly, external contextual arguments could be presented in favour of this interpretation. The legislative history of the GDPR does not strongly indicate that Article 22(1) was meant to be a prohibition, as Bygrave and Tosoni have pointed out. 36 The wording of GDPR Article 22(1) is, in this regard, similar to its predecessor, Article 15(1) 37 of the former Data Protection Directive (hereinafter ‘the DPD’), 38 which, based on its legislative history, was intended as a right to object. 39 Thus, Tosoni has argued that if the legislator had intended GDPR Article 22 to be a prohibition, it would be visible in its drafting history or it would be worded otherwise. 40 Furthermore, this interpretation can be supported by external documents of the Council of Europe, which provide for rights and with which the GDPR should be harmonised. 41
By contrast, teleological arguments mostly contradict this interpretation. Whereas the main purpose of the GDPR appears to be the protection of natural persons’ fundamental rights, especially data protection and privacy, 42 GDPR Article 22 apparently aims, among other things, to protect the dignity of humans, 43 to protect them from the potentially negative effects of automated decisions, 44 to ensure the quality of the decisions 45 and to protect the right to due process. 46 It could be questioned whether the rights interpretation sufficiently fulfils these objectives. When this interpretation is chosen, employers are not obliged to assess the permissibility of automated decision-making under the exceptions of Article 22(2) or to implement the safeguards of Article 22(3) if the applicants do not exercise the right. Among others, Brkan has considered that this interpretation could lead to extremely unfavourable consequences for the data subjects if they fail to exercise the right. 47
In practice, several obstacles could prevent applicants from exercising this right. First of all, applicants may not be aware of the automated decision-making, even if they are informed as required by GDPR Article 13(2)(f). 48 Applicants might not have the time to read through all the information provided to them 49 or may not understand it even if they do. 50 Moreover, applicants may be oblivious to their right to object, as employers are not obliged to inform them about it. 51 Even if applicants know of the right, they might not otherwise be prepared to intervene and object to the automated decision-making. 52 Objection requires active effort, which makes it easier to accept the default option. Furthermore, applicants could fear losing their opportunity to be selected for the job if they exercise the right, and hence tolerate automated decision-making. The rights interpretation could thus lead to differing treatment of applicants based on their own activity and worsen the situation of already marginalised applicants, who may lack the capability or resources to exercise the right.
A prohibition of automated recruitment decisions
The second interpretation considers GDPR Article 22(1) as a prohibition. If this interpretation is adopted, employers are not generally allowed to use automated decision-making in recruitment. However, the prohibition is qualified. Employers can employ automated decision-making if some of the exceptions listed in GDPR Article 22(2) apply and if safeguards required in GDPR Article 22(3) 53 are implemented. If no exceptions apply, or employers are unable to implement the required safeguards, human recruiters must make decisions concerning all applicants.
Various arguments support the prohibition interpretation, even though a purely literal reading of GDPR Article 22(1) seems to be against it. Firstly, several internal contextual arguments seem to endorse this interpretation. The structure of Article 22, where paragraph 1 provides the main rule, paragraph 2 sets out exceptions to it and paragraph 3 details additional safeguards, supports the prohibition interpretation, according to Dreyer and Schulz. 54 Mendoza and Bygrave have noted that, while termed a right, Article 22(1) differs from other rights as it is not ‘a right to something’ but rather ‘a right not to be subject to a particular type of decision’. 55 Further, the section title ‘Right to object and automated individual decision-making’ could, according to the WP29, 56 imply that Article 22 is not a right to object similar to that in Article 21. 57 Moreover, Article 22 lacks an explicit duty to inform data subjects about the right to object, such as the one included in GDPR Article 21(4). 58 In addition, it could be argued that if Article 22(1) had been intended as an enforceable right, it would have been mentioned in GDPR Article 13(2)(b), which requires controllers to inform data subjects about the other enforceable rights.
GDPR Recitals could also be read as supporting the prohibition. Pursuant to GDPR Recital 71, automated decision-making ‘should be allowed where expressly authorised by Union or Member State law […] or necessary for the entering or performance of a contract […] or when the data subject has given his or her explicit consent’. According to the WP29, this implies that automated decision-making under Article 22(1) would not be generally allowed. 59
Further, certain inconsistencies in the logic of GDPR Article 22 could arise if it is not considered a prohibition. Firstly, the human intervention safeguard required by GDPR Article 22(3) would be purposeless if Article 22(1) were deemed a right. 60 Secondly, Pehrsson has pointed out that limiting the protection of special categories of data in Article 22(4) to decisions made under the Article 22(2) exceptions would be questionable if Article 22(1) were not intended as a general prohibition. 61 Thirdly, as mentioned in Section 2.1, the meaningfulness of the consent derogation could be questioned if the rights interpretation is chosen. 62
Secondly, external contextual arguments can be presented to support the prohibition interpretation. In recruitment, this interpretation would be in line with section 5.5 of the International Labour Organisation's Code of Practice on the Protection of Workers’ Personal Data (1997), which states that ‘decisions concerning a worker should not be based solely on the automated processing of worker's personal data’. In favour of this interpretation in general, Brkan has referred to the whole data protection framework's coherence and, in particular, to the LED, which explicitly terms a similar provision as a prohibition. 63 However, Tosoni has questioned the coherence argument and has claimed that the differing nature of the GDPR and the LED, as well as the law enforcement authorities’ powers, justify the differences in the protection. 64
Thirdly, teleological arguments strongly promote this interpretation. Since the wording of GDPR Article 22 seems ambiguous, interpreting it in the light of its objectives 65 appears to have a good foundation. 66 The prohibition guarantees stronger protection to applicants by default, as the constraints envisaged in Article 22 always apply when automated decision-making takes place, regardless of the applicants’ actions. 67 Automated decision-making can be used in recruitment only if it falls under the exceptions mentioned in GDPR Article 22(2). Further, the applicants will always have the right to resort to the safeguards, such as challenging the automated decision made, even if they have not first objected to the automated decision-making. Hence, the enforcement problems inherent in the rights interpretation will not undermine the protection in this case. The prohibition can better and more uniformly protect applicants’ fundamental rights and guard applicants against the specific risks related to automated decision-making. Consequently, it conforms better to the aims of the GDPR in general and of Article 22 in particular. 68
Nevertheless, consequentialist counterarguments can also be presented. Tosoni has argued that this interpretation could impede the development and use of automated decision-making. 69 Undoubtedly, the prohibition would restrict employers’ leeway more than interpreting GDPR Article 22(1) as giving rise to a right, since one of the exceptions of GDPR Article 22(2) would always have to apply and the safeguards of GDPR Article 22(3) would have to be implemented to make the automated decision-making lawful. However, after ensuring that automated decision-making is lawful regardless of the prohibition, employers could apply it consistently to all applicants. Thus, the process could be simpler and more predictable for both employers and applicants.
Noteworthy arguments for both interpretations have been presented, and the debate is not finally settled, as the Court of Justice of the European Union (hereinafter ‘the CJEU’) has not yet taken a stand on the matter. 70 For the time being, the prohibition interpretation, which sets stricter legal boundaries for automated decision-making and provides a higher level of protection, seems preferable, especially in algorithmic recruitment. Considering the specific risks related to recruitment and the possibility of tough sanctions for breaching the GDPR, 71 the prohibition interpretation may be the safest choice for both applicants and employers until the courts or the legislator have clarified the matter. Thus, this article proceeds on the presumption that Article 22 creates a qualified prohibition.
Even if Article 22(1) is later interpreted as giving rise to a right, it remains relevant to know what is considered to be automated decision-making and whether the Article 22(2) exceptions apply in algorithmic recruitment. These questions will be addressed hereinafter.
What is prohibited automated decision-making in algorithmic recruitment?
Since GDPR Article 22(1) prohibits only automated decision-making, employers should be able to assess whether algorithmic recruitment systems make automated decisions. This may be more arduous than it appears at first sight, as the definition of automated decision-making is open to various interpretations. This section aims to clarify what kind of processing constitutes automated decision-making in algorithmic recruitment.
Common criteria for automated decision-making
The definition of automated decision-making, derived from GDPR Article 22(1), is crucial in drawing the line between prohibited automated recruitment decision-making and other processing of applicants’ personal data. Common criteria for automated decision-making, which several scholars seem to acknowledge in one form or another, are that (i) a decision is made, (ii) which is based solely on automated processing, and (iii) which has legal or similarly significant effects on the data subject. 72 If these three criteria are simultaneously fulfilled, the processing can be considered automated decision-making. Nonetheless, there seem to be different ways to interpret each of the criteria, and other criteria have also been proposed.
The first criterion is that a decision is made. 73 The GDPR does not specify whether the decision has to be final, or whether an intermediate or individual step in the automated process suffices, as Kamarinou et al. have remarked. 74 If interpreted narrowly, only final decisions would count as decisions under Article 22(1). In a broad interpretation, which seems to prevail in previous literature, intermediate actions of wider processes could also be deemed decisions. 75 The legislative history of the GDPR could support this interpretation, as under the DPD ‘a decision’ was interpreted broadly. 76 According to GDPR Recital 71, a decision may include ‘a measure’. Further, the Recital mentions ‘e-recruiting practices’, and not e-recruitment decisions, as one example of decisions. Typically, e-recruiting practices cover several steps of the recruitment process that precede the final recruitment decisions, such as job advertisements or screening. 77 Thus, Recital 71 could imply that decisions, as meant by Article 22(1), could also be made in the intermediate steps of the recruitment process.
The broad interpretation would better protect applicants’ personal data and other interests related to automated decisions, as it would also cover intermediate decisions, which may already have significant effects. For instance, automated screening systems may produce intermediate applicant rankings, on the basis of which most applicants are automatically rejected. If the broad interpretation is adopted, more algorithmic recruitment systems will fall under the scope of the Article 22(1) prohibition than under the narrow interpretation. If the decision criterion is interpreted broadly, the different phases of the recruitment process may constitute automated decision-making independently of each other and should thus be assessed separately in the light of the criteria for automated decision-making.
The second criterion, that the decision is based solely on automated processing, has been debated more extensively in previous research. The main lines of interpretation can be divided into a narrow and a broad approach. 78
The narrow approach rests on a literal 79 and formalistic 80 interpretation of the wording ‘based solely on automated processing’. The word ‘solely’ would suggest, according to Wachter et al., that a strict reading was intended. They also refer to the legislative history, namely to the rejected broader wording proposed by the European Parliament, which would have included decisions based ‘predominantly’ on automated processing. 81 Hence, Wachter et al. have considered that the wording permits an interpretation where ‘nominal human involvement’ or ‘any human involvement’ is sufficient to make processing not solely automated. 82 Along the same lines, Zarsky has considered that introducing ‘very minimal human interaction’ would suffice to circumvent Article 22(1). 83 If this approach is adopted, the processing is solely automated only when there is no human involvement whatsoever in any phase of the process. In recruitment, this interpretation would mean that the processing is not automated decision-making if a human recruiter, without thinking, pushes a button and accepts machine-made applicant rankings. 84
In contrast, the broad approach appears to depart from the strict literal interpretation. In this approach decisions are deemed solely automated if humans are only nominally involved. However, it is unclear what kind of more substantial human involvement is required to make the second criterion inapplicable. For instance, Mendoza and Bygrave consider that processing will be solely automated if the human involved in the processing ‘fails to exercise any real influence on the outcome of the decision-making process’. 85 Such a failure could occur, for example, if the human does not evaluate the automated results before a decision based on them is finalised. 86 On a similar note, the WP29 seems to consider that decisions are solely automated if the human involvement is fabricated or the human oversight is not meaningful. According to the WP29 meaningful human oversight seems to require that the human has ‘the authority and competence to change the decision’ and to examine all the relevant data. 87
Systematic and teleological arguments seem to support the broad approach. Since there is nominal human involvement at some point in most automated processes, 88 the scope of automated decisions, and thus of the protection under Article 22, would be very limited if the narrow approach were chosen. The functionality and effectiveness of the provision under the narrow approach may be questioned, as it could be doubted whether Article 22 would ever apply. In comparison, the broad approach places more automated processing situations within the scope of Article 22 and thus gives it more effect. It limits the scope of allowed automated decision-making much more than the narrow approach. For instance, many algorithmic recruitment systems could be considered solely automated, since the humans involved might be no more than nominally involved due to lack of time, limited technical understanding, non-transparent systems or over-reliance on the machines. 89 In line with the objectives of the GDPR and of Article 22 more specifically, 90 the broad approach provides stronger protection for applicants.
The third criterion requires that the decision has legal or similarly significant effects on the data subject. The first part of this criterion, legal effects, does not seem to induce interpretative debate: several scholars appear to consider that a decision has legal effects if it changes, shapes or determines the legal rights and duties or the legal status of the data subject. 91 The second part of the criterion, ‘similarly significant effects’, leaves more room for interpretation. While legal effects can be assessed objectively, the assessment of significant effects could require consideration of more subjective elements, as the significance of the effects may vary between data subjects. 92 In recruitment, employers are typically unaware of the applicants’ specific situations and have to consider the potential effects of the decisions at a general level before the processing is initiated.
Varying ways of outlining the similarly significant effects in general have been proposed. For instance, Mendoza and Bygrave seem to consider that the similarly significant effects should have a ‘non-trivial impact on the status of a person relative to other persons’. 93 The WP29 appears to perceive that a decision of similarly significant effects should have (i) the potential to significantly influence the ‘circumstances, behaviour or choices of the individuals’; (ii) an extended or standing impact on the data subject; or (iii) lead even to ‘the exclusion or discrimination of individuals’. 94 Dreyer and Schulz have noted that the economic and practical importance of the decision and permanence of adverse effects are central in the assessment. 95 Likewise, Mendoza and Bygrave have highlighted that the more adverse the effects, the more likely they are to be significant for the data subject. 96 Hence, merely emotional, 97 troublesome or inconvenient effects 98 will probably not cross the threshold. However, some recruitment practices could have similarly significant effects on the applicant. 99 For example, decisions that deny an employment opportunity or position applicants at a serious disadvantage could, according to the WP29, have similarly significant effects. 100
Automated recruitment decisions could have legal effects, similarly significant effects or neither, depending on the situation. A decision has legal effects on the applicant if the applicant is provided with an employment opportunity and an employment contract is entered into. In the screening phase this is a rare outcome, as typically even the most successful applicants are subject to further testing and interviews, based on which the employer makes the final decisions. Thus, screening decisions rarely have legal effects. However, screening decisions will likely have similarly significant effects on the applicants, as they can affect applicants’ choices and have a significant impact on their economic situation, welfare and circumstances in life, even for an extended period. The significance of the effects can differ considerably depending on the applicants’ current employment and financial status. Further, the effects might not be as significant if the screening decision has a positive outcome for the applicant. Since a screening decision may significantly affect at least some of the applicants, it is safer to consider the third criterion fulfilled.
Other proposed criteria for automated decision-making
In addition to the three most commonly accepted criteria for automated decision-making introduced above, other criteria, such as the processing of personal data, 101 the individuality of the decisions 102 and profiling, 103 have also been proposed. These other criteria will be briefly presented and analysed in this subsection.
The first additional criterion is that the decision must be based on personal data. 104 This requirement emerges directly from the scope of application of the GDPR 105 and is a prerequisite for the applicability of the GDPR. 106 Hence, this criterion seems natural, but superfluous when assessing the special requirements set for automated decision-making. Consequently, it is understandable that this criterion has not been separately acknowledged in most accounts.
The second additional criterion is that the decision must be individual. This would leave collective decisions out of the scope of GDPR Article 22 protections. Brkan has noted that such a reading could be justified by the general scope of application in GDPR Article 1(1), the scope ratione personae and the textual interpretation of Article 22. However, after considering the imbalance and loopholes such an exclusion would create, Brkan proposes that collective automated decisions should be included in the scope of Article 22, for instance, by considering a group decision as a cluster of individual decisions. 107 Likewise, Mendoza and Bygrave seem to interpret Article 22, and particularly its reference to the ‘data subject’, as meaning that the decision has to be targeted at a person. 108
In recruitment, the decisions are often targeted at individual applicants. For instance, if in the screening phase certain groups are excluded based on political activity or ethnicity, the decision could be seen as targeted at the applicants who belong to those groups. Nevertheless, the situation will be more complicated if those groups are already excluded in the job advertising phase, 109 as there might not be any identifiable applicants. 110 If no potential applicants are identifiable based on the data, the GDPR will not apply, since it applies only to the processing of personal data. Consequently, the requirement of individuality of the decision does not appear to be decisive in recruitment.
The third, and possibly the most relevant, additional criterion is that the decision must be based on profiling. According to GDPR Article 4(4) ‘“profiling” means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements’. This criterion is understandable, as GDPR Article 22(1) explicitly mentions profiling. 111 In support of this criterion Sartor and Lagioia have referred to GDPR Recital 71, which states that ‘such processing includes “profiling”’. 112 Furthermore, GDPR Recital 73 mentions ‘decisions based on profiling’ when referring to Article 22. Mendoza and Bygrave have considered that the preparatory works and prior versions of the GDPR, which focus particularly on profiling, also promote this criterion. 113
In contrast, valid counterarguments against this criterion have also been presented. For instance, Bygrave has, in his later writings, considered that commas have been used in Article 22(1) and its title to separate profiling from other automated processing in order to indicate that profiling is optional, unlike in the DPD Article 15. 114 Further, if this criterion is accepted, the mention of ‘automated processing’ in Article 22 will be superfluous. 115 Teleological counterarguments can also be presented. The profiling criterion adds one more hurdle for the applicability of GDPR Article 22, thus limiting its scope and the protection it provides. Hence, the objectives of the GDPR will possibly be better achieved if this criterion is not included. Although profiling often precedes automated decision-making, 116 that is not necessarily always the case. 117 In algorithmic recruitment automated decisions can also be made without profiling. For instance, screening decisions could be made simply based on qualifications, without further evaluation of applicants’ personal data. Even though such decisions might not be as risky or problematic as decisions based on profiling, such simple automated decisions could also have significant negative effects on applicants and would thus deserve protection under GDPR Article 22.
While all the additional criteria are conceivable, the criteria of personal data processing and individuality of the decisions are not included in the specific criteria for automated decision-making, since the general scope of the GDPR already seems to cover them. Moreover, the teleological and systematic arguments call the profiling criterion into question. Until the ambiguities of GDPR Article 22 are clarified by the CJEU or the legislator, it may be safer for employers to adopt a reading which provides broader protection.
When algorithmic recruitment systems resort to automated decision-making
Before implementing any algorithmic recruitment systems, employers should examine whether the system utilises automated decision-making. This can be done by assessing whether the following three criteria are simultaneously fulfilled in some phase of the algorithmic recruitment process, namely, whether (i) a decision is made, (ii) which is based solely on automated processing, and (iii) which has legal or similarly significant effects on the applicant. 118 However, the exact contents of these criteria seem to remain unconfirmed. In the recruitment process in particular, where applicants are typically in a subordinate position and in need of greater protection, a teleological and broader interpretation of the criteria may be justified. In such an interpretation the decision criterion would include intermediate decisions, such as screening decisions. The second criterion would encompass automated processing where humans are only nominally involved and do not actively influence the outcome of the decision-making. Finally, the third criterion would cover, for instance, decisions which significantly influence applicants’ opportunities to proceed in the recruitment process.
Some parts of the algorithmic recruitment systems could rather easily fulfil all the criteria and thus be deemed to use automated decision-making. This could be the case, for instance, if (i) algorithms screen applications and provide rankings, and (ii) human recruiters continue the process based on the rankings, without considering the merits of the applications themselves, which (iii) leads to denying some applicants an employment opportunity or otherwise significantly influencing their opportunities to proceed in the recruitment process. When the algorithmic recruitment system utilises automated decision-making, employers must ensure that it is permitted under one of the Article 22(2) exceptions before they proceed.
Nonetheless, not all algorithmic recruitment systems necessarily utilise automated decision-making. If the criteria (i-iii) do not all apply simultaneously at some stage of the recruitment process, that stage does not constitute automated decision-making and is not prohibited under Article 22(1). Automated decision-making is not used, for example, if a screening algorithm is employed only to assist human recruiters in the process, so that humans consider all the applications themselves, take into account the algorithm's recommendations, but make the final decisions as regards which applicants they invite to an interview independently of the algorithm, and, at times, also diverge from its recommendations.
Could automated decision-making be exceptionally allowed in recruitment?
Although algorithmic recruitment systems quite often include prohibited automated decision-making, they may nevertheless be feasible, since GDPR Article 22(2) contains exceptions that could potentially allow automated decision-making in recruitment. The exceptions are (a) necessity for entering into or performance of a contract, (b) authorising legislation with suitable safeguards and (c) explicit consent. To the author's knowledge, there is no such authorising legislation at an EU level that would allow automated recruitment decision-making. 119 Hence, this section focuses on the two exceptions, the necessity for entering into a contract and explicit consent, both of which could possibly apply in algorithmic recruitment.
Can automated decision-making be necessary for entering into an employment contract?
The first potentially applicable exception is contractual necessity. According to GDPR Article 22(2)(a), the prohibition of Article 22(1) does not apply if the decision ‘is necessary for entering into […] a contract between the data subject and a data controller’. Neither the preparatory works nor the Recitals of the GDPR elaborate on this exception. 120 Thus, it remains unclear how this exception should be interpreted and whether automated decision-making may, under some circumstances, be necessary for entering into an employment contract.
The requirement of contractual necessity in GDPR Article 22(2)(a) is not unique or novel. A similar requirement concerning lawful grounds for processing existed, for instance, in DPD Article 7(1)(b), and can now be found in GDPR Article 6(1)(b). The case law of the CJEU indicates that the necessity requirement should be interpreted strictly. 121 Moreover, the data protection authorities have supported a narrow interpretation of necessity. 122 This strict approach could be the starting point also when interpreting the contractual necessity exception of Article 22(2)(a).
Scholars and authorities have presented varying interpretations of the contractual necessity exception. Mendoza and Bygrave have deemed that the exception should not require indispensability, as the processing seldom actually has to take place without humans. 123 Thus, in order to fall under the scope of the Article 22(2)(a) exception, automated decision-making should be ‘required for the purpose of entering into […] the contract with the data subject’. 124 Brkan has stated that the necessity should be perceived ‘as an “enabling” requirement for the conclusion of a contract’. 125 According to Sartor and Lagioia, automated decision-making may be necessary for entering into a contract, for instance, due to a high number of cases and the capacity of machines to outperform human judgment. 126 The WP29 seems to consider that automated decision-making for contractual purposes or pre-contractual processing may be needed, for instance, if the quantity of the data to be processed makes regular human involvement impractical or impossible. However, it emphasises that automated decision-making will not be necessary if other ‘effective and less intrusive means’ to achieve the same result exist. 127
The applicability of the contractual necessity exception in recruitment is open to debate. Sartor et al. and Grozdanovski have considered that the contractual necessity exception can enable automated decision-making in recruitment. 128 Likewise, the WP29 seems to accept this view, at least under certain circumstances, since it provides an example where automated decision-making is deemed necessary in e-recruitment. 129 Even though Sanchez-Monedero et al. seem to find it unlikely that the contractual necessity exception would apply in recruitment, they acknowledge that in certain cases, where there are many thousands of applications, automated decision-making could be required and not just beneficial. 130 While Kelly-Lyth has not completely excluded the applicability of the contractual necessity exception, she has highlighted the importance of demonstrating the necessity of automated decision-making when the recruitment process has been handled without it in the past. According to her, the exception could apply, for instance, if, due to the volume of applications, the employer cannot process them without fully automated means, and previously the employer has been compelled to utilise gate-keeping measures such as screening out applicants who have graduated from less well-respected universities. 131
In recruitment, a balanced interpretation of the contractual necessity exception could be advocated. This interpretation would acknowledge the legislator's apparent aim to limit the scope of the exception by giving weight to the necessity criterion. Thus, this interpretation would also support the protective purpose of the GDPR. Simultaneously, the balanced interpretation would aim to preserve the functioning of the exception by enabling automated decision-making in certain cases. 132 Automated decision-making could be considered necessary, for instance, if it would be practically impossible or unreasonable to handle the applications by other less privacy- and rights-intrusive means.
Employers must carefully and objectively assess whether the contractual necessity exception applies in algorithmic recruitment. Employers must examine possible alternatives 133 for conducting the recruitment process without automated decision-making, taking into account their effectiveness, 134 capabilities 135 and the effects on applicants’ privacy and other fundamental rights. 136 Employers’ circumstances as a whole could probably be taken into consideration in the assessment, as they may affect, for instance, the other available processing options. The assessment could note, among other considerations, the number of applications expected, the size of operations, the number of personnel, the total number of applications received per year and the financial resources. More case-specific factors could also possibly be considered, such as the schedule of the recruitment process and number of simultaneously vacant positions.
Automated decision-making may be considered necessary for entering into an employment contract if it would be practically impossible or unreasonable to handle certain parts of the recruitment process without it by other less privacy- and rights-intrusive means. This could be the case, for instance, in the following scenario: An aspiring start-up company, with basically no current revenue, modest financing and only a handful of employees, is opening a new logistics facility where it needs 500 employees within a month's time. The facility is located in an area with a high number of potential applicants and the company is expanding amidst a recession. Thus, it expects to receive thousands of applications per position. In such a situation, it could be deemed impossible or impracticable to go through thousands of applications by hand in a reasonable time and manner. Hence, for example, the automated screening of applications could possibly be deemed necessary for entering into employment contracts. However, some other parts of the recruitment process, such as interviews, could possibly still be conducted without automated decision-making.
Can applicants give explicit consent to automated decision-making?
The second potentially applicable exception is explicit consent. Pursuant to GDPR Article 22(2)(c), the prohibition of Article 22(1) does not apply if the decision ‘is based on the data subject's explicit consent’. Article 22(2)(c) demands that the consent is ‘explicit’, but it does not further elaborate on that requirement. 137 According to the EDPB guidelines on consent, it should be interpreted as requiring an ‘express statement of consent’. 138 Based on the guidelines, explicit consent could be obtained, for instance, through a signed written statement, electronic signature, filled electronic form, two-stage verification or an email. 139 Since many of these measures to obtain explicit consent could technically also work in algorithmic recruitment, the requirement of consent being explicit probably does not cause serious problems. Instead, the general consent requirements set out in GDPR Articles 4 and 7 may prove to be more problematic in algorithmic recruitment.
GDPR Article 4(11) states that consent ‘means any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her’. GDPR Article 7 sets further conditions for consent. It requires, among other conditions, that the controller must be able to show that a consent has been given. 140 Further, it states that the consent may be withdrawn at any time and as easily as it was given. 141
The first, and possibly the major, question mark is whether the applicants are in a position to give their consent freely. GDPR Recital 43 indicates that consent will not be valid if there is a clear imbalance between the data subject and the controller. Generally, an imbalance of power exists between the employer and the applicants. 142 In mass-scale low-level recruitment in particular, where algorithmic recruitment systems often are used, 143 applicants are typically in a subordinate position. Thus, due to the imbalance of power, it is unlikely that an applicant's consent is freely given. 144 However, in theory, it is possible to imagine recruitment situations without the kind of imbalance of power that would render the applicant's consent invalid. This could be the case, for instance, if there is a shortage of qualified applicants and there are only a few experts, whom several employers want to hire.
Nonetheless, even if an imbalance of power exists, it does not mean that the applicant's consent can never be freely given. 145 Applicants are free to give their consent if they have a real opportunity to choose whether they consent to the automated decision-making or not. 146 Applicants must not feel compelled to consent 147 or face any negative consequences for not consenting. 148 Further, applicants must also be free to withdraw their consent at any time. 149 Consequently, if the employer relies exceptionally on the applicants’ consent, it should be able to demonstrate that the refusal of consent and subsequent processing of the application by human recruiters will not have any negative consequences for the applicants, such as discrimination or clearly longer processing times. 150
The second question mark is whether applicants can give informed consent. The consent may be informed only if the applicant has been transparently 151 notified of factors that are necessary for deciding whether to consent or not. 152 In addition to general issues, such as the purpose of the processing, the type of data processed and the right to withdraw consent, the applicant should be informed of automated decision-making. 153 Several scholars seem to consider that the information listed in GDPR Article 13(2)(f), including the existence of automated decision-making, meaningful information about the logic involved as well as the significance and the envisaged consequences of automated decision-making for the applicant, should be provided before consent is given. 154 Based on the information, applicants should be able to assess the significance of the consent and understand what they are agreeing to. 155
Even if all the required information is provided, there are certain practical difficulties that could prevent consents from being truly informed. Firstly, due to the technical complexity of automated decision-making, applicants may not understand what they are consenting to and what the risks of automated decision-making are. 156 Secondly, the lack of time or energy to attentively examine the information provided can exacerbate this problem. 157 Hence, applicants might not factually be informed, even if they are provided with the required information. Consequently, it may be difficult for employers to demonstrate that the consent has been informed. 158
The requirements that consent must be specific and unambiguous are closely linked to the above-discussed preconditions. To comply with the specificity requirement, the employer should clearly specify and inform the applicants of the data processing operations and purposes for which consent is required. 159 In order to be unambiguous, the process should be designed so that the active indication or declaration asked of the applicant leaves no doubt as to whether consent is granted or refused. 160 The fulfilment of these two requirements could depend more on the way employers obtain consent. However, the complexity of algorithmic recruitment processes also complicates the fulfilment of these criteria. 161
To conclude, it seems that the possibilities for applicants to freely give or refuse consent after being sufficiently informed are limited. 162 The consent exception could possibly apply in certain extraordinary cases, such as in recruiting exceptionally talented and technically knowledgeable professionals, who have leverage as regards the employer due to the job market situation and understand the automated process as well as the consequences of consenting. However, even in such situations the applicants have a right to withdraw consent under GDPR Article 7(3), 163 which makes consent a faltering basis for automated decision-making. If employers exceptionally rely on explicit consent as a justification for automated decision-making, they must have a backup plan for situations where applicants decide to refuse consent or withdraw consent at a later stage in the process. Such backup plans could curtail the efficiency benefits of automated decision-making. Consequently, employers should not put much weight on the consent exception, as in most algorithmic recruitment cases it is likely to be invalid, and at any rate, it is uncertain.
Conclusions
The aim of this article was to explore whether employers in the EU are allowed to use algorithmic recruitment systems with automated decision-making capabilities. The analysis focused on the legal boundaries GDPR Article 22(1) and (2) set on automated decision-making in algorithmic recruitment. As the wording of the GDPR Article 22 is ambiguous and case law on its interpretation is lacking, the legal limits of automated decision-making remain unsettled. Thus, the conclusions presented herein tentatively propose how the legal boundaries set by GDPR Article 22(1) and (2) could be interpreted and assessed especially in algorithmic recruitment. The analysis suggests that GDPR Article 22 does not completely disallow the use of automated decision-making in recruitment, even if a broader reading of Article 22(1), which aims to promote the protection of applicants’ fundamental rights, is adopted. However, the possibilities to use automated decision-making in algorithmic recruitment seem to be fairly limited.
The legal boundaries set by GDPR Article 22(1) differ greatly depending on the way it is construed. If deemed a right, automated recruitment decisions will be generally allowed. Only after an applicant invokes their right will the automated decisions concerning that applicant be restricted. However, when interpreted as a prohibition, automated recruitment decisions will principally be prohibited and allowed only under limited exceptions. While both interpretations appear conceivable in theory, it seems justified to interpret GDPR Article 22(1) as establishing a prohibition of automated decision-making, especially in recruitment. Given the imbalance of power inherent in recruitment and the possibly long-lasting impact of recruitment decisions, the applicants require stronger protection. The prohibition interpretation could provide this, since it applies uniformly by default and does not require active measures on the part of applicants. Hence, it could better protect applicants’ fundamental rights and be teleologically justified. Considering the potential for tough sanctions under the GDPR, 164 this interpretation could also be the safest for employers. Nevertheless, the prohibition interpretation could considerably limit the potential use cases of algorithmic recruitment systems with automated decision-making capabilities.
However, not all algorithmic recruitment systems utilise automated decision-making. Automated decision-making can be considered to exist when in some part of the algorithmic recruitment process (i) a decision is made, (ii) which is based solely on automated processing, and (iii) which has legal or similarly significant effects on the applicant. While differing interpretations of these criteria have been proposed, in the recruitment context a teleological and broader interpretation again appears justified, as it provides better protection and meets the objectives of the GDPR and Article 22 more precisely. When such an interpretation is adopted, intermediate decisions may also count as automated decision-making if they significantly affect applicants’ opportunities to proceed in the recruitment process, and human recruiters are only nominally involved without influencing the outcome of the decision. Since only some phases of the recruitment process, such as screening or interviews, may constitute automated decision-making, the different phases of the recruitment process must be assessed separately in the light of the criteria. For instance, algorithmic screening, which directly determines whether the applicants are invited to interview, could be considered as prohibited automated decision-making under GDPR Article 22(1). An algorithm's hiring recommendation made based on video interviews conducted by human recruiters, however, might not count as automated decision-making if the human recruiters make the final selection decisions independently of the algorithm, and at times also diverge from its recommendations.
Even if algorithmic recruitment systems utilise automated decision-making, they are not inevitably unusable. GDPR Article 22(2) provides for two exceptions, contractual necessity and explicit consent, under which automated decision-making in recruitment may be permitted. The applicability of the exceptions should be considered separately in each phase of the recruitment process, where automated decision-making may be used, as these exceptions may apply in some part of the recruitment process, but not necessarily in others.
Under the contractual necessity exception of GDPR Article 22(2)(a) automated decision-making may be permitted if it is necessary for entering into an employment contract. The necessity criterion limits the exception's applicability and requires employers’ careful prior analysis. Employers may utilise automated decision-making only if they have concluded that no other less privacy-intrusive alternatives for conducting the recruitment process exist. This could be the case, for example, in the application screening phase, if there is an abundance of applications, which could not be sorted through by hand in a reasonable time and manner. However, some details of the recruitment process, such as the number of applications, may only be forecasted at the time of analysis. If the actual circumstances deviate from the preliminary assumptions in any relevant regard, the employer may be obliged to refrain from automated decision-making even if initially deemed necessary.
The explicit consent exception of GDPR Article 22(2)(c) seems to apply even more rarely. In order to be valid, consent should be freely given, informed, specific and unambiguous. 165 Due to the power imbalance, applicants are seldom in a position where they can freely give or refuse their consent, which may render their consent invalid. Moreover, the complexity of automated decision-making processes calls into question whether consent can be informed. In theory, the explicit consent exception could possibly apply, for instance, if the employer is recruiting sought-after professionals, who have considerable leverage as regards the employer and are technically sufficiently knowledgeable to understand the automated process and the consequences of consenting. However, in such cases there would typically not be many potential applicants and automated decision-making might not be an economically viable option. Furthermore, consent is revocable, which makes this a wavering exception.
The qualified 166 prohibition 167 of automated decision-making 168 set by GDPR Article 22(1) and (2) considerably restricts the possibilities for employers to adopt algorithmic recruitment systems with automated decision-making capabilities. Regardless of the restrictions, such systems could be worth adopting, for instance, in mass-scale recruitment processes where the number of applications makes it impossible to handle them without machines. Even though the exact boundaries of automated decision-making remain unclear and employers may be forced to analyse the systems based on incomplete information, the restrictions set by GDPR Article 22(1) and (2) should be assessed already before procuring or developing algorithmic recruitment systems. However, these limitations are only a part of the challenge.
In addition to the legal boundaries discussed in this article, there are several other significant regulatory hurdles which algorithmic recruitment systems must overcome. Within the sphere of data protection, for example, the special safeguards required by GDPR Article 22(3) create a set of obligations for employers, which they must note when designing and developing algorithmic recruitment processes. Further, the use of special category data when developing and deploying algorithmic recruitment systems is limited by the GDPR. Similarly, general data protection and privacy-by-design rules may impose considerable constraints on the use of algorithmic recruitment systems, whether they rely on automated decision-making or not. Besides data protection, there are several other sets of rules that are highly relevant in algorithmic recruitment. Where data protection rules regulate the processing of personal data and decision-making mainly as a procedural matter, non-discrimination rules, for instance, also aim to affect the substance of the decisions. The non-discrimination rules set their own boundaries for algorithmic recruitment systems, which cannot be overlooked. Decisions made by algorithmic recruitment systems must not discriminate; otherwise, employers may find themselves amidst complex discrimination claims, where applicants’ procedural rights also come into play.
Furthermore, the existing legal boundaries for algorithmic recruitment systems may be supplemented by the proposed AI Act in the future. 169 When compared to the GDPR, the boundaries outlined in the AI Act differ in several aspects. For instance, the AI Act's requirements would fall mainly on the providers of high-risk AI systems and only to a limited extent on employers as users of the AI systems. Thus, it would primarily protect applicants against harmful developments by the providers of algorithmic recruitment systems. In contrast, the GDPR covers relationships between employers and applicants more broadly. Further, the AI Act would not incorporate as detailed practical requirements as the GDPR. For instance, informing applicants of the processing and possible use of AI and automated decision-making would continue to be based on the requirements of the GDPR. 170 Moreover, the GDPR sets a qualified prohibition on automated decision-making, whereas the AI Act would only require certain measures to be taken when developing and operating such systems. Consequently, even if the AI Act were adopted, GDPR Article 22 could continue to be the main constraint on the use of automated decision-making in recruitment.
To conclude, the legal boundaries set by GDPR Article 22(1) and (2) constitute the first line of obstacles in evaluating the lawfulness and feasibility of algorithmic recruitment systems with automated decision-making capabilities. If a system is clearly unlawful under this analysis, there is no need to further evaluate it based on other legislation.
Footnotes
Acknowledgements
The author would like to thank Assistant Professor Annika Rosin (University of Turku) and Associate Professor Mika Viljanen (University of Turku) for their helpful and constructive comments on the earlier versions of this article. Any errors or omissions remain the responsibility of the author. The author also thanks the Eino Jutikkala Fund of the Finnish Academy of Science and Letters for funding this research.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Eino Jutikkala Fund of the Finnish Academy of Science and Letters.
