Abstract
In this contribution, we provide insight into the complex interplay between criminal procedure law and data protection law when it comes to regulating police use of facial recognition technology. By analysing the Dutch ‘Police Deployment Framework for Facial Recognition Technology’, we show that data protection law and criminal procedure law do not interact with each other to a sufficient degree in relation to facial recognition technology. We identify several barriers standing in the way of their cooperation, resulting in notable gaps in the system of checks and balances: (1) a different underlying mindset (maximum versus minimum use of data); (2) a different assessment of the required legal basis (proportionality versus strict necessity); and (3) an ineffective web of supervision. We suggest several ideas for filling these gaps and bridging the disconnect: following the approach of existing Dutch law on the processing of ANPR and DNA data, encoding the less ‘muddy’ rules of data protection law into digital technology itself, and further research on the feasibility of effective supervision by the Dutch Data Protection Authority. Our contribution shows that in order to properly regulate facial recognition technology, scholars need to look beyond the edges of their own fields of law.
Introduction
The use of facial recognition technology has become popular with police forces around the world, including in Europe. Facial recognition is a catch-all term for several functions, most notably verification and identification. 1 The latter type of application in particular – the identification of suspects or, more broadly, ‘persons of interest’ – has caught the attention of law enforcement. The technology is said to be more valuable operationally than ordinary Closed-Circuit Television (CCTV) surveillance, as it can identify individuals remotely (sometimes in real time) and link them to other information gathered by the police. 2 At the same time, facial recognition technology used for identification has recently become one of the most hotly debated topics because of its capability to disrupt the balance of power between the governors and the governed. 3 It has even attracted a proposed partial ban in the draft EU Regulation on Artificial Intelligence (draft AI Act). 4 Because of the way the technology works when deployed for identification purposes in public space, it poses grave concerns for the fundamental rights to privacy and to the protection of personal data and – particularly in cases of public protest – the freedoms of expression and assembly.
Facial recognition technology deploys sophisticated analytical techniques, including Artificial Intelligence (AI), to create mathematical representations (i.e., biometric templates) of ‘captured facial images’ and then to compare these with the representations of ‘reference facial images’. Whereas the reference images have been previously assembled in a general police database or a specifically composed watchlist (e.g., potential troublemakers at a public protest), 5 the captured images are extracted from CCTV and other cameras deployed in public spaces, or scraped from the internet. 6 However, in order for any reference face to be identified at a particular time and place, the facial images of everyone within the gaze of the video camera(s) at that time and place need to be captured and analysed. 7 Moreover, facial recognition technology may be deployed in order to identify tens or even hundreds of persons of interest on a watchlist at a particular place. The use of facial recognition technology in public space therefore potentially amounts to a particularly intrusive type of surveillance. 8 It is thus paramount that police use of facial recognition technology, particularly when used for identification in public spaces, is properly regulated in law. This applies to both real-time and retrospective use of the technology, 9 since the implications as described above are similar in both cases (much more so than current discussions, which focus mainly on real-time deployment, would have us think). 10 Yet, a clear and adequate regulation of police use of facial recognition technology for identification purposes is still missing in many European Union countries.
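For readers less familiar with the mechanics, the following minimal sketch illustrates the comparison step just described: each face is reduced to a numeric template (an embedding vector) and matched against the templates of reference faces on a watchlist. The vector dimension, threshold value and names are our own illustrative assumptions, not a description of any actual police system:

```python
# Minimal sketch of biometric template matching (illustrative values only).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two biometric templates, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(captured_template: np.ndarray,
             watchlist: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """Return the watchlist identity whose template best matches the
    captured face, or None if no similarity exceeds the threshold."""
    best_id, best_score = None, threshold
    for person_id, reference_template in watchlist.items():
        score = cosine_similarity(captured_template, reference_template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy usage: random 128-dimensional vectors stand in for real templates.
rng = np.random.default_rng(0)
watchlist = {"person_A": rng.normal(size=128), "person_B": rng.normal(size=128)}
captured = watchlist["person_A"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(identify(captured, watchlist))  # -> "person_A"
```

Note that in a public-space deployment this comparison has to run for every face that the camera captures, which is precisely why everyone within the camera's gaze is analysed.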
This regulatory gap persists despite the fact that several sources of law already regulate the matter, including the general human rights framework, specific rules on processing personal data, and rules of criminal procedure. Yet, the existing regulation of facial recognition is partial and piecemeal, forming a patchwork of norms rather than a consistent and comprehensive legal framework. 11 Consider the division of regulation between criminal procedure law and data protection law, the two cornerstones of regulating facial recognition technology used by law enforcement. Criminal procedure law regulates the collection of (personal) data, 12 while data protection law for law enforcement regulates the analysis and other subsequent use of the collected personal data. However, the collection and analysis of personal data form a thoroughly intertwined process, posing an intrusion into privacy and other rights at both ends and thus requiring regulation which considers the two parts of the process together. After all, broad powers to analyse the collected data make the regulation of their collection all the more important; and strict regulation of data analysis could justify a broader power of data collection. 13 Therefore, where regulation is split between these two (sub)fields of law, criminal procedure and data protection law need to be able to work together seamlessly. However, when these two (sub)fields of law insufficiently cooperate with each other, a disconnected regulation of the technology might occur, resulting in notable gaps in the system of checks and balances.
Police use of facial recognition technology therefore poses specific legal and regulatory challenges, requiring scholars to explore the interplay between the two mentioned (sub)fields of law. In order to address this need, we explore the question of how criminal procedure and data protection law interact – or fail to interact – with each other in regulating facial recognition technology in the Netherlands, and which regulatory consequences this (lack of) interaction brings about. We focus on the use of this technology for identification purposes in public space. Since criminal procedure law is a very national affair, a focus on a specific jurisdiction is necessary. A focus on the Netherlands, with experience in the use of facial recognition technology and a brand-new framework for its deployment in public space, allows us to examine the issues stemming from the interaction between the legal (sub)fields in a more concrete manner. While the focus of this contribution lies on facial recognition technology, the implications of our discussion and proposed solutions may apply more broadly, to the regulation of other digital investigatory techniques as well as to other (European) jurisdictions.
The structure of this paper is as follows. We first examine police use of facial recognition for identification in public space and its legal regulation in the Netherlands, focusing on the Police Deployment Framework for Facial Recognition. We then examine the current state of interaction between criminal procedure law and data protection law in this Framework, identifying barriers that stand in the way of their cooperation and the resulting gaps in the system of checks and balances: (1) a different underlying mindset (maximum versus minimum use of data); (2) a different assessment of the required legal basis (proportionality versus strict necessity); and (3) a lack of effective supervision. We then offer suggestions for filling these gaps, including by taking inspiration from the Dutch regulation of Automated Number Plate Recognition and DNA data, and encoding the less ‘muddy’ norms of data protection law in the technology itself. We conclude that the regulation of facial recognition technology operates amidst rapid technological development as well as numerous intersecting areas of law, requiring scholars to look beyond the edges of their own fields of law.
The Dutch ‘Police deployment framework for facial recognition technology’
It remains somewhat unclear how the Dutch police use facial recognition technology, at least to legal scholars relying on publicly available sources. We do know that the so-called CATCH system (short for ‘Centrale Automatische TeChnologie voor Herkenning’, or Central Automated Recognition Technology) is being used to retrospectively compare captured faces with large police databases of frontal facial images in order to identify suspects. 14 The police are also experimenting with the use of the technology for broader purposes, such as security management during football matches. 15 The Dutch police and the Minister of Justice and Security have tried to reassure Parliament by saying that the technology – apart from CATCH – has not been ‘operationally’ deployed, meaning that it has only been used experimentally with faces of police employees. 16 But these claims are impossible to verify.
What is clear, however, is that a legal basis for facial recognition technology is missing in Dutch law, something which has also been acknowledged in the Framework itself. Currently, the only applicable rules can be found – if at all – in a legal basis for taking and storing frontal facial images in the Code of Criminal Procedure (Wetboek van Strafvordering), which applies to the collection of data, and the legal framework for personal data processing as found in the Police Data Act (Wet politiegegevens (Wpg); partially implementing the EU Law Enforcement Directive (LED)), which applies to the subsequent processing of data. However, the Code of Criminal Procedure does not include any specific provision for the deployment of facial recognition technology; none of the existing powers seem to apply either. Yet, considering the high potential of facial recognition technology for intrusion into fundamental rights, especially when used for identification, the European Court of Human Rights (ECtHR) has made clear that a specific legal basis with authorisation requirements and concrete procedures for examining, using and storing data is required. 17
Despite this lack of legal regulation, or perhaps because of it, the Dutch police plan to deploy the technology much more widely. We infer this from the fact that in March 2023, the police published a ‘Framework for deployment of facial recognition technology’ (‘Police Deployment Framework for Facial Recognition Technology’, hereafter: Framework). 18 This Framework has been developed by the police for the police and is meant for experimentation with facial recognition technology. As such, the Framework cannot be seen as providing a legal basis for the power. Instead, it is meant to provide guidance for assessing whether the deployment of facial recognition technology is permissible in a given situation. Any police unit that wants to deploy facial recognition technology must first conduct their own assessment based on the principles in the framework, and develop a plan on why and under what circumstances they want to use the technology. This plan is then reviewed by a special police review committee. Ultimately, the chief of police must give their approval. Deployment may then still be cancelled after consultation with the relevant public prosecutor and mayor. 19
At the heart of the Framework’s assessment are the following ten questions: 1) how serious are the (possible) facts for which facial recognition technology is being considered; 2) has a criminal offence already been committed; 3) do we know who is being sought; 4) what is the location of the technology; 5) what is the duration of its deployment; 6) what are the alternatives; 7) what is the quality of the images; 8) who owns the camera taking the images; 9) who else is watching; and 10) which facial recognition technology system is being used? 20 Questions 1 to 6 deal with the proportionality of the deployment and questions 7 to 10 with issues relating to reliability and integrity of the data and the technology.
The Framework furthermore states that, as a principle, facial recognition technology should be used in a way that is ‘as targeted as possible’. With this proportionality principle in mind, the answers to the ten questions are placed on a scale that is expressed in traffic-light terms, with shades of green, yellow, orange and red. In order to conclude that the use of facial recognition is indeed permitted, the predominant colour stemming from all of the answered questions needs to be as green as possible.
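The Framework itself does not describe how the colours of the individual answers are combined into a final assessment. Purely by way of illustration, the following sketch shows one conceivable way of aggregating the ten answers; the numeric colour values and the decision rule are our own assumptions, not part of the Framework:

```python
# Hypothetical aggregation of the Framework's traffic-light assessment.
# The colour scale and decision rule below are our own illustration.
from enum import IntEnum
from statistics import mean

class Colour(IntEnum):
    GREEN = 0
    YELLOW = 1
    ORANGE = 2
    RED = 3

def assess(answers: dict[str, Colour]) -> str:
    """Permit deployment only when the overall picture is 'as green as
    possible'; treat any red answer as disqualifying (our assumption)."""
    if Colour.RED in answers.values():
        return "not permitted"
    average = mean(int(colour) for colour in answers.values())
    return "permitted" if average < 1.0 else "requires reconsideration"

# Toy usage, loosely based on the murder-investigation example below:
answers = {
    "Q1 seriousness of the facts": Colour.GREEN,
    "Q2 offence already committed": Colour.GREEN,
    "Q3 known target": Colour.ORANGE,
    "Q6 alternatives available": Colour.GREEN,
}
print(assess(answers))  # -> "permitted"
```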
The Framework mentions only a few examples and how these would score on the colour scale. One example that it does mention is ‘the building of an information position with the use of facial recognition technology’. With regard to Question 2 (‘Has a criminal offence already been committed?’), this example is classified as orange. 21 With regard to Question 3 (‘Do we know who is being sought?’), the Framework explains that ‘a specific person suspected of wanting to carry out a previous threat against a prominent politician’ is an example of ‘targeted deployment’. If the threat comes, for instance, from ‘members of an outlaw motorbike gang’, the deployment is less targeted. 22 And the less targeted the deployment, the less green the final assessment of such use will be.
One could think of several additional examples. For instance, inspired by a recent Dutch case, the technology might be used to investigate the murder of a criminal defence lawyer, in order to determine whether someone had previously scouted the area where the crime was committed. The police could examine the images from municipal cameras in order to see which persons (other than neighbours) appear more than (say) three times on images of the street where the crime took place. 23 The best ten images of the suspect(s) would then be selected and automatically compared with images on the watchlist. Such a case would probably score deeply in the green concerning the seriousness of the (already committed) offence and the lack of available alternatives, but in the orange or red concerning questions like ‘Do we know who is being sought?’ A very different example concerns deployment in a football stadium, where facial recognition technology would be used to identify (potential) troublemakers (e.g., for making racist or homophobic statements). 24 Compared to the murder case, this case will not as easily score green in terms of targeting or seriousness of – already committed – crimes, but might score high on the lack of alternatives.
The interplay between criminal procedure and data protection law in the Dutch regulation of facial recognition technology
Based on the above discussion, we can now examine the current state of interaction between criminal procedure law and data protection law in the Framework for facial recognition. We identify several barriers standing in the way of the cooperation between the two (sub)fields of law, resulting in gaps in the system of checks and balances: (1) a different underlying mindset (maximum versus minimum use of data); (2) a different assessment of the required legal basis (proportionality versus strict necessity); and (3) a lack of effective supervision.
A different underlying mindset: maximum versus minimum use of data
When exploring the interplay between the two areas of law, the first thing to note is the almost complete absence in the Framework of data protection rules. While the Framework repeatedly stresses that biometric data (referring to the biometric templates used for facial recognition) have an ‘especially protected status’, it – mistakenly – refers only to the General Data Protection Regulation, without any mention of the LED or the Police Data Act. The ten questions that need to be considered in the assessment also do not refer to, for example, the source and time of the captured facial images (apart from Question 8 asking ‘Who is the owner of the camera taking the images?’). There is also no mention of a Data Protection Impact Assessment (DPIA), even though one must be conducted in cases where the data processing is likely to result in a high risk to individuals’ fundamental rights and freedoms. 25 Because facial recognition technology leads to systematic automated processing of biometric data, the European Data Protection Board considers that DPIAs should be a mandatory requirement for all types of police use of the technology. 26
In the absence of any concrete rules limiting the further use of facial images, we conclude that the Framework has been built on the idea that all facial images – collected at a certain point in time from various kinds of sources – can be processed for identification purposes via facial recognition technology. The Framework thus seems to be based on the presumption of ‘maximum use of data’, a presumption which ‘naturally’ follows from the perspective of criminal procedure law. After all, for the purpose of combatting crime any information that might bring the case a step further should – as a rule – be used, assuming that the collection took place in a lawful manner. 27 The scarce literature on data processing in criminal procedure describes data processing ‘not only [as] a power, but also a necessity and a duty’. 28
This approach seems to be in opposition to the approach of data protection law, which is founded on the principles of data minimisation and purpose limitation in order to ensure fair and lawful data processing. 29 Nevertheless, it should be noted that the goal of data protection law is not to prevent personal data processing. Instead, the goal is to ensure appropriate legal safeguards so as to prevent harm from unfair and disproportionate data processing. After all, EU data protection law has a dual function: to protect the fundamental rights of individuals, but also to facilitate the free flow of data within the EU (Article 1(2) LED).
From a criminal procedure law perspective it is nevertheless difficult to recognise the meaning as well as the importance of data protection rules in the law enforcement context. This might be particularly true in relation to the so-called ‘record-keeping’ 30 provisions of data protection law, where personal data are protected through a ‘technocratic’ system based on control of access, categorisation and compliance measures, oftentimes demanding rather precise documentation and reporting conditions. 31 A good example of such a provision is Article 6 LED (implemented in Article 6b Wpg), which introduces a qualified obligation (‘as far as possible’) to distinguish between different categories of data subjects relevant in the law enforcement context, such as accused persons, suspects, victims, witnesses or persons otherwise involved. The rationale of such categorisation is that data of different data subjects are processed in different ways (for different purposes) and retained for different periods of time, thus ensuring that criteria of necessity and proportionality are fulfilled. 32 Yet, how could this provision ever be translated into concrete and workable rules when, for the purpose of automated facial identification, footage of a municipal camera is analysed? When such processing is likely to affect a wide group of unknown persons, how can the police distinguish the various categories of data subjects up-front? In fact, might not a reasonable interpretation of the obligation be that the use of facial recognition actually contributes to meeting the requirement of differentiating between data subjects? For example, if a white middle-aged man is being sought, CCTV footage in which only women, children, or people of colour are recognised (data subjects that are not of interest) might be discarded or blurred (we further discuss this in relation to automated blurring of faces as a type of Data Protection by Design measure). In this way, the use of facial recognition would contribute to meeting the data protection principles of data minimisation and purpose limitation.
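To make the latter point concrete, the following minimal sketch shows how such category-aware minimisation could look in code. The data model and category names are our own illustration, loosely mirroring the examples given in Article 6 LED; no actual police system is described:

```python
# Hypothetical sketch of Article 6 LED-style categorisation: detected faces
# are tagged with a data-subject category, and frames in which no face
# belongs to a category of interest are flagged for discarding or blurring.
from dataclasses import dataclass
from enum import Enum, auto

class SubjectCategory(Enum):
    SUSPECT = auto()
    VICTIM = auto()
    WITNESS = auto()
    OTHERWISE_INVOLVED = auto()
    NOT_OF_INTEREST = auto()

@dataclass
class DetectedFace:
    frame_id: int
    category: SubjectCategory

def frames_to_retain(faces: list[DetectedFace]) -> set[int]:
    """Keep only frames containing at least one face of a relevant category."""
    relevant = {SubjectCategory.SUSPECT, SubjectCategory.VICTIM,
                SubjectCategory.WITNESS, SubjectCategory.OTHERWISE_INVOLVED}
    return {face.frame_id for face in faces if face.category in relevant}

# Toy usage: frame 1 shows only bystanders and would be discarded or blurred.
faces = [DetectedFace(1, SubjectCategory.NOT_OF_INTEREST),
         DetectedFace(2, SubjectCategory.SUSPECT)]
print(frames_to_retain(faces))  # -> {2}
```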
With this discussion we want to show that successful cooperation between criminal procedure and data protection law requires criminal law scholars and practitioners not only to be aware of the value of data protection law, but also to know how to translate abstract and ‘foreign’ rules such as Article 6 LED into more concrete rules within a criminal justice context. This is not a simple balancing act, given the different mindsets and interests of the two areas of law, split between assumptions of maximum versus minimum use of data.
A different assessment of the required legal basis: proportionality versus strict necessity
The second identified gap resulting from the lack of interaction between the two areas of law relates to the legal basis required for the use of facial recognition technology for identification. The Framework briefly addresses this issue from a criminal procedural perspective. According to the Framework, the deployment of facial recognition technology must be seen as a ‘more than minor intrusion’ into privacy because sensitive (biometric) data are processed. For such a privacy intrusion, the legality principle in criminal law demands a specific legal basis. Interestingly, the Dutch police acknowledge that such a basis is needed and that it does not exist. They do not, however, draw the conclusion that facial recognition technology therefore cannot be deployed. 33 Quite the contrary, the conclusion seems to be that as long as its use is proportionate (that is, on the greenish side of the scale), it is allowed.
It is noteworthy that the issue of the legal basis is not considered from the perspective of data protection law in the Framework. Yet, precisely in the case of biometric data, a sensitive type of personal data, EU data protection law is of particular importance, as it requires a legal basis subject to particularly strict requirements.
When employed for the purpose of identification, facial recognition technology functions on the basis of processing biometric data. Because of this, it falls under the scope of the legal framework for the processing of special categories of personal data (Article 10 LED; 34 Article 5 Wpg). 35 Such processing first of all requires a legal basis, which needs to be phrased in sufficiently precise and foreseeable terms so as to offer an adequate indication of the circumstances and conditions under which the police may employ such measures. The law should specify at least ‘the objectives, the personal data to be processed, the purposes of the processing, procedures for preserving the integrity and confidentiality of personal data and procedures for its destruction.’ 36 Precision might also require the law to include preconditions relating to specific types of evidence or a requirement of judicial or internal authorisation. 37 This means that according to data protection law, the legal basis for the use of facial recognition technology needs to be specifically tailored at least to the processing of biometric data. 38 In fact, before national lawmakers create a new legal basis for any form of processing of biometric data (which poses a high risk for fundamental rights), they are required to consult the national data protection authority. 39
Furthermore, according to data protection law the processing of biometric data is only allowed when strictly necessary. This means that it needs to be used only as a last resort, that is, when it is indispensable, and when no other, less intrusive means are available. 40 This should not be an abstract and general assessment, but one which establishes objective criteria for the definition of the circumstances and conditions under which the processing can take place. 41 The requirement of strict necessity should therefore not be equated with the general principle of proportionality as found in the Framework. Rather than being one criterion among others, strict necessity places greater emphasis on the requirement that no other, less intrusive means are available (Question 6 of the Framework). According to the ‘Guidelines on the use of facial recognition technology in the area of law enforcement’ of the European Data Protection Board (EDPB), the margin of appreciation permitted to law enforcement relating to this strict necessity test is restricted ‘to an absolute minimum.’ 42
Data protection law therefore sets (much) higher standards for the legal basis concerning police use of facial recognition technology for identification than Dutch criminal procedure law. This strictness is also reflected in the draft AI Act. The Act is primarily aimed at standardising the market for AI systems by imposing requirements on the quality of those systems. As such, it also sets restrictions on the use of AI systems, in turn affecting the possibilities for the police to deploy AI (further discussed in the section on creating an adequate legal basis).
According to the strict requirements of data protection law for the processing of biometric data, police use of facial recognition is therefore not allowed until a specific legal basis is created. While a specific legal basis is also required by the legality principle in criminal procedure law, the police simply set this problem aside and conclude that its use is acceptable as long as it is proportional. This might stem from the underlying assumption of maximum use of already collected data, as we have argued in the previous section. However, re-use of this sensitive type of data for a new investigative purpose creates new intrusions into fundamental rights, requiring a new assessment. Police use of facial recognition technology should therefore be carried out ‘in strict compliance’ with data protection law and its requirement of strict necessity, as the EDPB concluded. 43
An ineffective web of supervision: criminal courts versus the data protection authority
It is clear from the design of the Framework that the supervision of police use of facial recognition technology (FRT) is seen as a task for the police themselves. An application for the deployment of this technology is reviewed by an internal ‘police review committee’ and the final permission is granted by the chief of police. The evaluation of the deployment of the technology and subsequent decisions take place within – what the police call – ‘the FRT-Community’ of the police. 44 As a result, the police review committee or the chief of police are able to decide prior to any deployment that facial recognition technology may not be used in a particular case. On the basis of an ex post evaluation, the police ‘FRT-Community’ may also decide that a particular deployment may not be repeated (at least not in the same manner).
Despite the internal nature of this supervision, the Framework does acknowledge the importance of accountability: ‘Given the sensitivity of the data and the potentially high intrusion associated with the deployment of FRT, the police keep accurate records of where, how and why FRT has been deployed. In doing so, the police are obliged to be able to provide access to (the accuracy of) the system, information about the actual deployment and the data gathered. How a match was made must be traceable, explainable and verifiable. All of this also applies to the use of FRT that does not lead to evidence in criminal cases’. 45
Such a system of supervision begs an important question: does a self-imposed task to be accountable mean that the police can and indeed will be held accountable in practice? There are several reasons why the answer to this question will be ‘barely’.
The first reason relates to the limited supervisory role of Dutch criminal courts in this context. This is due to the fact that criminal courts are only able to judge the deployment of facial recognition technology within the context of a criminal case against a concrete defendant. Yet, relatively few deployments of facial recognition technology result in the prosecution of a suspect. If facial recognition technology is used for preventive purposes or to stop a crime, there will be no prosecution. Even when used for a criminal investigation, the case oftentimes does not result in a criminal trial, for instance, because of lack of evidence, lack of priority, or an extrajudicial settlement. Furthermore, information concerning an operation involving facial recognition technology does not always end up in a case file, particularly when it is not used for an arrest but just as a lead in the earlier stages of an investigation. 46
Especially in cases of organised crime, the police might not want to reveal too much information about investigative tactics. Finally, Dutch criminal procedure law requires courts to consider only such unlawful processing of personal data that has ‘had a decisive influence on the course of the investigation into and/or the (further) prosecution of the suspect’. 47
In practice, a decisive influence will be difficult, if not impossible, to substantiate. As Hoving writes: ‘The provisions in the Wpg, the Wjsg [Judicial and Investigative Data Act] and the [Law Enforcement] Directive are almost all broadly worded and largely have the character of principles. Investigating officers and prosecutors have wide discretion to assess on a case-by-case basis what processing of criminal personal data is proportionate and necessary. Only if it is evident that criminal records have been improperly handled could a procedural violation be established. An example would be leaving a case file on the train.’ 48
In fact, Dutch courts have been systematically rejecting the need to assess whether a violation of data protection law has taken place, considering data protection law an ‘unimportant regulation of criminal procedure’. 49 In other words, judges in criminal trials do not consider the enforcement of data protection legislation to be part of their task. In view of the very limited role of criminal courts, the main supervisory task concerning the processing of personal data in the context of law enforcement therefore falls on the Dutch Data Protection Authority (Autoriteit Persoonsgegevens; DPA), the other supervisory body for the processing of personal data in the criminal law context. However, looking at the Framework, several factors seem to stand in the way of the DPA effectively performing this task.
One major obstacle is the secrecy that is inherent to criminal investigations. In the Framework, the police write: ‘[D]ue to investigative interests and the necessary protection of investigative methods, the police cannot be completely transparent to everyone about the exact details of FRT deployment’. 50 The question that arises here is: may the Data Protection Authority (in some situations) be considered as ‘everyone’? While the LED offers no possibilities to limit the information demanded by the national DPA, the situation in the Dutch implementation of the law is less clear, at least to us. According to Article 33(5) of the Police Data Act, further rules concerning the audits performed by the DPA are determined in administrative law. However, it remains unclear to what extent these rules of administrative law – which in any case seem to require an audit at a fairly abstract level – allow for certain information or details on the use of facial recognition to be withheld from the DPA. 51 It does not seem unlikely that the DPA will either be unaware of the use of the technology or will receive insufficient information concerning its deployment. And because data subjects generally remain unaware that their facial images have been processed for law enforcement purposes, they will also not be able to file a complaint to the DPA.
A further problem lies in the nature of the review the authority has to perform. The DPA will have to test the question of lawful use of facial recognition technology not only on the basis of relatively concrete record-keeping rules, but also on the basis of open standards such as ‘strict necessity’. Such a test requires a balancing of interests, which in turn requires knowledge of the criminal investigative context. 52 By this we mean not only the details of a concrete case, but also knowledge of all the interests at stake. This means that, besides data protection rights, the balancing act needs to include at least the interest of the criminal investigation in the specific case, the availability of (alternative) investigation methods, and insight into all persons whose right to privacy is being intruded upon and to what extent. In order to properly weigh these interests, knowledge of the relevant criminal procedural rules and their use is required. At the very least, this implies that the Data Protection Authority will need to employ people who have both knowledge of and experience with data protection law and criminal procedure. Whether the Dutch DPA is making an effort to hire such people is not clear from its latest ‘vision document’ or annual report. 53 It is generally known, however, that despite its continued growth in recent years, the DPA is encountering a significant staff shortage, making it impossible to look into all potential data protection violations. 54
Taking data protection in the law enforcement context seriously: suggestions to improve the interaction between the two areas of law
What can be concluded from the above analysis with regard to the use of facial recognition technology is that data protection law and criminal procedure law do not interact with each other to a sufficient degree. The interaction seems to be stalled at the very start, with underlying mindsets concerning the use of collected data that – at least at first glance – seem to directly oppose each other. As a result, there is a lack of appreciation and understanding of data protection law by criminal law practitioners, resulting in a normative framework that is both incomplete and inconsistent: a non-existent legal basis, essentially leading to a complete lack of restrictions on the situations in which collected facial images may be used, and a supervision framework that is unclear and insufficient.
So, how could this situation be improved? A prerequisite is, of course, that criminal lawyers come to value data protection law as indispensable for the regulation of criminal investigations in times of digitisation. Assuming this change of perspective takes place, we offer three proposals in this section to address the lack of interaction between criminal procedure and data protection law. First, we propose that the missing legal basis for deploying facial recognition technology should take inspiration from the Dutch legal framework for processing Automated Number Plate Recognition data and DNA data. With this suggestion, we also show that criminal procedure law already includes numerous rules that fit or are aligned with the general perspective and approach of data protection law. Second, we propose that some of the more straightforward record-keeping data protection rules should be encoded in the (design of the) technology itself, including the IT systems of the police. Finally, we offer some considerations on designing a system of effective supervision.
Creating an adequate legal basis: taking inspiration from the regulation of ANPR and DNA data
In the absence of regulation for the use of facial recognition technology, the Dutch police have created a brand-new and experimental Deployment Framework. While such a Framework is a good way to open up the discussion on regulating this technology, the Framework itself cannot be a sufficient basis for an intrusive technique like facial recognition technology. Not only does EU data protection law require that national lawmakers take the lead, but also, a broad and open discussion is needed in order to determine what we as a society consider acceptable in the landscape of digital possibilities. 55 However, the difficulty for legislatures is that technology and its possibilities – and thus the answer to the question what we think is acceptable – are quickly evolving. This means that regulation of facial recognition technology has to navigate not only between the two (sub)fields of law – criminal procedural and data protection law – but also between the requirements of legality and the need for flexibility. In that respect, a legal basis should ideally be a combination of ‘outline level’ provisions in the Code of Criminal Procedure and a more specific elaboration in flexible lower regulations. 56
While national lawmakers do not face an easy task, they do not have to start from scratch, as it sometimes seems. Examples of rules combining criminal procedure law and data protection law already exist. To begin with, the recent regulation of Automated Number Plate Recognition (ANPR) in the Dutch Code of Criminal Procedure (CCP) and lower regulations is interesting and relevant because it involves the automatic processing of personal data by use of cameras in public space. This regulation was introduced after years of experimenting with ANPR and after several courts and the DPA made clear that a sufficient legal basis was missing. 57 Article 126jj CCP was introduced to specify for what purposes and under which conditions the police can use the captured data of passing vehicles. The provision stipulates that ANPR data are retained for four weeks and can only be accessed with an order by the public prosecutor in case of 1) suspicion of a crime for which pre-trial detention is allowed (generally, crimes carrying a maximum prison sentence of four years or more), or 2) to arrest a fugitive suspect or convict. It is clear that this restriction of purposes is considerably more stringent than the processing of facial images for identification purposes under the Framework, which places no restrictions on the situations in which the collected facial images may be used.
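To illustrate how such conditions lend themselves to being enforced in software, consider the following minimal sketch of the Article 126jj CCP access regime. The field names and the encoding of the two permitted purposes are our own assumptions:

```python
# Sketch of the Article 126jj CCP conditions: a four-week retention limit
# plus a prosecutor's order for one of the two permitted purposes.
from datetime import datetime, timedelta

RETENTION = timedelta(weeks=4)
PERMITTED_PURPOSES = {"pretrial_detention_offence", "arrest_fugitive"}

def may_access(record_captured_at: datetime,
               prosecutor_order: bool,
               purpose: str,
               now: datetime | None = None) -> bool:
    """Allow access to an ANPR record only within the retention period and
    only with a prosecutor's order tied to a permitted purpose."""
    now = now or datetime.now()
    if now - record_captured_at > RETENTION:
        return False  # retention period expired; the data must be destroyed
    return prosecutor_order and purpose in PERMITTED_PURPOSES

# Toy usage: a five-week-old record is inaccessible even with an order.
old_record = datetime.now() - timedelta(weeks=5)
print(may_access(old_record, True, "arrest_fugitive"))  # -> False
```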
The difference between the two regimes is remarkable, given that the automated processing of facial images potentially constitutes a significantly greater intrusion into the right to the protection of personal data than the use of ANPR data. After all, the processing of facial images for the purpose of identification falls under the increased protection of Article 10 LED, which requires a ‘strict necessity’ test and thus a sharp delineation of the purposes of further processing. The balancing of interests under this criterion means that the subsidiarity test in particular carries great weight. 58 In other words: is it possible to achieve a goal in another, less intrusive way, that is, one which does not require the processing of the faces of all individuals in a particular public place? We believe that, following the recent ANPR legislation in the Netherlands, rules on the use of facial recognition technology should strictly – in fact, more strictly than with ANPR – define the situations in which its use is allowed.
Furthermore, if the European Parliament’s version of the AI Act were to come into force, a legal basis for facial recognition technology would have to be considerably more limiting than the Framework. This draft text prohibits both real-time use of AI systems operating in publicly accessible places and ‘AI systems used for the analysis of recorded footage of publicly accessible spaces through “post” remote biometric identification systems, unless there is pre-judicial authorisation for use in the context of law enforcement, when strictly necessary for the targeted search connected to a specific serious criminal offense that already took place, and only subject to a pre-judicial authorisation’. 59
Lessons can also be learned from the lower regulations of Dutch ANPR legislation. These are rules issued by the Minister of Justice and Security concerning the further conditions for processing data. For example, in the ANPR Decree there are provisions on drawing up a camera plan, 60 provisions on access to and security of the data, 61 and a provision about an annual privacy audit. 62 These are essentially data protection rules, which have been integrated into the criminal procedural legal framework. We argue that similar rules can and should be drawn up for the use of facial recognition technology.
The regulation of facial recognition technology could also learn from the Dutch legal framework concerning DNA testing. This is particularly important with regard to the reliability of data generated through sophisticated algorithmic techniques. It is well known by now that facial recognition technology struggles with bias 63 and that, despite its appearance of objectivity, its output remains of a probabilistic nature. Whether the technology will indicate a match between two faces depends on the threshold value (determining the value above which biometric profiles are considered to match), which can be set not only by the manufacturers of the technology but by end users as well (e.g., police officers). 64 Fixing this threshold too high or too low risks high false negative or false positive rates, respectively (see the sketch after this paragraph). Concerning the issue of bias, the upcoming AI Act will include provisions on the quality and representativeness of the data used to build the models, which should help improve the reliability of the model. Nevertheless, in the procedural context there is still a need to test the means by which identification took place, if the right to a fair trial in Article 6 ECHR is to be upheld. This testing could follow the framework found in the Dutch ‘Decree on DNA investigation in criminal cases’. 65 This framework makes clear that DNA testing is considered expert evidence, thus giving the defence the right to counter-expertise, and that a designated institute in the Netherlands is responsible for the quality of that testing. Obviously, facial recognition technology involves a different type of analysis (data-based) than DNA analysis, which is chemical-based. Nevertheless, there is still a need for all parties in the criminal trial to have the opportunity to effectively test the reliability of evidence resulting from AI analysis (including information on disregarded ‘matches’ and error rates, as well as uncertainties of the system itself), a condition that is particularly difficult to achieve when it comes to police expertise, which usually remains undisclosed. Considering that algorithmic research already partly resides at the Netherlands Forensic Institute, 66 this would be a relatively straightforward solution.
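The following toy calculation, with invented similarity scores, illustrates the threshold trade-off just described: the same comparison scores produce more false positives at a low threshold and more false negatives at a high one.

```python
# Toy illustration of the match-threshold trade-off (invented scores).
genuine_scores  = [0.82, 0.75, 0.68, 0.55]  # same person compared with itself
impostor_scores = [0.58, 0.41, 0.33, 0.20]  # different persons compared

for threshold in (0.3, 0.6, 0.8):
    false_negatives = sum(s <= threshold for s in genuine_scores)
    false_positives = sum(s > threshold for s in impostor_scores)
    print(f"threshold={threshold}: FN={false_negatives}, FP={false_positives}")
# threshold=0.3: FN=0, FP=3  (too low: false matches)
# threshold=0.6: FN=1, FP=0
# threshold=0.8: FN=3, FP=0  (too high: missed matches)
```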
DNA regulation is of interest to the regulation of facial recognition technology for additional reasons. Like facial images, DNA data are sensitive personal data within the meaning of Article 10 LED, thus falling under the strict regime we described before. According to the Dutch CCP, obtaining DNA material – on the basis of which DNA profiles are made and processed – can take place only by order of the public prosecutor 67 for the purpose of an ongoing investigation, if someone is suspected of a crime of a certain gravity, and furthermore, for the purpose of future investigations, after the conviction for such a crime. The retention periods of DNA profiles are then regulated in Articles 13 to 18c of the DNA investigation Decree and vary between ‘immediate destruction’, 12 years and 80 years. These periods depend on the type and severity of the offence, the final verdict in a given case (acquittal, conviction, level of sentence), and whether the suspect is a minor. DNA regulation is therefore very precise and nuanced. And while we are not suggesting that facial recognition regulation needs to be just as precise, we do think that in the context of the ‘strict necessity’ test and the qualified requirement to establish different categories of data subjects, the rules relating to DNA data processing can serve as an example.
Encoding the less ‘muddy’ norms of data protection law: taking inspiration from the Data Protection by Design measures in ANPR regulation
Our second proposal relates to one of the common criticisms of data protection law: that it is principle-based regulation and, therefore, abstract, unclear and difficult to put into practice. 68 However, alongside open-ended (‘muddy’) norms, data protection law also includes a range of relatively concrete procedural and technical rules, which require a limited level of interpretation and balancing. This includes not only requirements relating to automatic logging 69 and the keeping of records, but also the implementation of additional technical measures integrating safeguards into the processing itself (e.g., labelling of the collected personal data) and other security measures for the protection of data from attacks, leaks and destruction. These requirements are of particular importance for the processing of special categories of personal data, such as biometric data. While some of these requirements, such as the requirement of secure data processing in Article 29 LED, still include open-ended notions such as ‘state of the art’ and ‘context’, the recent development of guidelines, handbooks and technical standards on the security of personal data processing by various national and international bodies nevertheless offers a range of concrete measures for implementation. 70 These procedural and technical rules are thus well-suited to being encoded (that is, embedded) into the technology itself. 71 Although some scholars have argued that only the less ‘interesting’ rules of data protection law lend themselves to encoding, 72 embedding the less ‘muddy’ rules of data protection law into the technology can still enhance the ex ante application of these legal norms, and therefore the principle of accountability.
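The automatic-logging duty is a good example of such an encodable rule: it requires logs of processing operations such as consultation and disclosure, recording who did what, when, and on what ground. A minimal sketch, with a record layout of our own design rather than any prescribed format, could look as follows:

```python
# Sketch of an append-only, machine-readable audit log for FRT queries.
# The record layout is our own illustration, not a prescribed format.
import json
from datetime import datetime, timezone

def log_operation(logfile: str, officer_id: str,
                  operation: str, justification: str) -> None:
    """Append one audit entry recording who did what, when, and why."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "officer_id": officer_id,
        "operation": operation,          # e.g. "consultation", "disclosure"
        "justification": justification,  # ground for the query
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Toy usage with illustrative identifiers:
log_operation("frt_audit.log", "officer-123",
              "consultation", "comparison against watchlist X")
```

Because each entry is written automatically at the moment of processing, such logs support exactly the kind of ex post accountability that the Framework promises but does not operationalise.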
With the introduction of the concept of data protection by design and by default into data protection law (Article 20 LED; Article 4b Wpg), the police are already obligated to ensure that the processing of personal data meets the Directive’s requirements through the implementation of adequate solutions in the (design of the) technology itself, along with organisational measures. 73 While this obligation is not unlimited, the LED makes clear that the implementation of data protection by design and by default ‘should not depend solely on economic considerations’. 74 The measures that need to be implemented will be largely determined by the data protection impact assessment (DPIA) 75 that needs to take place before facial recognition technology is used. This underscores the importance of conducting a DPIA. In relation to facial recognition technology, particular attention should be paid to measures that need to be implemented in the various IT systems of the police, where the captured facial images are stored. After all, there is no automated identification based on facial recognition technology without databases of ‘known faces’. Technical and procedural measures in relation to these databases are therefore needed to counteract the underlying idea that all facial images collected at a certain point in time and from various kinds of sources can be processed for facial recognition purposes.
Dutch ANPR regulation can again serve as inspiration, as it includes rules on the blurring of persons’ faces that are captured in ANPR images (Article 126jj CCP). While this was initially done manually, it has recently become possible to blur the faces in an automated manner, 76 a prime example of data protection by design. This technical measure helps to implement the principles of data minimisation and purpose limitation (and, ultimately, fair processing), according to which data not needed for the concrete task at hand should not be collected in the first place or should be deleted or rendered unusable immediately after collection. This blurring of faces might be a particularly useful measure in the case of facial recognition technology deployed in public space, where a wide group of unknown and innocent persons are likely to be caught on the camera footage. As soon as the automated processing identifies the relevant subject (or determines that they do not appear on the captured images), the faces of the remaining persons on the images should be blurred. The technical measure of blurring should be further assisted by organisational measures, the other key aspect of data protection by design. Accordingly, ANPR regulation includes relevant procedural rules on task separation: before the tactical staff of the police can access the images captured by ANPR cameras, the technical staff need to take a first look in order to ensure compliance with the legal requirements. 77 Such task separation further helps to achieve legal safeguards, such as preventing the tactical staff of the police from gaining knowledge of data that should not be used (e.g., because they should already have been deleted).
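By way of illustration, automated blurring of this kind can be prototyped in a few lines with the open-source OpenCV library. The sketch below blurs every detected face in a frame; in a real deployment an additional matching step would exempt the identified subject, and the file names here are purely illustrative:

```python
# Minimal sketch of automated face blurring using OpenCV's bundled
# Haar-cascade face detector. Blurs all detected faces in a frame.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(image_path: str, output_path: str) -> int:
    """Blur every detected face region; return the number of faces blurred."""
    frame = cv2.imread(image_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = frame[y:y+h, x:x+w]
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(region, (51, 51), 0)
    cv2.imwrite(output_path, frame)
    return len(faces)

print(blur_faces("camera_frame.jpg", "camera_frame_blurred.jpg"))
```

The point of the sketch is not the particular detector, but that irreversibly blurring non-relevant faces at (or immediately after) capture gives the data minimisation principle a concrete technical form.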
While encoding certain less muddy data protection rules into technology itself can only serve as a limited and imperfect solution, it can nevertheless aid the achievement of data protection law in practice, as well as the accountability and supervision of compliance with the law, matters which have proven to be particularly problematic in the context of law enforcement.
Designing effective supervision for processing personal data in the law enforcement context
Our final proposal relates to the issue of effective supervision of the processing of personal data in the law enforcement context. It is paramount that independent supervision is able to function properly, given the peculiarities of the use of facial recognition technology for identification within the law enforcement context. In the section on supervision, we concluded that the main task of independently supervising the deployment of facial recognition technology by the police falls on the shoulders of the national Data Protection Authority. We also questioned the effectiveness of supervision by the DPA, given the opacity surrounding its information position and its knowledge of the criminal justice context.
These findings raise the question of how such supervision can be effectively organised. Because of secrecy around police tactics, operations and strategy, supervision needs to go beyond reactive, complaints-based approaches that assume sufficient awareness among members of the public. 78 Ideally, this supervision should be vested in an authority that can both systematically monitor and stop unlawful data processing in individual cases. This authority should also contribute to the refinement of data protection standards in the law enforcement area, and ensure the legal protection of individuals. 79 As such, this authority needs to have the appropriate expertise and be informed as fully as possible on the processing activities taking place by the police. For this, it needs to have powers for ‘systematic supervision’, that is, the knowledge of and power to remedy the unlawful processing (also) beyond individual cases.
The obvious question that emerges is whether such systematic supervision can be organised within the existing Dutch DPA or whether a new supervisory authority should be created. Creating a new authority is likely to lead to even higher costs, take even longer to put into place and further complicate the supervision process (yet another player in the game) than organising such supervision within the Data Protection Authority. 80 With sufficient funding and (therefore) experts, the required expertise could in principle be organised within the existing DPA. We do not, however, have sufficient understanding of the extent to which the Dutch DPA has the actual powers that are needed for systematic monitoring of data processing. For this we would need better knowledge of yet another field of law: administrative law. It is, for example, unclear to us whether the obligation to ‘repair the situation under an administrative order’ (Article 35c sub d Wpg) includes the power to stop unlawful processing. Under the LED, such a power is not required.
What we also do not fully grasp is how the decisions and actions of the DPA relate to the police’s need for secrecy. The DPA’s information position could, after all, be seriously hindered by this need for secrecy. We acknowledge that this is not an easy issue to solve. For this reason, we also understand Fedorova and others’ suggestion to create a new supervisory authority based on the design of the supervision of Dutch intelligence agencies. 81 After all, in the context of intelligence, secrecy is the starting point. Furthermore, in the Netherlands there is experience with a combination of ex durante and ex post supervision of intelligence agencies that could serve as an example. Nevertheless, before creating something new, we should first conduct a study on the possibilities of what is already there. This will entail answering questions such as: how effective can supervision by the Dutch DPA become, given the potential obstacles to supervision within the law enforcement context, and given the need to combine several (sub)fields of law (not only data protection and criminal procedural law, but also administrative law and, in due time, the new regime of the AI Act)?
Conclusion
In this contribution, we have tried to provide insight into the complex interplay between criminal procedure law and data protection law when it comes to the regulation of police use of facial recognition technology. We have done so by analysing the Dutch ‘Police Deployment Framework for Facial Recognition Technology’. This Framework makes clear that there is an almost complete absence of interaction between the two areas of law, leading to notable gaps in the regulation of this intrusive technology. We identified three main barriers in the interaction between criminal procedure and data protection law that stand in the way of adequate regulation of facial recognition technology: 1) a different mindset underpinning the two areas of law (maximum versus minimum use of data); 2) a different assessment of the required legal basis (proportionality versus strict necessity); and 3) supervision that not only suffers from practical obstacles, but also has to operate at the intersection of very different (sub)fields of law and is, therefore, ineffective.
We have suggested various ideas for filling these gaps, including by looking at the regulation of automated processing of personal data in existing Dutch law concerning ANPR and DNA data. With these suggestions, we also show that criminal procedure law already includes numerous rules that fit or are aligned with the general perspective and approach of data protection law. We have also recommended encoding the less ‘muddy’ rules of data protection law into digital technology itself (particularly in relation to the databases in which facial images are stored), and called for further research on the feasibility of enhancing effective supervision by the Dutch Data Protection Authority, rather than creating yet another supervisory authority.
Despite all our ideas and suggestions, we realise that there are no clear-cut and easy solutions available. After all, the regulation of police use of facial recognition technology operates amidst rapid technological developments and numerous intersecting (sub)fields of law: existing data protection law (the original European rules and the – sometimes diverging – implementation in national laws), the upcoming AI Act, criminal procedure law, and administrative law. However, continuing to wait for EU rules to crystallise and for practice to further develop, as seems to be the main strategy of the Dutch lawmaker, 82 is not an option. It is an illusion to think that law ever fully crystallises. Moreover, the use of facial recognition technology, especially in public spaces, poses such grave concerns for the fundamental rights to privacy, the protection of personal data and even the freedom of expression, that the public debate on its regulation needs to start now. This also requires scholars to look beyond the edges of their own fields of law. We have taken a first small step in that direction; we hope it will prove a constructive one that paves the way for further steps.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
