Abstract
Earlier this year, the European Commission (EC) registered the ‘Civil society initiative for a ban on biometric mass surveillance practices’, a European Citizens’ Initiative. Citizens are thus given the opportunity to call on the EC to propose the adoption of legislative instruments to permanently ban biometric mass surveillance practices. This contribution finds the above initiative particularly promising, as part of a new development of bans in the European Union (EU). It analyses the EU’s approach to facial, visual and biometric surveillance and argues that the current framework, marked by vagueness and permissiveness, should be complemented by concrete and precise rules, bright-line bans and effective remedies.
Introduction: Europe wants to say ‘no’ but does not know how
On 7 January 2021, the EC registered the European Citizens’ Initiative (ECI) under the title ‘Civil society initiative for a ban on biometric mass surveillance practices’ (European Commission, 2021a).
Introduced by the Lisbon Treaty, the ECI is an ‘agenda-setting’ instrument for citizens. The EC registers an initiative as admissible when it does not manifestly fall outside the EC’s authority, is not ‘manifestly abusive, frivolous or vexatious’ and is ‘not manifestly contrary to the values’ of the EU. After registration, one million citizens from at least one quarter of Member States can call on the EC to propose legislation in relevant areas. In this case, the ‘Civil society initiative for a ban on biometric mass surveillance practices’ was registered as admissible.
Biometric and visual surveillance technologies have been taking a greater place in everyday life. After initial implementations in workplaces (Maple & Norrington, 2006), schools (Hope, 2015) or nightclubs (Zhang, 2002, Chapter 2, p. 30), today’s biometrics are embedded within numerous devices, ranging from personal smartphones and laptops to police-worn cameras and other tools used by public actors. Contemporary technologies have become ubiquitous and enable new uses, such as state surveillance, authentication for physical and virtual access, as well as online or face-to-face payment (Ogbanufe & Kim, 2018; Setsaas, 2019). Biometrics play a key role in the evolution of video surveillance systems towards intelligent video analysis. This development towards biometric-enhanced video surveillance (such as facial recognition systems) combined with algorithms raises particular concerns about stigmatization and discrimination, as well as risks to privacy and the protection of personal data, because of the very special nature of biometrics (EDPB, 2020).
Such risks can materialise in various areas and situations. To name but a few: when used by the police, surveillance technologies can result in discriminatory policing, whose biases can outweigh any potential benefits (Berk, 2019); when deployed by Internet giants, such technologies may lead to deceptive practices or monitoring experiments (Van Eijk et al., 2017; O’Hara, 2015); when applied to schools, surveillance systems can monitor children’s behaviour (Hope, 2015). In the era of ‘just-in-case, watch everything’ and ‘more data is better’ (Walsh et al., 2017), when even intelligent dolls may be used for monitoring (Keymolen & Van Der Hof, 2019), people are becoming particularly vulnerable.
While the engagement in surveillance practices is, in general, an old subject of legal, political, sociological or philosophical discussion (Bennett, 2008; Shroff & Fordham, 2010; Webster, 2012; Bannister, 2005; Musik, 2011; Brayne, 2020) at both EU (Wright & Kreissl, 2015) and state level (Clavell et al., 2012; Fussey, 2012; Clavell, 2011; Bjorklund, 2011; Fonio, 2011; Svenonius, 2012; Heilmann, 2011), the situation has become acute in an era when the biometrics market is growing fast, pushed by the COVID-19 pandemic (AlgorithmWatch & Bertelsmann Stiftung, 2020, p. 14).
This market expansion might surprise, given the growing rates of false positives and false negatives caused by mask-wearing measures; yet it is also unsurprising: biometric technologies aimed at preventing the spread of the virus have been broadly implemented around the globe with few (if any) meaningful checks and balances.
This contribution analyses the EU regulatory approach to facial, visual and biometric surveillance. More precisely, it addresses the EU legal regime, as framed under the General Data Protection Regulation (the ‘GDPR’; European Parliament & Council, 2016a) and the Data Protection Directive for Law Enforcement and Criminal Justice Authorities (the ‘LED’ or ‘Law Enforcement Directive’; European Parliament & Council, 2016b).
Now and then, we will also refer to the data protection rules of the Council of Europe as contained in Convention 108.
More concretely, our analysis of the GDPR and the Law Enforcement Directive will show that their applicability is loaded with vagueness. On top of vagueness, we find state-friendly exceptions and rules, and little or no substantive hurdles addressing concrete surveillance technologies. To defend the claim that substantive hurdles to surveillance are needed, we will highlight a series of elements (to us, absent in the EU legal framework) that could be considered by the EC when strictly regulating mass surveillance practices: concreteness and precision, bright-line bans and practical remedies.
Although there is literature on the need for bans in the EU, there are few (if any) studies on the way in which concrete prohibitions could be imposed. Our paper contributes to ongoing debates by submitting a new paradigm for introducing desirable bans:
first, the EU privacy/data protection framework, currently permissive, could become honestly prohibitive regarding technologies whose use may be particularly harmful or dangerous; second, certain techniques, like moratoria prohibiting certain technological uses until proven safe, could be adopted at EU level; third, the EU’s approach to data protection, currently highly process- and procedure-based, could combine different elements known from other fields (like criminal law or the market sector) that can make the applicable rules more effectively enforced.
The discussion proceeds as follows. Section 2 analyses the EU’s applicable data protection instruments: the GDPR and the Law Enforcement Directive. Its goal is to stress the uncertainty stemming from the EU’s abstract approach: that is, a permissive system that does not sufficiently regulate technological implementations. To highlight the EU’s deficiencies, we refer to concrete examples: the processing of sensitive data; and the use of drones and dash cams. Thereafter, Section 3 aims to detect vagueness in particular relation to biometric data: the processing is, here, ‘prohibited’; albeit, a long list of exceptions makes the prohibition quite illusory. Given these insufficiencies, Section 4 suggests a dynamic interpretation of Article 9(4) of the GDPR as an open call toward domestic regulators to introduce limitations on biometric processing. Then, Section 5 critically addresses the LED as, first, a police-friendly legal instrument and, second, an unjustifiably permissive tool regarding the processing of sensitive data, including biometrics. Moreover, two cases are discussed to stress the ubiquitous application of EU data protection laws (Section 6 on Clearview), as well as the broad discretion enjoyed by law enforcement in deploying technologies to process biometrics (Section 7 on Bridges). Furthermore, Section 8 analyses the EU framework in more detail to make the argument for the need for concrete rules and bans, currently absent in Europe. This need is more analytically tackled in Section 9, which recommends: concreteness, precision, bright-line bans and effective remedies. Last, Section 10 draws conclusions and makes some remarks, in light of the latest trends in the EU.
The GDPR was introduced in the European Union together with the Law Enforcement Directive (LED) to reform the legal regime established by the 1995 Data Protection Directive (European Parliament & Council, 1995) and the 2008 Framework Decision respectively (Council of the European Union, 2008).
The person whose data are processed (‘the data subject’) has rights towards those that collect and process their data (‘the controllers’).
Under the EU legal framework, the citizen (‘data subject’) enjoys several individual rights: to information (GDPR, art. 13–14; LED, art. 13); to access her data (GDPR, art. 15; LED, art. 14–15); to rectification of inaccurate data (GDPR, art. 16; LED, art. 16); to erasure of her data (GDPR, art. 17; LED, art. 16); to restriction of processing (GDPR, art. 18; LED, art. 16); to data portability (GDPR, art. 20); to object to processing (GDPR, art. 21); and not to be subjected to automated decision-making (GDPR, art. 22). The latter right is treated as a general prohibition: a duty of the controller, who must abstain from subjecting individuals to decisions that are based solely on automated processing and that can significantly affect them.
Key notions in both the GDPR and the LED are ‘personal data’ and ‘processing’. Pursuing the objective of technological neutrality, these texts do not pay any specific attention to individual technologies (like biometrics, drones, RFID and so forth), but define rules and principles applicable to any activity that processes personal data, whatever the technology. So, technologies fall under data protection law when they process personal data. In the area of surveillance, this is often the case. The concept of processing is therefore very powerful in the sense that it applies easily. The same goes for the term personal data. Whenever data identify or can identify individuals, they are personal data. This could be just about everything (from written text, numbers, sounds and odours to DNA). This equally applies to the collection of visual data: both the GDPR and the LED apply to the taking of images and the use of video surveillance systems, because filming equals ‘processing personal data’, provided that persons are filmed.
However, the Regulation does not apply to the processing of data that have no reference to a person, e.g. if an individual cannot be identified, directly or indirectly (EDPB, Guidelines 3/2019 on processing of personal data through video devices, 7). The EDPB Guidelines give three examples. Example 1: the GDPR is not applicable to fake cameras (i.e. any camera that is not functioning as a camera and thereby is not processing any personal data), although in some Member States these might be subject to other legislation. Example 2: recordings from a high altitude only fall under the scope of the GDPR if, under the circumstances, the data processed can be related to a specific person. Example 3: a video camera integrated in a car to provide parking assistance falls outside the GDPR if the camera is constructed or adjusted in such a way that it does not collect any information relating to a natural person (such as licence plates or information which could identify passers-by).
There is not much really prohibited in the GDPR and the LED. As a rule, all activities are allowed when those setting up the technologies (‘data controllers’) abide by the general rules and principles (like lawfulness, necessity, proportionality and data minimization).
Those that process citizens’ data (‘data controllers’) must comply with general data protection principles, namely lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity/confidentiality and accountability (GDPR, art. 5). Several duties, including the obligation to conduct impact assessments, are imposed to safeguard compliance with these principles. For a more detailed analysis, see European Union Agency for Fundamental Rights and Council of Europe (2018, pp. 139–186).
Art. 9(1) GDPR in principle prohibits the processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation. See also Article 10 of the LED.
If one looks closer at the text of the GDPR, there are indirect hints at certain technologies, including those used for filming and picture-taking. Article 35(3)(c), for instance, requires the carrying out of a data protection impact assessment in case of systematic monitoring of publicly accessible areas on a large scale; and Article 37(1)(b) requires processors to designate a data protection officer if the processing operation by its nature entails regular and systematic monitoring of data subjects.
Overall, the treatment of technologies capturing images in EU data protection law is sloppy, unsystematic and kept vague to avoid legal controversies. The soft law guidance offered by data protection authorities in Member States and by the European Data Protection Board (EDPB) is hardly any better. The result is a permissive system of rules and principles that in practice hardly impedes the roll-out of these visual surveillance technologies. Two examples could support this argument:
Example 1: The GDPR and the LED do not pay specific attention to the processing of visual data (contrary to their treatment of ‘biometric data’ (see below)). Apart from the more principled questions (‘is CCTV not serious enough a threat to privacy to consider it in more detail?’), this raises all sorts of legal/technical questions: for instance, should visual data of persons be regarded as sensitive data, because they always reveal something about skin colour and very often something about race or origins? Accepting that all pictures of persons reveal data about their skin colour, origins or ethnicity would put a serious filter of prohibition on the use of cameras and video surveillance technologies. This deduction is, however, avoided by all legal (interpretative) authorities in Europe, which seem to hesitate to initiate a discussion on people’s growing addiction to the use and collection of visual data. At best, when the issue is addressed, as in the guidelines of the EDPB, one finds rather weak arguments about the application of the household exception (processing data for strictly private purposes is exempt from the GDPR) and about intentionally or not capturing images to collect data on ethnicity (compare the ‘howevers’ in EDPB, 2019, p. 17). The Council of Europe (see further in the text) does not do better. In the recitals of the GDPR, there is another, probably not sufficiently substantiated, argument that makes use of the sophisticated definition of biometric data (that will be discussed below). In a nutshell, the baseline argument is that a regular photograph does not amount to this sophisticated definition of processing of biometric data and is, hence, not ‘sensitive data’ (GDPR, recital 51); this argument is echoed by the Council of Europe in its Explanatory Report to Convention 108+.
The regulatory attention to biometrics in European data protection law fares better. By their nature, biometrics make it possible to identify persons, so there appears to be no vagueness here: data protection law applies. However, complexities arise when trying to comprehend how European data protection law applies more concretely to biometrics. Needed are ‘personal data’ and ‘processing activities’. Both the GDPR and the LED define biometric data and explain when these data are ‘personal data’ in the sense of data protection law: ‘personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data’ (GDPR, art. 4(14); LED, art. 3(13)).
The term ‘dactyloscopic’ data refers to fingerprint data. See ICO (n.d.).
Therefore, facial data are explicitly mentioned in data protection law, but only in the context of biometric data.
It follows from this definition that the GDPR and the LED cover biometric data types, like facial images or fingerprints, when the processing of these data meets three cumulative prerequisites (Kindt, 2018, pp. 529–531). First, the processing activity must be technology-based (‘specific technical’; EDPB, 2019, paragraph 75; GDPR, recital 51; Jasserand-Breeman, 2019, pp. 66–68; Kindt, 2018, pp. 529–531; Jasserand, 2016, p. 303; Blair & Campbell, 2018). Second, the data must reveal (‘relate to’) physical, physiological or behavioural traits. Third, the processing must allow or confirm the unique identification of the natural person.
Faceprints and similar characteristics can qualify as physical or physiological traits; whereas the voice, the signature or the way one uses her keyboard, walks or sleeps can be classified as behavioural traits. See Kindt (2013, pp. 23–32). Thus, identification techniques may vary from face recognition, fingerprint verification or iris scanning (physical/physiological traits) to the processing of keystrokes or signatures (behavioural traits). See ICO (n.d.).
The GDPR adds to this Article 4 GDPR-definition a somewhat strange sequel in Article 9. This provision on ‘special’ sensitive data, discussed in the preceding section, prohibits the processing of biometric data ‘for the purpose of uniquely identifying a natural person’, thus adding a purpose element to the Article 4 definition.
We briefly discussed above the data protection distinction between ordinary personal data (as we would like to call them) and riskier, special personal data. Following a risk-based approach, art. 9 GDPR groups the latter and as a rule prohibits their processing. More concretely, the GDPR presumes that such a special type of data merits a special type of protection, because its processing appears more likely to violate human rights. See ICO (n.d.). With regard to the risk-based approach of the GDPR, see Lynskey (2015, pp. 8–9).
As the EDPB explains, if the goal is not to single out and uniquely identify a natural person, but to distinguish between groups, then the data involved do not fall under the special category provision of art. 9 GDPR. EDPB (2019, paragraph 79); Kindt (2018, p. 531).
With ‘unique identification’ central to both provisions, it can be argued that biometric data are almost always a special category of personal data. In this context, any processing operation aimed at, for example, authenticating identity or drawing inferences on natural persons can fall under the prohibition of article 9 GDPR (ICO, n.d.). However, a strict reading of articles 4 and 9 GDPR does not exclude that some biometric data are ‘ordinary’ personal data, not sensitive data. The UK data protection authority, the ICO, rightly mentions this possibility, without however giving examples; its guidance is illustrative for the more general narrative of this contribution, as it nicely identifies a broad range of biometrics and clarifies the distinction between the processing of pictures/photos and the processing of biometric data (ICO, n.d.).
In its Guidelines, the EDPB does not elaborate on this theoretical possibility of a category of ‘ordinary’ biometric personal data (as opposed to the art. 9-sensitive biometric data); albeit, it does give an example (in our view quite dubious).
The example raises our eyebrows and contributes to a feeling that citizens should get ‘over it’ and accept filming for all kinds of purposes.
EDPB, Guidelines 3/2019 on processing of personal data through video devices, 19: “However, when the purpose of the processing is for example to distinguish one category of people from another but not to uniquely identify anyone the processing does not fall under art. 9. Example: A shop owner would like to customize its advertisement based on gender and age characteristics of the customer captured by a video surveillance system. If that system does not generate biometric templates in order to uniquely identify persons but instead just detects those physical characteristics in order to classify the person then the processing would not fall under art. 9 (as long as no other types of special categories of data are being processed). However, art. 9 applies if the controller stores biometric data (most commonly through templates that are created by the extraction of key features from the raw form of biometric data (e.g. facial measurements from an image)) in order to uniquely identify a person”.
Aside from these EU legal technicalities, the lesson is that in most cases biometric data processing is ‘special’ and restricted in the sense of article 9 GDPR. This also resonates with the ‘other’ European text on data protection, as developed by the Council of Europe: Convention 108+.
In 2018, two years after the adoption of the GDPR, Convention 108 was modernised to address personal data-related challenges emerging from more recent technological developments, as well as to enhance its follow-up mechanism (for a critical discussion, see De Hert & Papakonstantinou, 2014). The final outcome of the reform, Convention 108+, likewise treats biometric data uniquely identifying a person as a special category of data.
But where does this finding bring us? A closer look at the article 9 GDPR prohibition reveals no absolute restriction. The provision suggests a prohibition on the processing of sensitive data, but then allows the prohibition to be bypassed where the individual gives explicit consent (article 9(2)(a)) or where one of the further exceptions listed in article 9(2) applies.
Consent is defined as ‘any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her’ (GDPR, art. 4(11)).
In the past, we have defended the regulatory option that we find in article 9(4) GDPR: creating additional legislation on the specific technology of biometrics (De Hert, 2005; also Alterman, 2003; Campisi, 2013, Chapter 15). Member States should go beyond the GDPR and use this paragraph in article 9 GDPR to introduce restrictive laws on biometrics. Biometric data are the most unique data that data subjects have as individuals. Fraud with raw biometric data is, for example, critical, since one cannot change or fall back on alternative biometric data. A closer look at biometric technologies reveals a wealth of possible policy choices for an enlightened regulator. In a 2005 study on biometrics, a series of possible clarifications, additional rights and prohibitions were identified to embed biometrics within a foreseeable, human rights-friendly framework that recognizes as a starting point a right to biometric anonymity, enhances consumer law with consumer-protective ideas for private uses of biometric systems (e.g. consent can never be given immediately on the spot, but is only acceptable after a week of reflection), enriches evidence law with prohibitions on the use of biometric data in court and regulates the use of these technologies with specific ‘no-go’s’.
Such prohibitions on possible uses include: prohibitions on use for ordinary financial transactions (as opposed to, say, access to ATMs), for social benefits or employment, or for potentially dangerous uses such as ‘keyless entry’ into hotel rooms; prohibitions on multi-modal biometrics; prohibitions on central storage of biometrics; prohibitions on storing ‘raw images’; prohibitions on using financial rewards to promote participation in biometric identification programs; prohibitions on non-encrypted processing and transmission of biometric data; prohibitions on biometric technology that generates sensitive data when alternatives exist; and incriminations for theft and unauthorised use of biometric data.
The foregoing shows that general data protection alone will not do (good) and that satisfactory regulation of technologies requires a broader palette of legal techniques in addition to the GDPR tools, in particular bright-line rules that are technology-specific and take into account the properties of biometric practices.
Consumer rights are, for instance, often better enforced and recognized than data protection rules. Consumer law is also an ideal instrument to discourage voluntary biometric schemes, since voluntary schemes have a funny way of turning into compulsory ones in all but name. Consumer law can make it clear that anyone who is asked to voluntarily submit biometric identifiers should be fully informed of the potential risks, competent to understand the impact of their actions, and under no threat of harm should they refuse. Harm should be interpreted very broadly here, to include such things as the inconvenience of having to wait in a much longer line. To inhibit pressured or hasty decision-making, a waiting period between application and recording of biometric IDs should be required. This also serves to encourage serious deliberation, and partially offsets the public tendency to assume that any commercial technology permitted by law must not pose a serious risk to a person. Some of these consumer law ideas, and ideas taken from other legal areas, can be introduced in data protection acts. The Belgian data protection Act contains interesting rules about the reversal of the burden of proof and introduces a swift civil law procedure allowing the data subject to obtain judicial review within very short time limits.
The data protection authorities in Europe – always strong defenders of technology-neutral data protection rules – are aware of the dangers associated with biometrics and the need for specific rules, a need they try to address with soft law guidance. The EDPB (2019; 2020) in its Guidelines has inserted a noteworthy paragraph, ‘Suggested measures to minimize the risks when processing biometric data’, with clear rules on how to apply biometric technologies that move in the direction of the set of bright-line rules enumerated above: a suggestion to avoid further use; ownership of the data through decentralization rather than central storage; and a distinction between templates and raw data, with, as a rule, a prohibition on or minimization of the use of the latter. Article 9(4) GDPR (discussed above) should therefore be read as an open invitation to national regulators ‘to introduce further conditions, including limitations’ on biometric processing.
The GDPR was introduced in 2016 together with the Law Enforcement Directive. Textual analysis of both documents reveals a government-friendly attitude. Both lay the foundations for state-friendly legal ecosystems. The GDPR does this in a general way for all public authorities, whereas the LED focuses only on law enforcement authorities.
The GDPR contains countless exceptions to its rules and principles, allowing Member States to adopt laws exempting themselves to a far-reaching degree from these rules and principles (Hildén, 2019, pp. 180, 187). It also makes the sharing of data with law enforcement authorities very flexible. All possible data collected under the GDPR can easily be disclosed to law enforcement agencies based on article 6(1)(c) GDPR, which justifies these kinds of transfers and disclosures ‘for compliance with a legal obligation to which the controller is subject’ (EDPB, 2020, p. 15). The Law Enforcement Directive ‘serves’ law-enforcement authorities. The Directive targets public authorities and private entities performing public functions, related to the ‘prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security’ (LED, art. 3(7)). In this context, the LED can cover data processing at EU and domestic level, including data storage on national databases or relevant processing operations performed by national forces. Contrary to the GDPR (directly applicable to and binding Member States without the need for transposition), the LED requires incorporation into domestic law. The vast majority of the EU Member States have today transposed it; the exception is Denmark (European Union, n.d.). Despite different national traditions and cultures, not much disharmony is expected, in light of the LED’s adoption of the GDPR’s terminology on basic concepts.
Firstly, there is pragmatism in the Law Enforcement Directive about the legality requirements for police data processing. The Law Enforcement Directive does not expressly specify what kind of legal basis is needed for law enforcement authorities to process data. The Directive applies to the ‘prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security’. These are all considered valid and justified purposes. The Law Enforcement Directive does not go into more detail (for instance, is ‘prevention’ as important as ‘prosecution’, and where does prevention stop?).
For all these purposes a legal basis is required (article 8(1)) that at least identifies the objectives of the processing, the personal data to be processed and the purposes of the processing (article 8(2); see also recital 33). However, this legal basis does not necessarily require a legislative act adopted by a national parliament. It can be something else or hierarchically ‘lower’, but only if domestic law allows this (‘without prejudice to requirements pursuant to the constitutional order of the Member State concerned’; recital 33).
Therefore, in principle, unless national constitutions block this, the police do not have to ‘go to parliament’ to organize their data processing. The recitals of the Law Enforcement Directive usefully remind us that law enforcement authorities should also respect the human rights case law of the Court of Justice of the European Union (CJEU) and the European Court of Human Rights (ECtHR) on the legality requirements, demanding that the legal basis for privacy infringements in domestic law, whatever its nature, be ‘clear and precise and its application foreseeable for those subject to it’ (LED, recital 33; De Hert in Mitsilegas & Vavoula, 2021, Chapter 10).
Secondly, there is Law Enforcement Directive pragmatism about sensitive data and policing. European data protection is very firm about the riskiness of the processing of certain data categories. Like the GDPR in article 9 (above), the Law Enforcement Directive recognizes these special categories of personal data (article 10). The LED list is identical to the GDPR list, including ‘biometric data for the purpose of uniquely identifying a natural person’. However, the similarities halt there. Contrary to the GDPR (where biometric processing for the purpose of unique identification is prohibited, albeit with a long list of exceptions), the Law Enforcement Directive stipulates that such processing ‘shall be allowed’ where certain stringent requirements are met: (1) strict necessity of the processing; (2) appropriate safeguards; and (3) authorization by Union or Member State law, or processing necessary to protect the vital interests of the data subject or of another person, or processing relating to data manifestly made public by the data subject (LED, article 10).
The resulting system allows for the processing of biometric data in various situations and scenarios: national laws can easily be adopted, and the Law Enforcement Directive is silent on the meaning of the term ‘vital interests’ (which, in our view, should be read restrictively).
Recital 46 GDPR offers some clarity on the term ‘vital interests’.
It is noted that art. 11 LED sets out a clear prohibition on discriminatory profiling based on sensitive data, including biometric data processed to uniquely identify an individual. Yet, that law enforcement (or other competent) authorities are bound by a general duty not to discriminate on such sensitive grounds is, in our view, a requirement stemming from general principles present in any democratic society (rather than a requirement introduced by the LED).
To underpin the foregoing analysis of the EU legal rules on visual/biometric data and on facial recognition, we now move on to two recent cases: Clearview and Bridges. The first case demonstrates the generous scope of European data protection, which seemingly always applies, and the equally generous enforcement system, where NGO actions are complemented by actions of the EU data protection authorities. The second case reveals that law enforcement authorities seem to enjoy wide discretion in using technologies processing biometric data.
Clearview: In May 2021, several national data protection authorities and organizations (namely, Privacy International, ICO, CNIL, Hermes Centre for Transparency, Digital Human Rights, Homo Digitalis and noyb – the European Center for Digital Rights) filed complaints against Clearview AI, an American facial recognition technology firm. The company has created the (allegedly) biggest known database of more than three billion face images. With its automated tools, the firm searches the web and collects human (facial) images. It then stores them in its private repository and sells access to other firms, as well as to law enforcement authorities (EDRi, n.d.). A closer look reveals two critical issues.
Firstly, what are the grounds for data collection by this American private firm? Can it rely on consent or legitimate interests or just consider that the processing of images is lawful, because relevant data are manifestly made public? Are the principles of transparency and purpose limitation respected? Although, to Clearview, relevant images come from public-only web sources (like news or public social media), it can be claimed that the fact that something is available online does not necessarily authorise/legitimise anyone to appropriate it in any way they wish (this claim can be supported by the CNIL’s recent order against Clearview to stop gathering data of French citizens and comply with delete-requests, CNIL, 2021).
Secondly, does the use of Clearview’s tool by European law enforcement authorities respect the Law Enforcement Directive and, in particular, the requirement for a proper legal basis and the Article 10-demands on biometric data? We recall that this LED-provision allows processing of biometric data solely under a strict necessity test, with appropriate safeguards in place and if there is a legal basis (Union or national law), when the processing is necessary to protect the vital interests of the data subject or of another person or the processing relates to data which are manifestly made public by the data subject (LED, art. 10).
As we know from the case law of the ECtHR and the CJEU, a more rigorous version of the necessity test needs to be applied in cases where the interference with the right to the protection of personal data is serious (De Hert & Bouchagiar, 2021a, pp. 313–316). In the Clearview case, the interference seems particularly serious: the private database, with more than three billion images, is gigantic, allowing for a blanket surveillance regime in which everyone, not only criminals or suspects, may be monitored or stigmatised (Neroni Rezende, 2020). Hence, a fully-fledged version of the necessity test would demand: the existence of a pressing social need; suitability and appropriateness of the measure to address the pressing social need; an assessment of whether there are less harmful measures to achieve the desired purpose; and proportionality of the interference to the aim pursued, including a balancing exercise (private versus public interests) and a search for appropriate safeguards.
While the need to effectively fight against crime might constitute a pressing social need, it is extremely doubtful whether the other necessity-requirements, mentioned above, are met. On suitability and the existence of less harmful measures, law enforcement authorities do have various tools, from surveillance cameras to automated fingerprinting or profiling technologies that can assist in effectively fighting against crime. Moreover, on the balancing exercise (public interest versus individual rights), there are neither concrete, detailed rules (eg, on access or deletion) governing the measure (should not only expert police officers be granted access to Clearview’s database? For how long are data retained?) nor adequate safeguards (should Clearview’s database be used for any crime or only for serious offences? Is there adequate, independent review of the measure? Are there organisational and technical measures to ensure security of the system?). At least one author has doubted that Clearview could pass the strict necessity test (Neroni Rezende, 2020).
What else can be learned from the Clearview case? Not much yet. There is nervousness in Europe both about private companies offering these kinds of facial recognition tools and about the use of such tools by law enforcement authorities. On both issues, the outcomes of the legal proceedings must be awaited. What the case does show is that Europe's general data protection rules offer a first grid for assessing new technologies. Such a grid seems to be lacking in the US. Clearview AI's primary remaining market is indeed the US, where the lack of federal data protection laws has allowed the tool to be taken up by hundreds of law enforcement agencies, in spite of its legal issues in certain States. As of 2020, the company was believed to have about 2,200 clients internationally, including the FBI, Immigration and Customs Enforcement (ICE) and the Department of Justice (Ikeda, 2021).
The mixed performance of the European framework: Bridges
Moving on to Bridges, a case brought before the England and Wales High Court (the 'EWHC'), one may see how shaky the European legal framework is and how easily it can be bent to allow European law enforcement authorities to do virtually whatever they wish (Stock, 2019). The Appellant was Edward Bridges, a civil liberties campaigner. The Respondent was the Chief Constable of the South Wales Police ('South Wales Police'). This force made use of facial recognition software ('NeoFace Watch'), developed by North Gate Public Services (UK) Ltd, which compares pictures with faces on a 'watchlist' derived from a police database of existing facial images. The system was used between May 2017 and April 2019, mostly at large public events, whereby the public was informed via social media messages on Facebook and Twitter, notices on police vehicles, postcard notices handed out to members of the public and information put on the force's website.
The Divisional Court found no violation of the right to privacy. It was satisfied by, among other things: the fact that the processing was (to the court) necessary; the conduct of a data protection impact assessment; the fact that, where there was no match with information stored on the police's database, the relevant data were deleted (hence, to the court, the principle of data minimisation was not prejudiced); the existence of codes of practice and policies; the transparency of the pilot project; and human intervention (by police officers) (Bridges, 2019).
However, these considerations were rejected by the England and Wales Court of Appeal. The appellate court found, among other things, that the technology deployed by the police was not 'in accordance with the law' (as demanded by Article 8 paragraph 2 of the European Convention on Human Rights). The finding of insufficiency was supported by 'fundamental deficiencies' in the legal regime, which granted police officers unfettered discretion, as well as by: the novelty of the technology; the fact that it processed data of a large number of individuals, most of whom would be of no interest; the sensitivity of the information involved; and the automated nature of the process (Bridges, 2020, paragraphs 86–91; Tomlinson, 2020; Ryder & Jones, 2020).26
The appellate court also found violations of the Data Protection Act 2018 (there was no proper data protection impact assessment) and of the 'public sector equality duty' (the police had offered no guarantees that the technology was not biased).
The Bridges case teaches us that European data protection laws, combined with non-discrimination law, have relevance in questioning thoughtless implementations of facial recognition in public areas. Judges apply criteria taken from human rights case law (on foreseeability and accessibility of the legal basis, and on proportionality) in conjunction with the rules of data protection and English non-discrimination law on impact assessments and transparency. All that does not amount in itself to a formal and absolute 'no' to facial recognition. On the contrary, the ease with which the Divisional Court found the South Wales Police's scheme compatible with these legal requirements and rules confirms our earlier finding that, notwithstanding the obvious tensions with the principles of legality, accountability and minimization, more is needed to take away all ambiguity surrounding this technology. A similar position is taken by Wojciech Wiewiórowski, the EDPS, who runs through all the objections to facial recognition (especially having regard to the principles of data minimization and data protection by design and by default), but calls for complementary regulatory steps, possibly including a ban or at least a moratorium, since there is clear political pressure in Europe to deploy and use the technology, resulting in projects and police uses that take advantage of all the loopholes and vagueness in the European data protection rules. Crucial is his understanding that 'there was no great debate on facial recognition during the passage of negotiations on the GDPR and the law enforcement data protection directive' (Wiewiórowski, 2019).
The EDPS has rightly understood the necessity of its intervention and of a regulatory debate. The EU, meanwhile, has not hesitated to create a legal basis for full use of biometric visual data by border and law enforcement authorities in its migration and border policies.27
On migration policies, the EU seems to have opened all the gates. In September 2020, the EC released the amended Eurodac proposal as part of the New Pact on Migration and Asylum (European Commission, 2020a; 2020b). Focusing on migrant surveillance (in particular, the digitisation of migration management and border security), the EC appears to pay special attention to minors by proposing a lower age threshold for biometric data collection (6 years old, instead of 14; see the discussion in Marcu, 2021). The target is not only vulnerable groups (like minors), but everyone. These developments, and the case law quoted in the next footnote, are particularly relevant in light of recent discussions on digital green certificates and vaccine passports (European Commission, 2021b). And they can support the argument that, instead of national limitations on certain uses of technologies (on the basis of, for example, the above-analysed open invitation of art. 9(4) GDPR), what national lawmakers actually enjoy is the power to create more legal grounds allowing data tsunamis to be processed.
First, in Bochum, the CJEU considered the storage of fingerprints within the passport lawful, under Regulation (EC) No 444/2009, for the purpose of preventing illegal entry into the EU; and it clarified that this Regulation does not in itself allow for centralised storage of such fingerprints. Then, in Willems, the Court appears to grant broad discretion to national legislators in setting out the legal basis for the establishment of centralised databases. See: Michael Schwarz v Stadt Bochum, paragraph 61 ('(…) The regulation not providing for any other form or method of storing those fingerprints, it cannot in and of itself, as is pointed out by recital 5 of Regulation No 444/2009, be interpreted as providing a legal basis for the centralised storage of data collected thereunder or for the use of such data for purposes other than that of preventing illegal entry into the European Union (…)'); W. P. Willems et al. v. Burgemeester van Nuth et al., paragraphs 9, 48 ('(…) In accordance with Recital 5 in the preamble to Regulation No 444/2009, which amended Regulation No 2252/2004: "Regulation … No 2252/2004 requires biometric data to be collected and stored in the storage medium of passports and travel documents with a view to issuing such documents. This is without prejudice to any other use or storage of these data in accordance with national legislation of Member States. Regulation … No 2252/2004 does not provide a legal base for setting up or maintaining databases for storage of those data in Member States, which is strictly a matter of national law." (…) It follows, in particular, that Regulation No 2252/2004 does not require a Member State to guarantee in its legislation that biometric data will not be used or stored by that State for purposes other than those mentioned in art. 4(3) of that regulation (see, to that effect, judgment in Schwarz, C-291/12, EU:C:2013:670, paragraph 61) (…)').
These considerations may run counter to case law of the ECtHR and, in particular, Marper’s holding that the retention of fingerprints and DNA-related information of individuals suspected (but not convicted) of certain crimes was a disproportionate interference with the right to privacy.
“The purposes that triggered the introduction of facial recognition may seem uncontroversial at a first sight: it seems unobjectionable to use it to verify a person’s identity against a presented facial image, such as at national borders including in the EU. It is another level of intrusion to use it to determine the identity of an unknown person by comparing her image against an extensive database of images of known individuals”.
The foregoing teaches us that, under the EU legal framework, the regulation of biometric data is well organized via hard law, binding private and public entities, as well as via soft law, offering useful guidance on the interpretation of the relevant biometric provisions. Still, the EU framework was adopted in times before the facial recognition controversies and clearly lacks a political message on the limits of this technology, in particular on biometric technologies in specific contexts, such as surveillance practices via facial recognition deployed by law enforcement. The UK Surveillance Camera Commissioner has recently stressed the insufficiencies of the GDPR, as well as the need for primary law (and, until then, a moratorium) to regulate facial recognition implementations by the police (Linkomies, 2020, pp. 14–15):
‘I have asked for a fundamental review on surveillance. The GDPR is not enough. Primary legislation is required for this field. (…) For now, a temporary moratorium is needed. Facial recognition is only good if used in certain circumstances’ (own emphasis).
In addition to the remarks made by EDPS-head Wiewiórowski (discussed above), three points can be highlighted to back up the ‘GDPR is not enough’ and ‘we need at least a moratorium’ positions. First, the GDPR’s abstract provisions may indeed fail to address the particularities of specific technological implementations in national contexts. With the GDPR and the Law Enforcement Directive, the EU legislator has opted for a broad regulatory framework (Lynskey, 2015, p. 10) that binds Member States. The baton is by necessity passed to national regulators, but the GDPR in particular does not give them powers to prohibit certain uses of technologies in general. Hence, all the weight falls on the shoulders of data protection authorities, which either do not act or act in an uncoordinated way with confused outcomes (see, on dash-cams, Štitilis & Laurinaitis, 2016). It would make sense, as pointed out by the UK Commissioner, to have primary law, such as an Act of the UK Parliament,30
For examples of primary legislation in the UK, see UK Parliament. (n.d.). We assume that the UK Surveillance Camera Commissioner was not referring to primary law in the EU context (a treaty). For the EU primary and secondary legal instruments, see Craig & De Búrca (2015, p. 103 et seq.).
Compare: LED, art. 10 (using the terms ‘shall be allowed’) with GDPR, art. 9(1) (using the phrasing ‘shall be prohibited’). Still, it is reminded that the LED allows biometric data processing only if strict conditions are met.
In our view, there is a need for more specific rules and bright-line bans. As analysed above, neither the GDPR nor the Law Enforcement Directive in general embraces the concept of prohibitions on certain technologies, perhaps out of fear that such a step would stifle innovation or hamper governmental policies. Remarkably, the EU seems to opt for bans not through lex generalis (like the GDPR) but via lex specialis; and this is law distant from the key data protection instruments, like the Digital Services Act.32
To illustrate, we refer to the EDPS’ recent opinion calling for a ban on surveillance-based targeted ads, having regard to the Commission’s Digital Services Act (DSA). We welcome the EDPS’ opinion and find it particularly promising in an era: when advertising and commercial surveillance can lead to, among other things, manipulation, undue discrimination or breaches of privacy and data protection; when the use of certain technologies, such as systems visually tracking and capturing biometric data (physiological responses), can disproportionately interfere with fundamental rights; and when receiving commercial services in exchange for personal data (including biometric data) is expressly permitted under Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services. See European Parliament & Council, 2019, art 3(1) (‘(…) This Directive shall apply to any contract where the trader supplies or undertakes to supply digital content or a digital service to the consumer and the consumer pays or undertakes to pay a price. This Directive shall also apply where the trader supplies or undertakes to supply digital content or a digital service to the consumer, and the consumer provides or undertakes to provide personal data to the trader (…)’). Interestingly, similar proposals on banning surveillance advertising have been submitted by US privacy, consumer, competition and civil rights organisations. See, among others, Lomas (2021b). On the EDPS’ opinion, see EDPS (2021a; 2021b); European Commission (2020c); Lomas (2021a). On surveillance-based ads, see: Forbrukerradet (2021).
Art. 11 LED could serve as a model. This provision sets out a clear prohibition on discriminatory profiling based on sensitive data, including biometric data processed to uniquely identify an individual.
See the EDPB and the EDPS’ 5/2021 Joint Opinion on the proposal for an Artificial Intelligence Act, where they stressed the lack of concrete prohibitions (European Commission, 2020d; EDPB & EDPS, 2021a, pp. 2, 3, 11, 12; 2021b). This proposed Act forbids (in art. 5): the ‘placing on the market, putting into service or use’ of AI technologies that affect people’s behaviour in a way that leads or may lead to individual ‘physical or psychological harm’; the ‘placing on the market, putting into service or use’ of certain AI technologies by public actors; and the use of certain biometric technologies in public areas for law enforcement goals (unless this is considered necessary, for example, to prevent crime). In our opinion, these prohibitions are not real prohibitions (see also Malgieri & Ienca, 2021). Rather, this is a system tolerating various biometric implementations, like the selling of technologies by European designers to third countries, or the use of systems processing biometrics after the fact (not ‘real-time’) or for purposes other than law enforcement (such as public health) (Veale & Borgesius, 2021). In addition, the requirement of ‘physical or psychological harm’ to a person excludes situations where the detriment goes beyond the individual level, is difficult to connect with a concrete person, or occurs at another level. Experience from environmental law has taught us how the public, the people (rather than a concrete individual), may suffer detriment (including future detriment) due to certain practices not targeted at a particular person. Moreover, the prohibition related to law enforcement is conditional upon many ifs and may not apply where the AI practice is deemed necessary to achieve certain goals, like crime detection. Interestingly, although the proposed Act provides for a detailed authorisation process (including an independent body), this rigorous process may not apply in certain cases of urgency (art. 5); and, in any event, Member States are granted discretion in permitting by law such AI practices in the law enforcement context (art. 5).
In light of the above analysis, the following regulatory starting points could be taken into account by the European lawmakers to strictly regulate surveillance practices:35
These starting points are partly lessons learned from our working paper on US laws. For a more detailed analysis of these laws (twenty in total), see De Hert and Bouchagiar (2021b).
Concreteness: The EU lacks unambiguous clarity: legal provisions with clear objectives, targeted at concrete technologies. Its tech-neutral regime, based on generic rules and abstract principles, can, on the one hand, be positive in covering any technology (and reducing the workload of lawmakers). On the other hand, however, it lacks enhanced legal certainty. Can developers of facial recognition tools really foresee the way in which, and the circumstances under which, the law may apply to their case, and be more certain about whether and how to enter the market? Are individuals subject to facial recognition truly aware of how they may be protected against any possible violation? In our view, no. Examples from other jurisdictions demonstrate that piecemeal regulation can better establish and maintain legal certainty. In the US, for instance, legal provisions analytically describe the ‘whethers’ and the ‘hows’ regarding entering the market or the protection of the individuals concerned.36 By way of example, see section 20 of the California Privacy Rights Act (2020), addressing businesses and imposing various duties, from cybersecurity audits to regular reporting and risk evaluations.
Precision: The EU legal scheme lacks precision on key topics, in particular: consent, the duty to inform, function creep and the absence of certain prohibitions/duties. On consent, the EU focuses on specific requirements, like that consent be informed or specific; yet further demands could make the regime more individual-friendly. For instance, special attention could be paid to the will of the person under surveillance: does she/he consent with independent, uninfluenced and genuine will? The EU could learn from other legal instruments, like the (US) federal 2020 National Biometric Information Privacy Act, which aims to tackle biometric data exploitation by private entities and requires that consent be ‘informed’, ‘specific’ and so forth (terms also present in the GDPR), but also focuses on the independent, genuine will of the person concerned, who must be free from outside control.37
See 2020 National Biometric Information Privacy Act, sections 3(b)(1) and 2(4).
Again, lessons could be learnt from the US, such as from Washington’s Engrossed Substitute Senate Bill 6280 (section 3(1) and (2)), imposing duties to prepare accountability reports.
See, for example, the proposed federal 2019 Commercial Facial Recognition Privacy Act (section 3(a)(3)); Virginia Senate Bill 1392, section 59.1–574; or Washington’s Engrossed Substitute Senate Bill 6280, section 3(7).
For such prohibitions in the US, see, among others: the proposed Commercial Facial Recognition Privacy Act of 2019, section 3(a)(2); the proposed 2020 Genetic Information Privacy Act, section 56.181(e); Virginia Senate Bill 1392, section 59.1–574; Washington’s Engrossed Substitute Senate Bill 6280, section 6; or New Jersey’s Assembly Bill 989.
Such bans exist in the US. See for instance: the federal 2020 National Biometric Information Privacy Act, sections 3(c) and 3(d); the 2008 Illinois Biometric Information Privacy Act, section 15(c); or Texas Business and Commerce Code (Sec 503.001), point (c).
Such standards can be found in the US: the federal 2020 National Biometric Information Privacy Act, section 3(e); the 2008 Illinois Biometric Information Privacy Act, section 15(e); or Texas Business and Commerce Code (Sec 503.001), point (c).
Bright-line bans: The EU lacks bright-line rules. To support the argument for the need for such rules, one may refer to the US regime, where explicit prohibitions on certain technological uses or surveillance practices can be found. Remarkably, in the US, certain principled prohibitions reach the level of unconditionality: a Californian Act prohibits, in an absolute manner, all law enforcement actors from engaging in biometric surveillance through cameras;43
California’s Assembly Bill No 1215 (section 2(b)).
Section 3(a) and (b) 2020 Facial Recognition and Biometric Technology Moratorium Act.
Practical organisation of remedies: Although the GDPR and the LED recognise the right of access to justice and the right to an effective remedy, first, they do not go into detail on how one may exercise these rights and, second, the approach is monolithic, based solely on data protection law. By contrast, the US seems to better clarify its justice routes, as well as to combine several legal fields with a view to enhancing the effectiveness of its remedy scheme. To illustrate, the proposed federal 2019 Commercial Facial Recognition Privacy Act expressly classifies any breach of its provisions as an unfair or deceptive act/practice;45
See section 4(a).
To sum up, the analysis of the GDPR and the LED (in section 2) showed that the EU’s general and tech-neutral approach may result in uncertainty. Vagueness was further demonstrated by the discussion (in section 3) in particular relation to the processing of biometric data: this processing is, on the one hand, prohibited and, on the other hand, permitted under a long list of exceptions. Thereafter, although a dynamic reading and interpretation of Article 9(4) of the GDPR (as argued in section 4) could lead national lawmakers to introduce domestic measures limiting or banning certain technological implementations, to our knowledge no national regulatory entity has relied upon this provision to prohibit risky and intrusive technological implementations. In addition, the police-friendliness of the LED (analysed in section 5), which is moreover permissive regarding the processing of biometric data, might heavily interfere with the right to privacy, especially in an era when law enforcement databases may be expanded to include more and more sensitive data (like facial recognition data; see: EDRi, 2021). The situation becomes acute where law enforcement is directly granted access by private firms to sensitive items of information in the absence of a clear legal basis, as well as where police enjoy more and more power and discretion in using technologies to process biometric data (as shown in sections 6 and 7 on Clearview and Bridges). Reflecting on the need for bright-line rules (an argument substantiated in section 8), this contribution submitted four recommendations (section 9) to enhance the protection of citizens by framing surveillance practices in a more satisfactory way:
Concreteness, in the sense of unambiguous clarity: laws with clear objectives, targeted at concrete technologies.
Precision regarding the EU’s consent scheme; the duty to inform the data subject; function creep; and certain duties/prohibitions, such as bans on using technologies to unduly discriminate against citizens.
Bright-line bans on certain surveillance practices, which can reach the level of unconditionality.
Practical organisation of remedies, meaning detail on how to exercise the right to an effective remedy and the combination of several legal fields (like competition law) to exercise this right more effectively via different possible judicial avenues.
By looking at various jurisdictions (such as the US, referred to above), it seems that there is a common interest in regulating surveillance-related practices and technologies. Although principles ranging from accountability and transparency to reason-giving appear present in different laws and jurisdictions, some regulators, like those in the US, have gone further to describe in more detail how to apply such principles to concrete technological contexts. In light of the above considerations, and given that the EC has recently been urged to permanently ban biometric mass surveillance practices, the EU could take inspiration from jurisdictions like the US and set out some bright-line rules to protect its citizens from foreseeable harms. After having established a broad regime that could, at least in theory, cover any specific or general-purpose technology, the EU could further opt for specific rules on concrete technologies with a view to tackling urgent (including near-term and practical) issues posed by contemporary implementations that threaten the fundamental rights and freedoms of citizens.
It is clear from the foregoing, we believe, contrary to some commentators,46
For example, in a 2018 publication, Meyer (2018) advocates the supremacy of EU data protection laws (compared to the US regime). In our opinion, the arguments set forth are inadequately substantiated: alleged ‘slapping’ with ‘stratospheric fines’ can also occur in the US (and may hardly work as a counter-incentive in the case of Internet giants engaging in serious breaches); the risk-based perspective (adopted by the EU legislator in data protection areas) is often emphasised over the human rights approach claimed by Meyer; and, on the Nazi argument, one may hardly draw linkages between the ‘collective’ memories of war victims and their contribution to the drafting of the GDPR.
First, the EU legal regime on surveillance could become honestly prohibitive. It could identify risks that might materialise, and it could say with clarity ‘no’ to technologies whose use can be dangerous and harmful in specific high-risk areas. Neither the GDPR nor the Law Enforcement Directive adopts such an approach. These EU legal instruments probably fail to detect ‘bad’ uses of technologies that should be prohibited and to distinguish them from others.47 In this regard, see Štitilis and Laurinaitis (2016, pp. 323–325), where the idea of banning dashcams (in the EU context) is almost laughed at. See also Bennett and Grant (1999, Chapter 2, p. 39), discussing fair information practice norms that may fail to offer sufficient criteria to determine the ethical acceptability of processing operations. For the EU legal regime as a ‘legitimizing’ framework, as well as arguments for (and against) its ‘permissive’ nature, see Lynskey (2015, pp. 30 et seq.).
To conclude, in the EU, regulators have remained focused on the very law of ‘data processing’, instead of on citizen protection. It may be advisable to look at the broader picture. The issue of surveillance exercised by anyone (be it the state, firms or the very citizens) over anyone (including vulnerable groups, like kids) is not just a matter of data protection law. Rather, it is a fundamental question about the kind of society that people want to live in.49
In this regard, see: Klovig Skelton (2021).
For recommendations to open up the list of sensitive data, see: European Parliament (2021).
As aptly put by Snowden (n.d.), ‘arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say’.
Acknowledgments
Supported by the Luxembourg National Research Fund FNR PRIDE17/12251371.
The Open Access publication of this paper was supported by the Panelfit project. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 788039. This document reflects only the author’s view and the Agency is not responsible for any use that may be made of the information it contains.
Authors biographies
Paul De Hert’s work addresses problems in the area of privacy & technology, human rights and criminal law. A human rights approach combined with a concern for theory is the common denominator of all his work. In his formative years, De Hert studied law, philosophy and religious sciences (1985–1992). After a productive decade of research in areas such as policing, video surveillance, international cooperation in criminal affairs and international exchange of police information, he broadened his scope of interests and published a book on the European Convention on Human Rights (1998) and defended a doctorate in law in which he compared the constitutional strength of eighteenth- and twentieth-century constitutionalism in the light of contemporary social control practices (‘Early Constitutionalism and Social Control. Liberal Democracy Hesitating between Rights Thinking and Liberty Thinking’ (2000, Promoter: Prof. dr. Bart De Schutter (VUB)). De Hert has (had) a broad teaching portfolio. Past: ‘Human Rights’, ‘Legal Theory’, ‘Historical Constitutionalism’ and ‘Constitutional Criminal Law’. Currently, at Brussels, ‘Criminal Law’ and ‘International and European Criminal Law’, and at Tilburg University, ‘Privacy and Data Protection’. He is Director of the Research Group on Human Rights (FRC), Vice-Dean of the Faculty, and former Director of the Research Group Law Science Technology & Society (LSTS) and of the Department of Interdisciplinary Studies of Law. He is a board member of several Belgian, Dutch and (other) international scientific journals, such as The Computer Law & Security Review (Elsevier), The Inter-American and European Human Rights Journal (Intersentia) and Criminal Law & Philosophy (Springer). He is co-editor-in-chief of the Supranational Criminal Law Series (Intersentia) and the New Journal of European Criminal Law (Sage).
Since 2008 he has edited with Serge Gutwirth, Ronald Leenes and others annual books on data protection law (previously with Springer, now Hart) that, judging by sales numbers, quotations and downloads, attract a massive readership and have contributed to creating the legal, academic discipline of data protection law. De Hert is now series editor of The Computers, Privacy and Data Protection series, published by Hart.
Georgios Bouchagiar is a doctoral researcher in criminal law and technology at the University of Luxembourg and the Free University of Brussels. He holds a Law degree (Athens Law School 2011), a Master of Science degree in Information Technology (High Honours, Ionian School of Informatics and Information Science 2018) and a Master of Laws degree in Law and Technology (With Distinction, Tilburg Institute for Law, Technology, and Society 2019). After an 8-year period of practicing information law, he entered academia. Since 2018, his professional experience has included tutoring and lecturing on Information Law and General Principles of Law (Ionian University 2018); research on Information Law and Distributed Ledger Technology (University of Amsterdam/University of Antwerp 2018); and practice on facial recognition and spying technologies (Tilburg University 2019).
