Abstract
With the rise of algorithmic management, the deployment of AI surveillance has proliferated in the modern workplace. AI surveillance relies on advanced computational methods to draw statistical inferences about workers from their data. These inferences are subsequently used by employers to inform various organisational and managerial decisions. This commodifies workers into statistical entities, which objectifies and instrumentalises the value of human work as a series of data points for algorithmic analysis. The article explores how this transformation impacts the precarious role of privacy in the employment context, which already navigates the inherent informational imbalances that arise from the structural subordination of workers to employers and are further exacerbated by AI surveillance. It focuses on the remedies of EU law that instantiate specific notions of informational control as a core ingredient of privacy and data protection. This involves consideration of the General Data Protection Regulation, as well as the jurisprudence of Articles 7 and 8 of the EU Charter of Fundamental Rights, and Article 8 of the European Convention on Human Rights. The article argues that a greater emphasis on workers’ informational control consolidates existing privacy protections and mitigates the systemic risks of data-driven commodification.
Introduction
Surveillance has historically been regarded as contributing to the structural subordination of workers by concentrating information in the hands of the employer. 1 AI surveillance has intensified this imbalance through its use of predictive analytics, which grants employers near-limitless abilities to make statistical inferences about workers. These inferences can be used for all sorts of purposes, such as to assess the behaviour, performance, concentration levels, or even career trajectories of workers, the effect of which is to commodify workers into mere statistical entities, thereby reconceptualising the value of human work as abstractable data points. However, this often results in workers losing both factual and legal control over how their data is accessed and used by employers. It also signals the rise of surveillance capitalism as a prevailing economic paradigm, in which information is transformed into a valuable commercial asset for trade and monetisation. 2 Within this paradigm, privacy becomes complicated, if not compromised, as a legal concept and remedy.
To this end, the article studies the social acquis of EU privacy law to assess how it regulates the distribution of informational control between workers and their employers. By positioning its examination of privacy law within the remit of informational control, the article explores the effectiveness of current legal safeguards in mitigating the systemic risks of data-driven commodification that are associated with AI surveillance. The introduction is followed by four sections. Section 2 briefly lays out how AI systems differ from traditional surveillance systems. Section 3 explores how the General Data Protection Regulation (GDPR) captures many important aspects of informational control but does not fully address the operational complexities of AI surveillance or its impact on the rights and interests of workers. 3 Section 4 therefore considers a more rights-centric approach. It compares the privacy and data protection rights in Articles 7 and 8 of the Charter of Fundamental Rights (CFREU/Charter) with the right to private life in Article 8 of the European Convention on Human Rights (ECHR/Convention), and inspects the symbiotic development of these legal sources in the judicial dialogue between the Court of Justice (CJEU) and the European Court of Human Rights (ECtHR). 4 Section 5 concludes.
Distinguishing AI from traditional surveillance
AI surveillance typically describes a cluster of different computational techniques, including machine learning algorithms, rule-based systems, and natural language processing, that together generate statistical inferences. These come together as a multi-modal system that informs various managerial and organisational decisions on matters such as working conditions, resource allocation, and contract and wage negotiations. 5 At each stage of the data lifecycle, workers lose progressively more control over their data. From the outset, AI surveillance challenges workers’ abilities to provide free and informed consent, since it usually operates in the background, leaving many unaware of the fact that they are being monitored. These challenges continue at the processing stage, where many AI systems lack transparency and explainability, as well as at the decision-making stage, where workers are profoundly affected by their outputs.
Prior to the commencement of data processing, the AI system needs to collect input data that will run in its model. At this stage, employers are granted unprecedented access to the data of their workforce, often including geographical, physical, sensory or emotional data that is naturally sensitive, or otherwise becomes sensitive when processed. This is often sourced from the live monitoring of workers’ computer screens, keystrokes, social messages, and emails. 6 Similarly, biometric, sociometric, or GPS data can be collected from devices ranging from wearable health and fitness trackers to RFID tracking devices, smart glasses or phone sensors. 7 These can monitor workers’ behaviour patterns at all times of the day through advanced accelerometers, triangulation algorithms and Bluetooth devices. 8 Surveillance is therefore no longer tethered to the physical workplace but often portable, remote, and interoperable. 9 The same applies to the usage of AI-powered web cameras in remote working arrangements that process data retrieved from images of the worker's home and family. 10 With these applications, there are no limits as to the duration, frequency and time of monitoring and data processing, with the possibility that workers may be monitored around the clock, irrespective of whether they are on or off duty. 11
Data collection is typically followed by processing, where AI surveillance generates detailed profiles on workers by drawing statistical inferences from their data points to predict future actions and behaviours. Even if the type of data collected is not necessarily sensitive, the subsequent processing thereof may create sensitive data. For instance, deep learning algorithms such as recurrent neural networks are commonly used in natural language processing to evaluate social interactions on messaging platforms and emails, whether to predict workers’ emotions, feelings, or behaviour patterns, or to estimate their social affiliation or propensity to join a particular group or culture. 12 Similar results can be achieved with other deep learning methods, such as convolutional neural networks that use facial recognition techniques. 13 These processes can infer problematic correlations, particularly where they identify patterns that are spurious or otherwise influenced by biases in a dataset. 14 Emotion AI is notoriously controversial due to the high propensity of prediction inaccuracy. 15 More generally, prediction accuracy can differ amongst different social groups, raising additional discrimination issues. 16 For example, monitoring workers’ productivity by their speed of completing tasks and allocating work to them on this basis may discriminate against individuals who take longer to complete tasks because of an underlying health condition or disability. The question of how, if at all, the algorithm accounts for these adversities remains even more contentious.
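The task-speed example can be reduced to a few lines of purely hypothetical code, which makes the proxy effect explicit (the names, timings and scoring rule are all invented for illustration):

```python
# Hypothetical illustration: allocating work by completion speed alone
# systematically disadvantages workers whose tasks take longer for
# health-related reasons, even though "speed" looks like a neutral metric.
workers = [
    {"name": "A", "avg_task_seconds": 50, "has_condition": False},
    {"name": "B", "avg_task_seconds": 55, "has_condition": False},
    {"name": "C", "avg_task_seconds": 80, "has_condition": True},
    {"name": "D", "avg_task_seconds": 85, "has_condition": True},
]

# A naive scoring rule: faster workers get priority for new work.
ranked = sorted(workers, key=lambda w: w["avg_task_seconds"])
priority = [w["name"] for w in ranked]
```

The facially neutral metric ranks the workers with an underlying condition last on every allocation round; completion speed here functions as a proxy for the protected characteristic even though that characteristic never enters the model.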
Once the data is processed, it can be used in various organisational and managerial decisions, ranging from hiring, firing, pay, work allocation, task management, and supervision to other types of decisions, depending on the intentions of the employer. 17 How, and why, the algorithm correlates a variable with an indicium of a particular predictor is at the mercy of the programmer and, potentially, the idiosyncratic preferences of the employer. Take the case of wearable fitness trackers, which grant employers access to raw health data. Data of this kind, which includes biometric information such as heart rates, sleep patterns or activity levels, can determine whether a worker is fit or suitable for a physical task, as well as predict sickness, pregnancy, or time off requests. 18 These uses can also be repurposed by employers, such as to leverage bargaining powers, manipulate workers’ productive output, or restructure working arrangements. This implicates workers beyond the remit of privacy rights, with serious effects on their related rights to collective bargaining, non-discrimination, and fair treatment. 19 Most workers are left in the dark about these ulterior uses, or may have consented only to the original usage. Concerns like these prompted the Dutch Data Protection Authority to shut down a company's pilot scheme that required workers to wear Fitbits for data processing purposes in the Netherlands. 20
The promises and pitfalls of the General Data Protection Regulation
While the GDPR advances many important aspects of informational control, it does not account for the operational complexities of AI surveillance, which dilutes the practical impact of its regulatory protections. This becomes apparent when considering the two main applications of the GDPR to AI surveillance. First, the GDPR regulates when data processing is lawful, by reference to the general principles enshrined in Article 5 and the lawful basis criteria in Article 6 for non-special categories of data, as well as Article 9 for special categories of data. Second, the GDPR provides remedial rights to data subjects that can be enforced against the controller in Chapter 3. The rights most relevant to the abilities of workers to control their information are the right to erasure in Article 17, as well as the right not to be subject to solely automated decision-making in Article 22.
Lawfulness of AI surveillance
AI surveillance is permitted insofar as it has a lawful basis pursuant to the grounds set out in Article 6(1) and complies with the principles in Article 5(1) GDPR. The three provisions in Article 6(1) that are most relevant to the employment context are Article 6(1)(a), which applies where a data subject has consented to data processing; Article 6(1)(b), which applies where the processing is necessary to the data subject's performance of, or entering into, a contract; 21 and Article 6(1)(f), which applies where the processing is necessary for a legitimate interest pursued by the controller or a third party unless this is overridden by fundamental rights concerns. The principles in Article 5(1) go hand in hand with the lawful basis criteria. These are ‘lawfulness, fairness and transparency’ in Article 5(1)(a), ‘purpose limitation’ in Article 5(1)(b), ‘data minimisation’ in Article 5(1)(c), ‘accuracy’ in Article 5(1)(d), ‘storage limitation’ in Article 5(1)(e), and ‘integrity and confidentiality’ in Article 5(1)(f). Finally, where data processing involves special categories of personal data, businesses must be able to meet one of the exceptions of Article 9(2) GDPR for the processing to be lawful.
In relation to Article 6(1)(a), the Article 29 WP notes that workers are only seldom able to provide free and informed consent due to their contractual subordination to their employers. 22 There may be social or organisational pressures to opt in, or to abstain from opting out. In the context of AI surveillance, consent is even more problematic, particularly where data processing is highly complex or abstract, or where opaque algorithms are used. This has been recognised by the Italian Supreme Court, which found that a data subject's ability to consent is premised on there being sufficient transparency in the algorithmic decision-making process. 23 Although the decision was not originally based on them, it suggests that a similar conclusion could be derived from the general conditions for consent laid out in Article 7 and Recitals 32–33 of the GDPR.
Employers may contend that AI surveillance is indispensable for business operations in order to meet the criteria of Articles 6(1)(b) and 6(1)(f). 24 However, there are limits as to how far regulators, courts and oversight bodies will entertain these arguments as amounting to a lawful basis. The French privacy regulator recently fined a subsidiary of Amazon for its use of scanners that collected information associated with the identity of its warehouse workers. 25 Although this could in principle amount to a legal means of monitoring workers, the regulator held that no such basis applied in the case at hand, since the scanner calculated, inter alia, whether a particular individual processed a warehouse item within 1.25 seconds, or interrupted their task for longer than 10 minutes. This amounted to excessive data collection, contravening Articles 5(1)(c) and 6(1)(f). A similar finding can be seen in the SyRi litigation, which concerned the Dutch government's use of a fraud prediction algorithm and in which the data processing principles of the GDPR were discussed in the context of an alleged breach of Article 8 ECHR. The Dutch court found the predictive algorithm's reliance on huge pools of personal data, when making predictions, to contravene Articles 5(1)(c) and 6(1)(f). 26 A comparable decision has been issued by the Italian Garante, indicating that the partial anonymisation of data by the algorithm after collection will not suffice to legitimise AI surveillance. 27
This assumes that regulatory compliance can be achieved without any trade-off in the operation of the AI system, which is rarely the case. Some of the GDPR principles, particularly the principles of data minimisation and purpose limitation, are difficult to reconcile with the operational realities of AI surveillance, which often depend on large datasets, and the ability to repurpose data for evolving objectives.
For instance, Article 5(1)(b) requires that data is collected for a defined or specified objective. Yet, many of the computational methods of AI surveillance, such as machine learning, are designed to autonomously identify new patterns from a dataset that may not have been foreseen or anticipated, particularly where feature learning methods are deployed. 28 Although Article 5(1)(b) is thought to refer to the employer's stipulated intent, it is practically possible for such to become blurred. 29 For instance, imagine a company installs facial recognition technology on its premises to ensure that workers do not enter restricted access areas. Without even being programmed to do so, the system may identify particular movement patterns that can generate inferences about the activity levels of workers. These can subsequently be used to profile the workforce by associating activity levels with estimated productivity scores. This impacts Article 5(1)(b) and adds further complications to the issue of consent under Article 6(1)(b), since a worker may have consented to the facial recognition technology for security reasons, but not necessarily for performance or activity monitoring.
Similarly, Article 5(1)(c) GDPR is premised on the understanding that less is more when it comes to data processing. This is not the case for AI surveillance. Rather, these systems often rely on huge volumes of data to obtain the desirable predictive accuracy and reduce the risk of error. 30 Yet, a large dataset may violate the data minimisation principle. Paradoxically, a smaller dataset reduces the precision of the algorithmic prediction, which in turn increases the risk of Article 5(1)(d) being violated. The same applies to the anonymisation of data, which – as confirmed by the Italian Garante – is a constitutive component of the data minimisation principle. 31 However, anonymisation involves programming techniques such as randomisation, adding noise, or removing certain features from variables, which can obfuscate patterns in a dataset and affect the predictive accuracy of the model. 32 Additional complications arise from proxy variables, since these may reveal personal or sensitive categories of data by association, even if the data point itself is not sensitive. 33
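The tension between minimisation, anonymisation and accuracy can be illustrated with a deliberately simple, hypothetical sketch (the data, the 0.5 threshold and the nearest-neighbour model are all invented for illustration):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Invented toy task: label a worker's activity score as "high" (1) if it
# exceeds 0.5. A 1-nearest-neighbour model stands in for the AI system.
def make_dataset(n):
    xs = [random.random() for _ in range(n)]
    ys = [1 if x > 0.5 else 0 for x in xs]
    return xs, ys

def predict(train_x, train_y, x):
    # Copy the label of the closest training point.
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

def accuracy(train_x, train_y, test_x, test_y):
    hits = sum(predict(train_x, train_y, x) == y for x, y in zip(test_x, test_y))
    return hits / len(test_x)

test_x, test_y = make_dataset(200)
big_x, big_y = make_dataset(500)               # the data-hungry system
small_x, small_y = big_x[:10], big_y[:10]      # a "minimised" dataset
noisy_x = [x + random.uniform(-0.3, 0.3) for x in big_x]  # noise-"anonymised"

acc_big = accuracy(big_x, big_y, test_x, test_y)
acc_small = accuracy(small_x, small_y, test_x, test_y)
acc_noisy = accuracy(noisy_x, big_y, test_x, test_y)
```

With the full dataset the model classifies the held-out points almost perfectly, while both the minimised and the noise-anonymised datasets lose accuracy: exactly the trade-off over which Articles 5(1)(c) and 5(1)(d) pull in opposite directions.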
Rights of workers over AI surveillance
The right to erasure is a crucial aspect of informational control and can be found in Article 17(1)-(3), as well as Recitals 65 and 66 GDPR. However, Article 17 does not explicitly state what data – and aspects thereof – must be removed, which significantly impacts its practical effects in the context of AI surveillance. It is a moot point whether Article 17 simply requires a controller to delete the disputed data or whether it requires more invasive measures that may include overwriting variables or retraining the entire model. Individual variables rarely represent atomistic data points that can be extracted without leaving traces. Even if data is erased, the features the algorithm has learned from it remain in the model, especially with machine learning algorithms and neural networks, since these learn by adjusting their parameters to minimise the difference between target variables and the actual output variables. 34 Further issues arise from the stochasticity of different models, making it difficult to trace the significance a model assigns to any given data point. In addition, the removal of data may also endanger the integrity of the remaining data in the model. 35 Evidently, for Article 17 to provide a meaningful degree of control to workers, the right would require the training data to be amended, or a different training method that does not implicate the disputed data to be implemented. The outcome in this scenario would be equivalent to changing the input data. 36
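Why deleting a stored record does not erase its imprint on a trained model can be shown with a minimal, entirely hypothetical sketch (a one-parameter regression fitted by gradient descent; no real surveillance system works this simply):

```python
# Toy model: y = w * x, fitted by stochastic gradient descent on squared error.
def train(points, lr=0.01, epochs=500):
    w = 0.0
    for _ in range(epochs):
        for x, y in points:
            w -= lr * 2 * (w * x - y) * x  # gradient step for one data point
    return w

data = [(1.0, 1.0), (2.0, 2.0), (3.0, 9.0)]  # the last record is disputed

w_full = train(data)       # parameter shaped by all three records
del data[-1]               # "erasing" the stored record...
w_after_delete = w_full    # ...does nothing to the already-learned parameter
w_retrained = train(data)  # only retraining removes the record's influence
```

On the clean data the retrained parameter converges to 1.0 (the line y = x), whereas the original parameter stays inflated by the deleted outlier: deletion of the stored record alone leaves the model, and every inference it produces, unchanged.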
Other aspects of informational control can be identified in Article 22, which confers ‘the right not to be subject to a decision based solely on automated decision-making’. However, not all types of AI surveillance will be captured by Article 22(1)-(4), but only those involving a solely automated decision which produces legal or similarly significant effects. For instance, using AI surveillance to scan worker communications to provide real-time updates on the productivity levels - or even, in more invasive cases, on the concentration levels - of the workforce will not be captured by Article 22. But if AI surveillance scans communications to automate the managerial decision of who should be promoted, demoted or dismissed, then this will be captured by Article 22. 37 The protection of Article 22 is therefore contingent on the degree of automation of the algorithm. 38 If there is human intervention, Article 22 may not apply, even if such intervention amounts to a mere formality. However, the proviso must be added that recent CJEU decisions have taken a more expansive view of Article 22. 39 This avoids the requirement otherwise becoming an unfortunate escape route by which businesses can evade Article 22, considering that in such a context the algorithm may not be the sole decision-maker, despite performing the substance of the decision-making.
Towards a rights-based approach
These regulatory tensions underline the need for a more flexible approach that can adapt to the increasing computational capabilities and informational demands of AI surveillance, to ensure a meaningful protection of informational control. Evidently, the GDPR is a floor, not a ceiling, and requires a degree of contextualisation against the broader framework of data protection and privacy rights offered in EU primary law. 40 Among these sources, the Charter has a direct and binding influence on the GDPR, providing a framework to which the Regulation explicitly refers and adheres. 41 In particular, Articles 7 and 8 CFREU provide fundamental rights that confer privacy and data protections, with the latter detailing many significant aspects of informational control in its legislative text. In comparison, the Convention influences the broader human rights context within which the GDPR operates by offering a more expansive and adaptive approach to privacy in Article 8 ECHR, which has allowed the ECtHR to recognise a right to a form of informational self-determination in its case law. These rights are synergetic and undergo constant evolution in the jurisprudential dialogue between the CJEU and the ECtHR, thereby informing the progressive development of secondary data and privacy laws. 42
Articles 7 and 8 of the EU Charter of Fundamental Rights
The relationship between the GDPR and Articles 7 and 8 CFREU is pre-empted by the influence of the Charter on the 1995 Data Protection Directive, as is apparent in the seminal CJEU decisions in Google Spain and Google, Digital Rights Ireland, and Schrems. 43
These cases are evidence of the extent to which regulatory instruments are normatively positioned within the fundamental rights jurisprudence, as seen, for example, in Article 1(2) and Recitals 1 and 10 GDPR. 44
As observed by Advocate General Collins: ‘In the light of the objective set out in Article 1(2) of the GDPR to protect fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data, for as long as the conditions governing the legal processing of personal data under that regulation are fulfilled such processing meets the requirements of Articles 7 and 8 of the Charter’. 45
These limitations are to some extent reflected in the difficult application of the Charter in a case heard by the German Federal Labour Court, which concerned the issue of whether material from a video surveillance system that incriminated a worker could be presented by their employer in legal proceedings, or whether such presentation breached Article 17 GDPR. 47 When considering the Article 17(3) exceptions, the court balanced the employer's fundamental right to defend themselves before a court of law under Article 47(2) against Articles 7 and 8 CFREU, holding that the submission of the material would only be ‘unreasonable (disproportionate in the strict sense) if the monitoring measure were found to be a serious breach of Article 7 and Article 8 of the Charter’. Although the case was complicated by the interplay between the different Charter rights, the ruling nonetheless suggests that a ‘serious breach’ is required. It also demonstrates that Articles 7 and 8 CFREU often require consideration of competing social, economic or judicial interests that can impact their application. Although it was not invoked in the case itself, Mangan draws attention to Article 16 CFREU, which provides employers with the freedom to conduct a business. 48 This right may further restrict the application of Articles 7 and 8 CFREU in the employment context. Complications may also arise where required data or privacy protections generate financial burdens, such as compliance and administrative costs.
Evidently, none of the fundamental rights of the Charter are absolute by virtue of Article 52(1) CFREU, which permits limitations where the restriction complies with the principles of proportionality and necessity, and genuinely meets the objectives of legitimate interests recognised in EU law. 49 Article 52(1) highlights the operational distinctions between the CJEU and the ECtHR. While the ECtHR concentrates on singular human rights issues, the CJEU operates within a broader realm of EU law, and thus integrates issues of economic and political harmonisation as well as the functioning of the internal market into its legal enquiry. As Douglas-Scott observes, the need to integrate these overarching considerations into legal analysis may force a more nuanced interpretation of individual rights compared to the more expansive approach that might be taken if the rights were considered in isolation from the broader legal context. 50 This does not mean that the Charter is of no effect. The CJEU still draws heavily on fundamental rights when applying secondary laws. But the overall effect may resemble more of a compromise between the fundamental rights of individuals and the general demands of European integration. 51
Article 8 of the European Convention on Human Rights
Article 52(3) CFREU provides that the fundamental rights of the Charter correspond with the rights guaranteed by the Convention. 52 Article 7 CFREU corresponds directly to Article 8 ECHR. Although the Convention does not formalise a stand-alone right to data protection, the official explanatory text of the Charter notes that Article 8 CFREU likewise corresponds to Article 8 ECHR. Article 53 CFREU further stipulates that nothing in the Charter can be interpreted as restricting, or otherwise adversely affecting, Convention rights. Article 8 ECHR can therefore either inform the application of data and privacy rights in the GDPR, or otherwise be pursued as an independent ground.
Interferences with the right to private life
Article 8 provides a ‘right to respect for private and family life, home and correspondence’, which applies in the context of employment. 53 The ECtHR has established that this includes ‘the right to a form of informational self-determination, allowing individuals to rely on their right to privacy as regards data which, albeit neutral, are collected, processed and disseminated collectively and in such a form or manner that their article 8 rights may be engaged’. 54 The right to a form of informational self-determination recognises that privacy is an umbrella term that encompasses a range of sub-rights. 55 These include a right to be left alone, to disconnect, or to form social identities and relations outside of work. 56 However, central to these formulations is the presumption that individuals can control their data. This is fundamentally at odds with the realities of how AI surveillance operates. The right to a form of informational self-determination is therefore not only a welcome way of addressing this lacuna, but an emerging concept that acknowledges the general importance of human autonomy and dignity when assessing privacy interferences.
However, Article 8 may not capture all types of data protection issues, since privacy and data protection remain, despite their considerable overlap, formally distinct. 57 Borgesius and Bekkum to this extent suggest that ‘the right to the protection of personal data applies to a phone book, because that includes personal data. But being listed in a phone book does not necessarily interfere with the right to a private life’. 58 Yet, it is important to remember that the Convention is a ‘living instrument’, and thus should be interpreted in accordance with contemporary social standards. 59 The Convention, drafted in the 1950s, originated at a time when data protection issues had not yet evolved. With the advancement of modern technologies, the ECtHR has progressively expanded its interpretation of rights relating to data protection, adapting to the emerging challenges posed by technological developments in its case law. The ECtHR has to this extent declared that it will interpret Article 8 broadly where personal data is processed, which makes it foreseeable that the right applies to most cases involving serious data protection issues. 60
The successful adaptation of Article 8 to the evolving legal challenges of data protection within ECtHR case law evidences its capacity to keep up with the rapid technological advancements of AI surveillance, which have so far outpaced the GDPR. This can be seen in the application of Article 8 in SyRi, where the Dutch Court identified three factors as contributing to an interference with the complainant's rights. The first concerned the excessiveness of the data used by the algorithm. 61 The second concerned the opacity of the algorithm. 62 The third concerned the impact of the algorithm on the data subject. 63 One might wonder why the complainants in SyRi did not bring the legal challenge directly under the protections of the GDPR rather than as a complaint under Article 8 ECHR. 64 The reason becomes clear upon further inspection: Article 8 avoids the irreconcilable conflicts that emerge in the analytical dimensions of the GDPR. Rather than fixating on the mechanical issues of data processing, reframing the same matter in the language of privacy rights evades the inherent tensions of the GDPR data principles: the court need not weigh the trade-off between the size and anonymity of training data and predictive accuracy, but can instead assess the overall effects of AI surveillance on the rights and interests of workers.
After all, these technical implications are inherently infused with human rights implications. As UN Special Rapporteur Alston has observed, with ‘the absence of transparency about the existence and workings of automated systems, the rights to contest an adverse decision and to seek a meaningful remedy are illusory.’ 65 Article 8 addresses this challenge by focusing its legal assessment on the extent and seriousness of the alleged interference. This avoids the need to go down a technical rabbit hole in trying to decipher the inner workings of the AI surveillance system, where algorithmic opacity can become a de facto defence by inhibiting sufficient evidential proof of any detriment suffered by complainants. Instead, the application of Convention rights offers potential strategic advantages, since non-transparent or unexplainable AI systems are considered prejudicial to the respondent. In SyRi, the Dutch Court circumvented thorny evidential issues arising from the lack of transparency in the algorithm by interpreting this against the respondent when evaluating whether there was an interference with Article 8. 66 This aligns with the logic applied in the indirect discrimination case of Filcams v Deliveroo Italia SRL, where the Bologna Labour Tribunal held in favour of the complainants because the respondent refused to disclose how the disputed algorithm operated. 67
Legitimate aim, necessity and proportionality
Article 8(2) additionally requires consideration of potential justifications for AI surveillance, namely, whether it has a legitimate aim and is necessary and proportionate to the achievement of that aim. In Glukhin v Russia, the ECtHR found the use of live facial recognition technology to collect mass pools of biometric data from citizens participating in protests to amount to a ‘highly intrusive’ technology that could have a ‘chilling effect’ on lawful protests. 68
In finding the technology incompatible with Article 8, as well as Article 10, the court explained that: ‘A high level of justification is therefore required in order for them to be considered “necessary in a democratic society”, with the highest level of justification required for the use of live facial recognition technology.’ 69
Article 8(2) also requires any use of AI surveillance to be necessary and proportionate to the aim it seeks to achieve. At this stage, the amount, as well as the type, of data collected, and the inferences made thereof, are critical. In many cases, worker surveillance can be operationalised through less invasive measures that do not require AI systems at all, such as human oversight, periodic reviews, or self-reporting requirements. As the legal strategy in SyRi confirms, the proportionality and necessity stage warrants consideration of the GDPR as a benchmark for informing the application of Article 8(2). Again, it is important to emphasise that this will inform only the minimum standards of technical compliance, since Article 8 can impose a higher threshold of requirements for data processing where it implicates privacy. Hendrickx and Van Bever therefore use the term ‘cross-referencing’ to capture the fact that Article 8 not only serves as an interpretative guide for other sources of EU law but is also shaped by these sources. 74 The latter can, in particular, draw the new Artificial Intelligence Act into the remit of privacy protections. 75
The proportionality test will also consider the impact of the AI surveillance on the general rights and interests of workers, which requires close scrutiny of the inherent risks of AI systems, particularly those involving algorithmic bias, discrimination and stereotyping, in the legal assessment of Article 8(2). By comparison, this requirement is more functional than the Article 5(1) GDPR data principles; it is not so much the operation of the system that is at fault, but its implications for individual rights and freedoms. The profound impact of AI technologies on workers, not just in terms of their rights but also in terms of their general wellbeing, will consequently attract closer scrutiny than traditional measures of surveillance. Indeed, the ECtHR has consistently related Article 8 to the protection of human dignity and autonomy in different contexts, which cannot be achieved where the value of workers, as human beings, is conceptually reduced, by highly invasive computational processes, to mere data points. 76
Conclusion
With AI surveillance becoming standard practice, the reconstitution of workers as a data-driven commodity reflects the broader ideological shift from an economy that appreciates the intrinsic value of work to one that prioritises its statistical utility. Within this paradigm, the boundaries between technological innovation and human individuality become increasingly blurred, challenging traditional understandings of privacy. Consequently, informational control becomes a key safeguard to privacy, which workers lack where AI surveillance is used. EU law recognises various aspects of informational control within the tapestry of its data and privacy protections. However, the effectiveness of these protections varies. This article has shown that the safeguarding of informational control cannot be achieved by the technical provisions of the GDPR alone. Consideration must also be given to the broader framework of rights in Articles 7 and 8 CFREU, as well as Article 8 ECHR, for two reasons: first, they address the underlying causes and not just the effects of data-driven discrimination by responding to the impact of AI surveillance on the rights of individual workers; and second, they recognise that privacy is a concept that must be dynamically pursued. The latter, in particular, is paramount, since the computational capacities of AI surveillance are rapidly advancing and thus becoming progressively more invasive. This highlights the importance of developing an integrative approach to privacy that is innovative and adaptive to these technological progressions. The recognition of the right to a form of informational self-determination by the ECtHR is a critical step forward that has the potential to influence the CJEU's future interpretations and applications of the GDPR, as well as to empower workers with the ability to control their data, ensuring that they can navigate the realities of modern work without surrendering their rights and interests.
Footnotes
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
