Abstract
This article examines the role of European data protection law, specifically the General Data Protection Regulation (GDPR), in enabling ‘welfare (state) dystopias’. Numerous high-profile cases across Europe and beyond, including the Australian Robodebt scandal, as well as the Dutch SyRI and Childcare Benefit Scandal cases, exemplify how digitalised welfare initiatives can lead to rights violations. The article argues that the GDPR's deferential stance toward national legislation has turned a blind eye to high-risk data processing activities in a social security context, creating a legal framework that leaves affected citizens without sufficient protection. Focusing on three GDPR principles – lawfulness, data subject rights and purpose limitation – the article identifies how gaps in these protections have facilitated data-driven injustices. To counteract these deficiencies, the article proposes four reforms, all of which take existing norms of the GDPR as their point of departure. The proposals include: (a) stronger safeguards against automated decision-making; (b) utilising the proportionality requirement found in the doctrine of lawfulness of the processing of personal data; (c) setting clear limits to deviations from the purpose limitation principle; and (d) substantiating the criterion of the essence of data subject rights. All four recommendations constitute specific proposals for a recalibration of how the GDPR is applied to data processing in the domain of social security, thereby allowing for stronger protection of the fundamental rights of beneficiaries.
Introduction
Keeping in step with the rest of society, the administration of social security has seen many ‘innovations’. Some of these have greatly increased efficiency; others have violated the fundamental rights of those who depend on it. In recent years, there have been so many cases of violation of the human rights of social security recipients, precisely at the hands of such ‘innovative’ automation efforts, that an investigation into these ‘welfare dystopias’ is called for. For those still on the fence, one could point to the lives ruined by innovative solutions such as MiDaS in the USA, Robodebt in Australia, the DWP and Post Office scandals in the UK, charges of algorithmic discrimination against the French Caisses d’allocations familiales, secretive Spanish artificial intelligence tools hoping to hunt down benefit fraudsters, automation horrors in Serbia, and the SyRI and Childcare Benefit scandals in the Netherlands (Eubanks, 2018; The Guardian, 2019, 2024; Amnesty International, 2023; Lighthouse Reports, 2023; Commonwealth of Australia, 2023; Quadrature du Net, 2024; Parlementaire Ondervragingscommissie Kinderopvangtoeslag, 2020; Parlementaire Enquêtecommissie Fraudebeleid en Dienstverlening, 2024). With over a dozen such scandals in the last five years, it is clear that something is rotten in the state of data-driven social security.
One potential source of these injustices has, however, evaded proper scrutiny: the role that data protection law has played in setting the scene for some of these dystopias. This article investigates the notion that the EU's data protection framework, specifically the way it plays out in the field of social security, can be considered an accomplice to the rise of data-driven bureaucratic injustice. It argues that the EU's General Data Protection Regulation, rather than offering sound protection against high-risk data processing activities, has facilitated welfare dystopias through the deferential nature of many of its core provisions. 1 In the specific domain of social security, the GDPR fails to offer legal protection against invasive or high-risk data processing activities; this contribution explores the options that the current framework offers for effective remedies.
Previous scholarship and addressing the gap in the literature
Recent scholarship has identified numerous factors that contribute to the dark side of what has been termed the ‘digital welfare state’ (Jørgensen, 2023; Maxwell, 2021; Rachovitsa and Johann, 2022; Van Toorn et al., 2024; Van Zoonen, 2020). Several works highlight the ways in which digitalised systems of social security administration have exacerbated the very conditions that they were intended to mitigate (Alston, 2019; Eubanks, 2018; Griffiths, 2024; Larsson, 2021; Rachovitsa and Johann, 2022; Van Toorn et al., 2024; Van Zoonen, 2020). Restricting the focus to legal scholarship, it has been argued that these dystopian situations are the result of an underestimation and underutilisation of fundamental rights, such as social and economic rights and established constitutional norms (Vonk, 2023a, 2023b, 2024; Wisman, 2024; Yeung and Harkens, 2023). Other analyses have highlighted the dangers posed by the rigidity of digitalised bureaucracies, specifically for society's most vulnerable subjects (Malgieri, 2023; Ranchordás, 2022, 2024a, 2024b; Ranchordás and Scarcella, 2021; Wisman, 2024). Additionally, the risks associated with the lifecycle of algorithms have been examined, as well as the lack of checks and balances in digitalised government services (Bouwmeester, 2023; Haitsma, 2023; Haitsma and Bouwmeester, 2023).
When considering the legal frameworks that have enabled the rise of digital welfare dystopias, one candidate stands out for its role and lack of scrutiny: data protection law. The manner in which the personal data of social security recipients have been processed has been a central feature of each of the digital welfare state scandals and, therefore, of their dystopian outcomes. The exact interplay between the legal fields of data protection and social security that made this possible has, however, received little scholarly attention. The few publications that did consider this nexus have mainly engaged either with the more abstract notion of data's ‘invisible power’, or with comparative studies of national adaptations made due to GDPR requirements – the main arguments have concerned the relative strengths and weaknesses of various national and sectoral legislative responses to data protection law requirements (Choroszewicz and Mäihäniemi, 2020; Gantchev, 2019; Wisman, 2024).
The preliminary conclusions from this research do, however, suggest a common thread: a warning that the European Union's data protection law might not offer adequate protection in the field of social security (Choroszewicz and Mäihäniemi, 2020; Enqvist, 2024; Gantchev, 2019; Wisman, 2024). Scholars have proposed enacting national legislation that is much more explicit about the limitations of data protection rights for welfare recipients and have urged judges to assign more weight to the vulnerability of such recipients when these are parties to court proceedings. The most commonly shared conclusion is a call to formulate more actionable and enforceable safeguards, plugging the holes in the legal safeguards set out in data protection law in the specific domain of social security (Choroszewicz and Mäihäniemi, 2020; Enqvist, 2024; Gantchev, 2019).
It is precisely on this point that this contribution attempts to carry the discussion forward. It takes a closer look at the crossroads of data protection law and social security law, supplementing the previous comparative analyses with an in-depth analysis of GDPR provisions and the specific shape that these provisions take in the domain of social security. It investigates how a reliance on the legal framework of EU data protection law to protect individuals against data harms in social security has made the emergence of welfare dystopias possible, and how its shortcomings can be addressed within this framework.
The remainder of this contribution seeks to examine these two questions as follows. First, three core criteria of the GDPR are analysed, focusing on the way that they play out in the specific area of social security. These doctrines are the following: the lawfulness of the processing of personal data; the rights of data subjects; and the principle of purpose limitation. For each of these, a short description of the legal framework is given, followed by an analysis of how the deferential attitude of the GDPR towards Member States has facilitated the rise of welfare dystopias. Having identified the limitations of these core provisions, we then explore four specific remedies, before providing a conclusion.
How European data protection law facilitates welfare dystopias
The legal framework of lawfulness and how it can facilitate welfare dystopias
Having briefly considered the many recent examples of welfare dystopias and some of the scholarly reflection thereon, it is now time to turn our attention to the legal framework of European data protection law. The focus of this section is on analysing the three criteria mentioned above and discerning the vulnerabilities in their provisions that have made the data-driven harms in the context of social security possible. First, we will focus on the criterion of lawfulness and its normative effect on data processing in the specific domain of social security. After a brief discussion of the relevant legal framework of this GDPR principle, we analyse its relative weakness in stopping the occurrence of welfare dystopias.
Data protection law requires a legitimate justification for any use of personal data. In the GDPR, this principle is reflected in the criterion of the lawfulness of the processing of personal data. 2 This criterion requires any use of personal data to be based on one of the six legal grounds listed exhaustively in Article 6 GDPR. If none of these six legal grounds allow for a particular use of a person's information, that use is unlawful. In the context of social security and the digital welfare state, three of these grounds are particularly relevant: firstly, consent of the person involved to the processing of their personal data; 3 secondly, if the use of the personal data is necessary for compliance with a legal obligation; 4 and thirdly, if the use of the personal data is necessary for the performance of a task carried out in the public interest, or in the exercise of official authority. 5
The current legal framework offers limited protection in social security contexts, for several reasons, the first of which is how difficult it is to obtain valid consent. 6 Consent under the GDPR must be specific, unambiguous, informed and freely given. 7 In social security settings, however, this standard is unlikely to be achieved, due to the inherent power imbalance between the public authority involved and the beneficiary. For consent to be valid, data subjects must have a genuine choice. If individuals feel compelled or anticipate negative consequences of refusing consent, their consent cannot be considered freely given. 8 In social security, beneficiaries often face situations where refusal could harm their access to essential support. This makes true consent practically unattainable, thereby rendering it an unsuitable legal basis for data processing in this domain. 9
It is therefore far more likely that any processing of personal data in a social security context relies on one of the other two legal grounds mentioned: using the data to meet a legal obligation or to execute a task in the public interest. In this regard, the GDPR requires the aforementioned necessity, and that the basis for using either of these grounds be laid down in Union or Member State law – which could even be case law. 10 These criteria for the use of these legal grounds are easily met in the context of social security, because the field is already highly regulated by Member State law. Any Member State that engages in the type of data processing operations that risk creating welfare dystopias will have plenty of laws regulating (e.g.) what payments are made to whom and for what reason. If, in particular, we consider that the GDPR requires merely ‘the basis for the processing’ to be laid down by law, we can see that even social security legislation that does not specifically address data protection can still function as such a basis. 11 The occurrence of digital welfare state dystopias is in no way stopped by the demand that a ‘basis’ for processing operations must be found in an applicable law.
Legislation on welfare fraud detection illustrates how this deferential attitude of the GDPR towards Member State legislation in the doctrine of lawfulness of processing can result in dystopic scenarios. 12 If we imagine legislation that obliges a party to do ‘whatever it takes’ in the fight against social security fraud, the legal framework of Article 6 GDPR as such offers very little protection: there is a legal obligation, extensive processing operations can be necessary to meet its demands, and its basis is laid down by law. 13
Dutch legislation on combatting social security fraud at times comes quite close to this approach. We can see the boundless ambition of the public authorities to use citizens’ personal data in this domain if we look at the central clause in the Dutch Act on data transfers in social security: ‘At the request of [the two relevant public authorities for the administration of social security], the Inspectorate, or the Minister, all persons must freely provide all data and any information that is necessary for the execution of this law, its secondary legislation, or any other law.’ 14
Another interesting illustration is the Childcare Benefit Scandal (‘Toeslagenschandaal’). At some point in the scandal, the Dutch Data Protection Authority did issue a fine, specifically for unlawful use of personal data – but it had to tie itself in knots to do so. The social security payment at the core of the scandal is an income-dependent subsidy for parents with children in daycare. It is administered by the Dutch tax authorities through a system of advance payments, some of which can be claimed back if too much subsidy is paid over a certain period of time – if, for example, the parents’ income was higher than previously estimated. 15 The callous legislation had therefore provided the tax authorities with an obvious legal ground to process information: about eligibility, about the correctness of the subsidies that were received, about whether money should be claimed back, and when cases should be considered as fraud.
It is therefore enlightening to see the reasoning of the Data Protection Authority on fining the tax authorities in this specific case, as it had to address the notion of the lawfulness of the data processing. The DPA confirmed that the tax authorities clearly had a legal ground for processing data for fraud detection and for sanctioning. 16 The fine therefore focuses on the procedures followed by the tax authorities, specifically the use of risk-scoring algorithms. The DPA argued that it was the discriminatory choice of (having more than one) nationality as a risk factor that transgressed the boundaries of necessity, and that this rendered the processing unlawful. 17 The problem with this line of reasoning, rather than, say, voiding discriminatory data-processing operations because of their unconstitutionality, is that everything hinges on what Member State legislation stipulates as being necessary (Wisman, 2024). This means that the hypothetical ‘whatever it takes’ legislation could well include devastating processing activities within the bounds of what it considers necessary in order to comply with its obligations. In the context of social security, the GDPR principle of lawfulness therefore does little to prevent welfare dystopias, due to this dependency on Member State legislation to provide it with the necessary content.
The legal framework of data subject rights and how it can facilitate welfare dystopias
The second core GDPR criterion to consider when looking at the relationship between the GDPR framework and the occurrence of welfare dystopias, is the notion of ‘data subject rights’. These rights are listed in Chapter III of the GDPR and are granted to people in their capacity as data subjects. They include rights to information and transparency, 18 access to information about the use of personal data, 19 the right to object, 20 and a right not to be subject to automated decision-making. 21 This section takes a closer look at the legal framework of some of these rights and analyses the strength of the safeguards that they provide against data-driven harms in social security. It is argued that although these articles sound promising, their application to the specific context of social security is actually less favourable. Again, we can discern a largely deferential attitude of the GDPR towards Member State legislation, stripping the regulation of much of its protective value.
A first example of this can be seen in the clauses on the right to information and to transparency. Articles 12 to 14 GDPR unambiguously list the information that the party responsible for the use of the data must provide to the individual. 22 Article 14 GDPR specifically states that an individual must be informed whenever an entity has obtained their personal data from another, or intermediary, party. In that situation, that entity has to inform the person concerned about, for example: the identity of the mediating organisation, what categories of data they have received, and the purposes and the legal basis for the processing. The final paragraph of this article, however, states that these obligations do not apply when Union or Member State law lays down rules governing the movement of, or secrecy about, data that is being transferred. The only requirement applicable to such a law (that allows the disapplication of the right to information) is that it provides appropriate measures to protect data subjects’ legitimate interests. 23 Social security is a fortiori a field containing many rules on the transfer of personal data, meaning that the data subject's right to information in Article 14 GDPR can largely be disapplied in this context. Examples such as SyRI and the Childcare Benefit Scandal in the Netherlands, and the allegations against the French CAF, showcase the lack of power of this provision when faced with public authorities that insist on the secrecy of their processing operations, for reasons that they consider legitimate for their own purposes. 24 An important question that remains in this regard, and which is especially pertinent throughout the field of social security, is who gets to decide which interests are legitimate and how they should be balanced.
A similar example of a large gap in the protection of individuals’ rights due to a deferential attitude towards Member State law relates to the right of the data subject not to be subject to automated decision-making. Article 22 GDPR grants the data subject the right not to be subject to a decision that is based solely on automated processing if this produces legal effects concerning them, or similarly significantly affects them. 25 The question of what minimal degree of human involvement in automated systems would suffice to circumvent this right has given rise to much discussion, but has recently also been addressed by the European Court of Justice in its 2023 Schufa judgment. 26 The case revolved around the credit rating agency Schufa, whose services included providing its clients – companies supplying credit – with risk-scores of citizens who applied for credit. The court held that the automated ways of establishing these scores already constitute automated individual decision-making, if a recipient of that score ‘draws strongly on that probability value’ to alter its contractual relation with the citizen. 27 This elaboration from the CJEU has markedly increased the level of protection offered by Article 22(1) GDPR.
When discussing whether the legal framework of data subject rights has enabled welfare dystopias, another exception to Article 22 GDPR must be addressed, which is much more pertinent to the specific types of data processing in the domain of social security. The text of Article 22 GDPR contains an exception to the right not to be subject to automated decision-making, whenever such decisions are authorised by Union or Member State law – so long as this law also lays down suitable measures to safeguard the data subjects’ rights and freedoms. 28
The question is, then, what are these ‘suitable measures’ which must be contained in the law that allows the automated decision-making? In an interesting turn of events, the CJEU made this notion much more onerous. The court stated that such measures must include not only obligations to use appropriate statistical modelling, which feels rather obvious, but also to: ‘implement technical and organisational measures appropriate to ensure that the risk of errors is minimised and inaccuracies are corrected, and secure personal data in a manner that takes account of the potential risks involved for the interests and rights of the data subject and prevent, inter alia, discriminatory effects on that person. Those measures include, moreover, at least the right for the data subject to obtain human intervention on the part of the controller, to express his or her point of view and to challenge the decision taken in his or her regard.’ 29
The final example of the protection provided by the GDPR's data subject rights is Article 23 GDPR, which confirms the Swiss cheese nature of this protection, despite the Schufa ruling. It is entitled ‘Restrictions’, and allows Member States to limit any of the data subject rights in Articles 12 to 22 GDPR by means of a ‘legislative measure’, subject to a proportionality test. The measure must ‘respect the essence of the fundamental rights and freedoms’ and be ‘a necessary and proportionate measure in a democratic society’, and the legislative act must contain specific provisions on certain points. This proportionality test therefore guards any and all of the data subject rights in the GDPR. The result is that any such right – even the right not to be subject to automated decision-making, which was considerably strengthened by the Schufa case – will only ever be as strong as the Member States’ and the CJEU's conceptions of the proportionality test allow it to be. This has been interpreted both as an Achilles heel of the system protecting data subject rights (Damen, 2023), and as an opportunity par excellence to set truly substantive (constitutional) norms on what such an essence should look like (Wisman, 2024). For now, it is unclear what this ‘essence’ entails when applied to data processing activities in the field of social security.
This lack of clarity regarding the EU data protection norms and how far they really lay down rules for controllers engaging in data processing operations for social security purposes is especially worrying if we look at the track record of public authorities involved in welfare dystopias. The lack of transparency, specifically of the public authorities tasked with deciding on eligibility and reclaiming of social security payments, has been noted as one of the crucial elements in Australia's Robodebt scandal (Carney, 2018, 2019, 2020; Commonwealth of Australia, 2023). The SyRI case in the Netherlands revealed government agencies engaged in data pooling and risk-scoring in the fight against social security fraud that were unwilling to share the relevant information regarding their algorithms, even with the courts to which they were summoned. 30 In perhaps the most infamous Dutch case of welfare dystopia, the Childcare Benefit Scandal, the executive branch refused to share information specifically requested by Parliament; not just concerning the technical details of the risk-scoring algorithms, but also the analyses performed by the lawyers within its own civil service (Besselink, 2021; Parlementaire Ondervragingscommissie Kinderopvangtoeslag, 2020; Zijlstra, 2021; Parlementaire Enquêtecommissie Fraudebeleid en Dienstverlening, 2024). 31
In France, the non-governmental organisation La Quadrature du Net has been involved in a struggle against the use of algorithmic risk-scoring by one of the nation's most important administrators of social security, the Caisse d’allocations familiales (CAF). It took them years to get hold of information about the inner workings of the risk-scoring algorithms that the CAF uses to sift through millions of cases in its investigation of social security fraud. Interestingly, when the CAF did eventually provide the source code of (legacy) risk-scoring algorithms, La Quadrature du Net argued that this scoring was discriminatory and showed ‘the dystopic aspect of a surveillance system allocating “suspicion scores” to more than 12 million people’ (Quadrature du Net, 2024). 32 The CAF, faced with complaints to the French Data Protection Authority and investigations by one of France's défenseur.e.s des droits, 32 argues that its profiling practices are perfectly legal (Geiger et al., 2023; s.a. 20 Minutes, 2023).
The lack of actionable and enforceable rights for those affected by processing operations carried out by the public authorities has been at the core of many welfare dystopias. The current legal framework for the second of the three criteria analysed in this contribution, data subject rights, thus also does little to stop welfare dystopias.
The legal framework for the purpose limitation principle and how it can facilitate welfare dystopias
Having considered lawfulness and data subject rights, this section focuses on the third core criterion of data protection law: the principle of purpose limitation. This principle sets limits on using, for purpose B, personal data which have originally been gathered for purpose A. By setting boundaries on the re-use of personal information, it is one of the cornerstones of the fundamental rights to respect for private life and to data protection (Council of Europe, 1981; European Data Protection Board, 2013; Gutwirth et al., 2016; Wisman and Tijm, 2022). This criterion is also particularly relevant for data processing activities in the field of social security, where information is frequently transferred between different domains, such as employment, taxation and local governments. Understanding the legal framework of the GDPR principle of purpose limitation in this field is therefore vital to grasping its role in the emergence of welfare dystopias.
A small library could be filled with analyses of the complicated, multi-layered legal framework of this core principle of data protection law (Brouwer, 2011; Damen, 2021; Damen and Prins, 2024; Drechsler and Vogiatzoglou, 2023; Koning, 2020; Von Grafenstein, 2018; Wisman and Tijm, 2022). Very briefly summarised, the framework operates on four levels. The first level states the general principle that, as noted above, data should not be used for purposes other than those for which they were gathered. 33 The second layer contains the criteria to be used to ascertain the compatibility of a new purpose with the original one. 34 The third and fourth levels contain exceptions for processing operations that cannot pass the compatibility test of the second layer: either there is valid consent of the data subject, or the use of data is mandated by law and passes a proportionality test. 35 This final option allows the use of personal data even when that use violates the general principle, fails the compatibility test, and lacks the data subject's consent. The condition that has to be met for this final exception to apply is that the use of the personal data is based on a European Union or Member State law that ‘constitutes a necessary and proportionate measure in a democratic society’ (Damen and Prins, 2024; Eskens, 2022). 36
It is this final proportionality test that we now focus on, because it contains the key to understanding the legal safeguards offered by the GDPR in the field of data-driven social security practices. It is here that we can see the normative effect of the GDPR on the room for manoeuvre for using personal data for completely different purposes than those for which they were originally gathered.
In March 2023, the CJEU handed down a judgment on this matter in Norra Stockholm Bygg and elaborated slightly upon the proportionality test in Article 6(4) GDPR. 37 The claimant in this civil case challenged a contractor’s invoice and demanded to see the register of the construction workers’ working hours that the contractor was legally obliged to keep, in accordance with tax laws. The defendant refused, citing the workers’ right to privacy and to data protection. Because the workers’ personal data had been collected for tax law compliance purposes, but were now being requested in a discovery request based on the civil procedural code, the Swedish supreme court referred this question about the principle of purpose limitation to the CJEU. Specifically applied to this case, the question arose whether, and to what extent, the Swedish civil procedural right to procure evidence from various sources could be said to be proportionate in light of Article 6(4) GDPR.
In its judgment, the court confirmed that the GDPR applies to the case and that a test of the proportionality of a deviation from the principle of purpose limitation was therefore called for, in accordance with Article 6(4) GDPR. Before referring the actual balancing exercise back to the Swedish supreme court, the court did elaborate ever so slightly on the proportionality test, by adding the following three considerations. It stated that the national court, when performing the balancing exercise, must:
determine whether or not the disclosure of the data is ‘adequate and relevant for the purpose of attaining the objective pursued by the applicable provisions of national law’; determine whether the objective cannot be achieved by less intrusive means; and at all times take into account the principle of data minimisation. 38
Similarly to the legal frameworks for lawfulness and data subject rights, the rules pertaining to purpose limitation again show a large degree of deference to Member State legislation. When faced with the question of proportionality in light of Article 6(4) GDPR, the focal point remains the Member State legislation and the self-declared objectives that it seeks to attain. This leads to perfectly reasonable outcomes in the majority of cases, as the Norra Stockholm Bygg case illustrates for civil procedural law. The reason why such deference in the criteria of the GDPR and in the judgments of the CJEU risks creating welfare dystopias is that national social security legislation is of a completely different order. Many Member State lawmakers have shown that their conceptions of the proportionality of surveillance, of violations of private life, and of data protection rights are very different whenever it is beneficiaries of social security whose rights are in the balance. One of the main objectives of these national laws seems to be absolute rather than proportionate: to make sure that not a single potential fraud case slips through unnoticed.
We can take the aforementioned central clause from the Dutch Act on data transfers in social security as an illustration of a mindset that would be unthinkable in many other areas of law, stating as it does that ‘everyone must provide all information necessary for the execution of any law’. 39
Similar examples of boundless ambition can be seen in the UK's Social Security Administration Act and Fraud Act, where the categories of persons that can be targeted by the law are ‘restricted’ to: ‘any person who is or has been an employer or employee (…); any person who is or has been a self-employed earner (…); any person who by virtue of any provision made by or under that Act falls, or has fallen, to be treated for the purposes of any such provision as a person within paragraph (a) or (b) above.’ 40
How to improve the level of protection: four proposals to strengthen the GDPR safeguards
Introduction
The previous paragraph demonstrated how three of the core doctrines of European data protection law fall short when applied to the specific context of social security. More robust safeguards are therefore required if we are to prevent welfare dystopias from occurring in the future. The question is: which safeguards, and why those? There are many arguments to be made about norms that have been violated in particular cases, whether of administrative law, constitutional law, or of a more political-philosophical nature (Brinkman and Vonk, 2022; Carney, 2018, 2019; Haitsma and Bouwmeester, 2023; Ranchordas and Scarcella, 2021; Vonk, 2023a, 2023b; Yeung and Harkens, 2023). This contribution seeks to establish a connection between some of the abstract norms of data protection law and concrete, actionable advice on ways to remedy the situation.
In the spirit of the French expression ‘reculer pour mieux sauter’, the proposed safeguards are linked to core constitutional and political principles that have been violated in the various cases of welfare dystopia. These core principles of modern liberal democracy are: a limit to the powers of the government vis-à-vis its citizenry possessing fundamental rights; transparent exercise of these (limited) powers; and a robust system of checks and balances. These abstract norms are not the core focus of this contribution, but serve as a lens through which to look at potential improvements to the specific legal framework in place. In this section, four improvements to the existing legal framework of European data protection law are explored, as well as their potential to strengthen the safeguards for data subjects in social security. 41
Safeguard I: Automated decision-making and Article 22 GDPR
The first such proposal focuses on strengthening the rights of the data subject. In light of the two points discussed earlier, a first specific improvement can be made regarding the right to transparency of social security beneficiaries. Currently, Article 14 GDPR grants the authorities a very generous exemption from the duty to provide information, so long as the legitimate interests of the data subjects are safeguarded. Notwithstanding the interest of public authorities in having some level of secrecy in activities such as fraud detection, the clause on the legitimate interests of the data subjects should be interpreted as including a right to information about the investigative powers of these authorities, the various data sources from which they gather data, and basic information about the procedures they follow as soon as they consider behaviour ‘suspicious’. This information should be provided at the very least after an investigation is closed and before (final) decisions are taken.
A second avenue to protect data subjects against welfare dystopias is to integrate the Schufa judgment on the right not to be subject to automated decision-making into the domain of social security – and to properly enforce its application. Based on our knowledge of the cases of welfare dystopia, a first key avenue for improved protection of rights is to establish an inventory of the situations in which decision-making in social security ‘draws strongly’ on (automated) profiling practices. The second key avenue pertains to the notion of ‘suitable measures to safeguard the rights and freedoms of the data subject’. The right not to be subject to automated decision-making, or, alternatively, to at least be granted these suitable measures, can be strengthened simply by applying the Schufa judgment to the data processing activities that are prevalent in the domain of social security. Further examples of suitable measures, in addition to those mentioned in the Schufa ruling, are regular audits of algorithms and their embedding in organisational processes, assessments of their discriminatory impact, and improved access to human intervention (Eubanks, 2018; Haitsma, 2023; O’Neil, 2016; Wisman, 2024).
One example of a suitable measure tailored to the specific situation of social security recipients is to consider the relative impact of an automated decision. If a public authority takes the automated decision to reclaim €2500, it matters hugely whether this decision concerns a person's income tax on a six-figure salary, or their universal credit allowing them to live at subsistence level. This difference in impact can be codified into the decision-making procedures. The low-hanging fruit in this regard is to calculate the potential financial impact of an automated decision not in absolute terms, but relative to, for example, a person’s disposable income and the cost of living. The system of automated decision-making can filter out decisions that meet the threshold set in this regard and move those cases to human decision-making procedures. Failure to provide such a safety net, especially in areas where particularly impactful decisions are taken, ought to be considered a violation of the requirement to provide suitable measures to safeguard social security recipients’ rights and interests, in the sense of Article 22(2)(b) GDPR.
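The relative-impact filter described above could be sketched in code along the following lines. This is an illustrative sketch only, not an implementation proposed by the GDPR or by any authority discussed here; the threshold (a reclaim exceeding one month of disposable income) and all names are hypothetical assumptions chosen to show the contrast between absolute and relative impact.

```python
# Hypothetical sketch: route automated reclaim decisions to human review
# when their *relative* financial impact on the recipient is high.
from dataclasses import dataclass


@dataclass
class ReclaimDecision:
    amount: float                      # sum to be reclaimed, in euros
    monthly_disposable_income: float   # recipient's disposable income, in euros


def needs_human_review(decision: ReclaimDecision,
                       relative_threshold: float = 1.0) -> bool:
    """Escalate when the reclaimed sum exceeds the threshold expressed
    relative to one month of disposable income, rather than applying
    an absolute cut-off that ignores the recipient's situation."""
    if decision.monthly_disposable_income <= 0:
        return True  # no reliable income data: always escalate to a human
    return decision.amount / decision.monthly_disposable_income >= relative_threshold


# The same €2500 reclaim is routed differently depending on its impact:
high_earner = ReclaimDecision(amount=2500, monthly_disposable_income=8000)
benefit_recipient = ReclaimDecision(amount=2500, monthly_disposable_income=1100)

print(needs_human_review(high_earner))        # relative impact ≈ 0.31, below threshold
print(needs_human_review(benefit_recipient))  # relative impact ≈ 2.27, escalated
```

The design point is simply that the decision variable is a ratio, not an amount: the identical euro figure passes automated handling for one person and triggers human intervention for another.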
Safeguard II: The proportionality clause for lawfulness in Article 6(3) GDPR
The second tool in the legal toolkit, which can be used to fine-tune the balance between the exercise of public power and the rights of the individual subject to this power in a social security context, is the criterion of proportionality. This tried and tested principle aims to safeguard the appropriateness and necessity of the exercise of power and to guard against excessive government interference in relation to the policy goals it wishes to achieve (Alexy, 2002; Barak, 2012; Huscroft et al., 2014). The occurrence of welfare dystopias illustrates the need to rethink the balance between (‘invisible’) administrative powers and individual rights in the specific field of social security (Wisman, 2024). The legal framework of data protection law, despite its many exemptions, does offer concrete instruments for addressing this balance, since it contains several proportionality provisions that could be operationalised for this purpose. This paragraph takes a closer look at the first of the three proportionality clauses in the GDPR that will be discussed in this contribution, and explores how it can serve as an improved safeguard against the further spread of welfare dystopias.
The principle of data processing lawfulness is the first legal instrument to consider here. It contains a provision that could function as a safeguard for curbing potential excesses in the processing of social security data: the proportionality clause of Article 6(3) GDPR. This clause states that the law that serves as the basis for appealing to the legal grounds of public interest or the legal obligation, must ‘meet an objective of public interest and be proportionate to the legitimate aim pursued’. It therefore emphasises that, while an aim may be legitimate, legislators remain bound by the principle of proportionality. This provides an avenue for keeping excessive Member State provisions in check. Many of the cases of welfare dystopia have revolved around (automated) fraud investigations. Here, the notion of proportional legislation could draw on principles from criminal justice, where strict procedural rules limit investigatory powers – despite there being no doubt about the legitimacy of its goals. 42 The proportionality clause of Article 6(3) GDPR allows for the creation of a legal framework which clarifies the link between investigatory powers and the origins and severity of fraud suspicions. It can be applied as a tool to require differentiation in the scope and intensity of data processing, depending on levels of suspicion and severity, by considering blanket authorisations as disproportionate.
Safeguard III: The proportionality clause linked to the purpose limitation principle of Article 6(4) GDPR
The second proportionality clause that could serve as a potential safeguard against data-driven harms in social security is embedded within the legal framework of the principle of purpose limitation. As mentioned earlier, Article 6(4) GDPR permits data processing for purposes that differ from those for which the data was originally collected, provided that the processing is authorised by a Union or Member State law that constitutes a necessary and proportionate measure within a democratic society. We have previously discussed the current limitations of this provision, given the often severe nature of legislation governing data processing in social security. This paragraph explores ways to transform these limitations into a strength.
One solution would be to have more measured and temperate Member State legislation. Given the centrality of Member State law in both the criteria of the GDPR itself and the CJEU's case law on proportionality, limiting the scope for data processing operations through an act of parliament would of course be the ideal solution. Given the current political climate in Europe, however, legislative changes that voluntarily limit powers of the executive branch in favour of the rights of socio-economically weaker parties in society seem improbable for the foreseeable future. Looking to European and national courts, therefore, offers a more likely path toward strengthening data protection law safeguards in social security.
Court reviews of legislation facilitating data processing in social security contexts are commonly sought. Article 6(4) GDPR's proportionality criterion offers a structured basis for limiting processing activities that diverge significantly from the central function of social security administration. In this regard, it is essential to recall that the law's intent is to limit the use of data for purposes that are too far removed from the origins of those data. The proportionality clause applies only as an exception to this core rule. In order to do justice to this legal architecture, there has to be a limit to the deviations from the principle that the clause can allow. Establishing clear boundaries can help answer the question of where to draw this line.
One such limit, in light of the (dis)proportionality of deviating from the purpose limitation principle, could be banning transfers of personal data from public authorities to private entities when these primarily serve business interests (Damen, 2021; Lynskey, 2015; Tzanou, 2017). A second limit could be set on information requests made to medical professionals, including occupational health doctors and social security insurance physicians, in cases of work incapacity. Drawing a line in the sand is the clearest and most effective way of clarifying the application of the purpose limitation principle in the domain of social security and of increasing the protection it can offer against welfare dystopias.
Building on this, another closely related approach would be to restrict certain data processing activities, such as large-scale (‘big data’ and AI-driven) processing of data, covert data pooling, and requests for medical records, to criminal justice authorities alone. When investigatory teams trawl through the information of thousands of people at a time, or when access to people's medical information is sought for fraud investigations, operations have left the realm of administrative activities and have entered the fields of criminal justice and intelligence. 43 Not only does this ensure an improved understanding of the proportionality of deviating from the purpose limitation principle, it also ensures that high-risk data processing activities only take place within established frameworks of legal safeguards and, where necessary, under the supervision of an examining magistrate – rendering ‘invisible’ power (partly) visible and accountable (Wisman, 2024). 44
Safeguard IV: The proportionality clause of data subject rights in Article 23 GDPR
The final GDPR clause to be discussed in this contribution ties in with the first of the four proposed safeguards, because again it can serve to strengthen the rights of recipients of social security in their capacity as data subjects. It concerns the very same Article 23 GDPR mentioned in the earlier paragraph which considered how the legal framework for data subject rights could facilitate welfare dystopias. Where that paragraph focused on the weaknesses of its current form, this paragraph explores the ways in which this proportionality clause can be used to provide better protection against high-risk data processing operations in the field of social security. As discussed, the clause allows Member States to disapply any of the data subject rights of the GDPR, so long as the ‘essence of fundamental rights and freedoms is respected’. This raises the question of what such an essence looks like – for example, the essence of the rights to information and to transparency of the exercise of public power in a social security context.
In democratic societies, secrecy in public administration is typically restricted to specific situations and subject to independent judicial oversight. Yet, the emergence of ‘welfare dystopias’ highlights an alarming trend, with certain social security authorities now engaging in levels of secrecy comparable to criminal justice or intelligence agencies, but often without corresponding procedural safeguards (Besselink, 2021; Commonwealth of Australia, 2023; Damen, 2023; Haitsma and Bouwmeester, 2023; Parlementaire Ondervragingscommissie Kinderopvangtoeslag, 2020; Wisman, 2024; Yeung and Harkens, 2023). To address this imbalance, a concrete improvement could be to interpret the existence of transparency and accountability mechanisms in cases of secretive processing operations as essential to the fundamental rights of beneficiaries of social security. This is especially critical in high-risk, large-scale data processing, such as automated welfare fraud detection (Damen, 2023). In this context, again, respecting such an essence can be taken to mean that citizens are not to be subject to invasive surveillance – whether through data pooling or other monitoring techniques – without the oversight of an examining magistrate or investigative judge.
Conclusion
This contribution has taken a closer look at the legal framework of EU data protection law in the specific context of social security, in an attempt to better understand the ways in which this framework has allowed the emergence of welfare dystopias. A key take-away is that the GDPR, in its current form, does not offer adequate protection against the harms and injustices that are occurring in the specific context of data-driven social security administration. This is due to a combination of the EU's competences in this specific domain, the resulting deferential attitude of the GDPR towards Member State legislation, and the domain's particularly callous national laws. The result is an EU legal framework that offers a narrative on the protection of fundamental rights, but that fails to address the data processing risks and harms that occur in this specific context. Additional substantiations of the application of these European norms in the (national and sectoral) context of social security are therefore necessary in order to curb the occurrence of (data-driven) welfare dystopias.
This contribution has formulated four specific proposals to clarify, substantiate and improve the legal safeguards that European data protection law can provide against data-driven social security dystopias. The first of these proposals focused on the GDPR criteria of ‘suitable measures’ when disapplying the right not to be subject to automated decision making of Article 22(2) GDPR. The remaining proposals have explored the option of stricter interpretations of three proportionality clauses already existing in the GDPR: Article 6(3) GDPR concerning the lawfulness of the processing; Article 6(4) GDPR on circumventing the principle of purpose limitation; and Article 23 GDPR, requiring that the essence of fundamental rights remain intact when disapplying data subject rights.
Data-driven injustices in the domain of social security are neither merely anecdotal, nor are they inevitable. They are the result of the submission of the most vulnerable and least vocal members of society to experimental digitalised and automated bureaucracy, without proper guardrails in the form of adequate protection of their rights. If we are to stop welfare (state) dystopias from recurring, scholars of social security must keep working together to further research, understand, and address the broader societal attitudes and structures that have facilitated them – whether legal, cultural, or technical.
Footnotes
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship and/or publication of this article: this work was supported by a research grant provided by the Instituut Gak.
