Abstract
This article addresses the informational control powers of the state to detect social security fraud as one of the pillars supporting welfare conditionality in Western European states. It sheds light on the question of whether the repressive trend of vastly expanding conditions and sanctions attached to welfare benefits can also be observed in an unwarranted expansion of the adopted control powers of the government. The article begins by highlighting the importance of data protection law in the field of social security. It then provides a normative yardstick for assessing national control powers by analysing the normative criteria set by the EU data protection framework, more specifically with regard to the purpose limitation principle and the transparency rights of individuals. Three case studies are carried out on Germany, the United Kingdom and the Netherlands which investigate the conformity of the control powers of the welfare administration with the basic right to data protection. The article concludes by providing explanations for the diverging levels of protection in the examined countries and by recommending strategies for improving the data protection position of welfare beneficiaries.
Introduction
An important characteristic of the age of welfare conditionality is the tight link between welfare rights and claimant duties. This characteristic is, however, by no means a new phenomenon; claimant duties have always been part of social security systems in Europe. The right to social assistance was conceptually designed by the architects of the modern welfare state to be a conditional right which depends on the recipients’ willingness, and ability, to take up paid employment. 9 The age of welfare conditionality is, however, more clearly characterised by the profound shift that has been taking place in the balance between welfare rights and responsibilities in favour of the latter. This shift reflects a new view on the role of the welfare state which found its way into Western Europe in the last decade of the last millennium. Inspired by the Scandinavian experience, 10 the social democratic parties of the 1990s embraced Giddens’ ‘Third Way’ 11 strategy to welfare provision with the aim of ‘transforming the safety net of entitlements into a springboard to personal responsibility’. 12 In recent years, this process, which began as creeping conditionality, 13 has reached new heights and is now being criticised by academics for ushering in the rise of the repressive welfare state, with repressive trends locked into a spiral of increasingly stricter obligations and tougher sanctions. 14 Overall, these developments challenge the post-war idea of social citizenship which is fundamental to the conception of social security as a right. 15
From a legal perspective, welfare conditionality employs three techniques: 16 (intensified) claimant obligations, (stricter) sanctions for non-compliance and (wider) control powers for the government to monitor and investigate sanctionable behaviour. While the first two elements are enjoying growing academic attention, research on the control powers of the government remains scarce. The aim of this article is to examine whether the growing repressive trend which can be observed in the areas of claimant obligations and welfare sanctions is also reflected in an unwarranted expansion of the control powers of the government.
More specifically, the article critically addresses the administrative powers to link and analyse the personal data of welfare beneficiaries for the prevention and detection of welfare fraud. Section 2 sheds light on the importance of automated data processing for public administration and highlights the role of data protection in safeguarding the basic rights of citizens who are dependent on the provision of welfare benefits. Section 3 provides a normative yardstick by exploring some of the requirements that the EU data protection framework prescribes to data processing in the public sector for the purposes of welfare fraud prevention. Finally, Section 4 reports on three country-specific case studies on Germany, the United Kingdom and the Netherlands. The case studies evaluate the extent to which the adopted national control powers are in accordance with the basic right to data protection. The article concludes by elaborating on the differences between the levels of protection in the three countries and recommends strategies for improving the data protection position of welfare beneficiaries.
2. The importance of personal data (protection) in relation to welfare fraud
The rapid digitalisation of the public sector has increased the relevance of personal data for the modern welfare state. Generally, states have vast amounts of personal data at their disposal, spread across their administrative branches. Municipalities possess detailed personal records relating to the living situation of beneficiaries, tax authorities can reveal information about any (side-) income of individuals, and unemployment agencies have data on previous periods of unemployment. By processing the relevant personal data, the welfare administration can easily determine whether a recipient of social assistance has fulfilled the conditions attached to the benefit, or whether he or she has provided the administration with incorrect information. In practice, the welfare administration collects the necessary personal information from the relevant public bodies and links it together in order to analyse it with the help of algorithms. The automation of this process significantly increases the efficiency of the operation, which can be completed within a fraction of the time that would be needed to carry out a comparable analysis manually.
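The linking-and-screening process described above can be sketched in a few lines of code. The sketch below is purely illustrative: the register names, field names and the simple income-discrepancy rule are assumptions for demonstration purposes, not a description of any actual national system.

```python
# Illustrative sketch of automated data matching for fraud detection:
# records from separate administrative registers are linked on a shared
# citizen identifier, and the merged profiles are screened by a simple
# rule. All field names and the rule itself are hypothetical.

def link_records(*registers):
    """Merge per-register records into one profile per citizen id."""
    profiles = {}
    for register in registers:
        for record in register:
            profiles.setdefault(record["citizen_id"], {}).update(record)
    return profiles

def flag_discrepancies(profiles):
    """Flag recipients whose taxed income exceeds the income they declared."""
    flagged = []
    for citizen_id, profile in profiles.items():
        declared = profile.get("declared_income", 0)
        taxed = profile.get("taxed_income", 0)
        if profile.get("receives_benefit") and taxed > declared:
            flagged.append(citizen_id)
    return flagged

# Toy data standing in for the welfare and tax registers:
welfare = [{"citizen_id": 1, "receives_benefit": True, "declared_income": 0}]
tax = [{"citizen_id": 1, "taxed_income": 12000}]

print(flag_discrepancies(link_records(welfare, tax)))  # [1]
```

A real exercise would of course involve far larger datasets and more sophisticated matching rules, but the structure, linking on an identifier followed by rule-based screening, is the same one the text describes.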
While it must be acknowledged that the use of automated data processing techniques increases the efficiency of welfare administration when investigating benefit fraud, it must also be pointed out that such practices may have adverse effects on the legal position of citizens. These effects are very well illustrated in the Census-judgment 17 of the German Federal Constitutional Court from 1983, which laid the foundation for the country’s national data protection framework. As duly noted by the Court, the shift from manual processing of personal information stored on paper towards automated processing has enabled the state to process much larger amounts of personal data nearly instantaneously and regardless of the distance between the locations where the data are stored. Furthermore, the potential to link personal data allows governments to create very detailed personal profiles without allowing citizens to exercise control over their accuracy or the use of their personal data. The Federal Constitutional Court observed that this ‘previously unseen’ expansion in the possibilities for the state to process personal data exerts ‘psychological pressure’ on individuals which may prompt them to adapt their behaviour. This touches on the right of citizens to develop and protect their personality within an autonomous area of private life. From the point of view of personal autonomy and (informational) self-determination, citizens should be able to foresee and control the potential uses of their personal data.
The considerations of the German Federal Constitutional Court sketched above prove to be especially relevant in the sensitive area of social security. A primary function of social security is to prevent social exclusion that, according to its definition, ‘precludes full participation in the normatively prescribed activities of a given society and denies access to information, resources, sociability, recognition, and identity, eroding self-respect and reducing capabilities to achieve personal goals.’ 18 The abolition of discriminatory and degrading treatment of people who are dependent on the payment of welfare benefits is an important factor in the promotion of social inclusion. Extensive control measures signal a lack of trust by the state in its most vulnerable citizens, which underpins the urge to monitor their behaviour ever more closely. This approach can be counter-productive for the social reintegration of beneficiaries because it reinforces the existing stigmatisation of benefit claimants by openly questioning their trustworthiness.
In summary, the automated processing of personal data is an important, efficient tool which the modern welfare state has at its disposal in the legitimate effort to counteract the irregular use of welfare benefits. At the same time, however, the inflation of governmental informational powers can have a significant negative impact on the private lives of beneficiaries and on the inclusive function of social security. One of the main objectives of data protection reveals itself here – to strike a balance between the public interest pursued by the processing of personal data and the basic rights of the individual concerned. The next section examines the mechanisms employed by the EU data protection framework to this end.
3. The EU framework on data protection
3.1 The EU General Data Protection Regulation (GDPR)
The GDPR 19 lies at the heart of the EU data protection framework. The Regulation came into force in May 2018 with the objective of ensuring a consistent and high level of protection for individuals with regard to their personal data. Accordingly, it has a wide scope of application, 20 which covers both the private and the public sector. Personal data is defined very broadly as ‘any information relating to an identified or identifiable natural person.’ The term ‘processing’ is also interpreted broadly – it covers any set of operations performed on personal data, including their collection, organisation, storage, adaptation, retrieval, erasure or destruction. The Regulation refers to the person whose data are being processed as the ‘data subject’ while reserving the term ‘data controller’ for the body responsible for the processing of personal data. The data controller determines the purposes and the means of the processing and can be a natural or legal person, as well as a public authority or an agency.
The GDPR allows personal data to be processed only when this is in accordance with the core principles of data protection laid down in Article 5, the first three of which are briefly described for the purposes of this article. The principle of lawful, transparent and fair processing requires that the collection and further processing of personal data are based on a legal ground and that the data subjects are informed about the use to which their data is put. Furthermore, the purpose limitation principle requires that personal data are collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with these purposes. The related principle of ‘data minimisation’ prescribes that personal data are adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed. The core principles of data protection in Article 5 GDPR are formulated in a very general manner, however, and the remaining provisions of the Regulation provide a certain degree of concretisation that is required for their application in practice.
Article 6 GDPR defines the qualitative requirements which apply to the legal grounds for the processing of personal data. With specific attention to the public sector, the Regulation allows national bodies to process personal data without the consent of the data subject when this is based on national legislation. 21 The respective legal acts are subjected to a proportionality test in order to ensure that the state administration does not collect and process more personal data than is strictly necessary for achieving the purpose of the data processing. 22 The purpose for which personal data is processed thus provides an important point of reference when applying the abstract proportionality test to the legal ground on which the operation is based. It gives an indication of the necessary extent of the data processing, including the categories of personal data that need to be processed. By doing so, it makes possible the application of the proportionality test and the examination of the lawfulness, fairness and transparency of the data processing operation.
The focus of this section is directed at examining two sets of requirements which are particularly important in the context of the control powers used by governments for welfare fraud prevention. In the first place, the purpose limitation principle and its implications are examined. As noted above, this principle is of central importance to the application of other data protection safeguards and is, therefore, a key factor in the effective protection of personal data in general. In the second place, this section examines the transparency rights of welfare beneficiaries, which are an indispensable tool for ensuring that citizens remain in control of their data. 23 Finally, it critically addresses the notable exceptions made by the GDPR to the application of the purpose limitation principle and the transparency rights of individuals in the field of social security.
3.2 The purpose limitation principle
In the words of Article 5(1)(b) GDPR, the ‘purpose limitation principle’ requires that personal data be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with these purposes. 24 Due to a lack of relevant case law and policy documents, the most feasible way to gather further information on the required degree of purpose specification is to consult the opinion on purpose limitation of the EU data protection advisory body, the Article 29 Working Party (WP29). 25
The ‘purpose limitation principle’ builds on the notion that the purposes of the data processing must be explicit and specified. The ‘criterion of explicitness’ demands that the purposes for the processing of personal data be clearly revealed, explained or expressed in some intelligible form. 26 The specification criterion requires that the purposes be sufficiently defined to enable the implementation of any necessary data protection safeguards and to delimit the scope of the processing operation. The controller must carefully consider what purposes the personal data will be used for and must not collect personal data that are not necessary, adequate or relevant for the purposes that are intended to be served. 27
In the words of the WP29 advisory body, a purpose that is vague or general will usually not meet the criterion of being specific. 28 Obviously, the required degree of specification calls for a case-by-case examination in which all relevant circumstances are considered. The WP29 has provided practical examples of purpose formulations that it considers too general: ‘marketing purposes’, ‘IT-purposes’ and ‘future research’. 29 All these purposes share the feature that they can easily be specified further by providing more detail: Which kind of marketing, which area of IT and which type of research? These examples show that a purpose formulation will not satisfy the requirement of purpose specification if its scope can easily be narrowed down by providing further details regarding the particular field or the context in which it applies.
The general guidelines provided by the WP29 advisory body can be transposed to the situation of welfare fraud investigation to extract the criteria that can be applied to the underlying national legislation. Firstly, the national legislation in question must satisfy the requirement of explicitness by establishing a link between the processing of personal data and the purposes that it serves. When consulting the respective legislation, citizens must be able to recognise that their personal data is collected and examined for the purposes of fraud investigation.
Secondly, the purpose of the national legislation must meet the requirements of specification. In the light of the examples discussed above, a general, broad purpose formulation such as ‘fraud investigation in social security’ should be considered too vague to satisfy the specification criterion. This purpose formulation can be further specified by determining the field of social security in which fraud is being investigated. Social security is a bundle of arrangements put into place to provide relief against occurring social risks. The array of covered social risks is a wide one, ranging from (temporary) unemployment and sickness to child support, old-age and social care. Each of these arrangements has its own distinctive characteristics, and this is reflected in the conditions attached to the benefits, which vary across different social security schemes. It is important to acknowledge this divergence in the benefit conditions because it leads to the conclusion that the government needs to collect and analyse different sets and categories of personal data in order to detect non-compliance under the various social security schemes: The welfare administration might have legitimate reasons to know the water and electricity consumption of a benefits recipient under a needs-based scheme (social assistance), however this information would be completely irrelevant under an unemployment insurance scheme.
These practical examples bring us back to the essential function of the purpose limitation principle, which the WP29 once labelled the ‘cornerstone of data protection.’ 30 Some of the central safeguards in data protection law can only be applied effectively when the purposes of the data processing are specified with sufficient precision. When applying the proportionality test to a broadly formulated purpose such as ‘fraud investigation in social security’, the range of personal data that may be considered necessary to collect and further process is much greater compared to narrower, sector-specific purpose definitions such as ‘fraud prevention in unemployment insurance’. By adopting legislation that pursues such general purposes, governments maximise the reach of their control powers while simultaneously undermining the data protection rights of the individuals concerned. From the perspective of an effective application of the purpose limitation principle and other data protection safeguards, governments should be required to adopt legislation that pursues well-defined, explicit and narrow purposes for the processing of personal data. National legislation that supports the control powers of the government should be limited to specific social security arrangements. This would set reasonable limits on the scope of the control powers of the government. Furthermore, as we shall see in the next section, this would also benefit the transparency rights of welfare recipients.
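The interplay between purpose specification and data minimisation sketched above can be illustrated schematically: a narrow, scheme-specific purpose translates directly into a smaller set of permissible data categories. The purpose labels and field names below are invented for illustration and do not reflect any actual statute.

```python
# Illustrative sketch (not drawn from any statute): a narrow,
# scheme-specific purpose definition limits the categories of personal
# data that may be processed, compared with a broad one.

PERMITTED_FIELDS = {
    # Needs-based social assistance: household consumption can be relevant.
    "fraud_prevention_social_assistance": {
        "name", "address", "other_benefits", "tax_data",
        "water_consumption", "electricity_consumption",
    },
    # Insurance-based unemployment benefit: consumption data is irrelevant.
    "fraud_prevention_unemployment_insurance": {
        "name", "address", "other_benefits", "tax_data",
    },
}

def minimise(record, purpose):
    """Drop every field not necessary for the stated processing purpose."""
    allowed = PERMITTED_FIELDS[purpose]
    return {field: value for field, value in record.items() if field in allowed}

record = {"name": "A", "tax_data": 1, "water_consumption": 80}
print(minimise(record, "fraud_prevention_unemployment_insurance"))
# {'name': 'A', 'tax_data': 1}
```

The point of the sketch is that the proportionality test only has teeth once the purpose is specific enough to determine which fields belong in the permitted set; under a catch-all purpose, every field would pass the filter.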
3.3 Transparency rights of welfare recipients
As already mentioned, the function of the purpose limitation principle is not only to set limits on the scope of the processing operation, but also to facilitate the meaningful exercise of some of the other data-subject rights enshrined in the GDPR. An important cluster of rights relates to the transparency of data processing, which is one of the central principles of the GDPR laid down in Article 5(1). The concrete requirements for safeguarding the transparency of data processing can be found in Section 2 of the GDPR (‘Information and access to personal data’). Articles 13 and 14 GDPR specify the information duties for data controllers when processing personal data. The main difference between the two provisions lies in their scope of application. Article 13 applies to the situation where the personal data are collected from the data subject, while Article 14 covers the cases where the processed data have not been obtained from the data subject. Article 15 GDPR, in turn, establishes the corresponding right of access which guarantees the access of data subjects to the relevant information relating to the processing of their personal data. As Wachter et al put it, ‘[t]ogether, Articles 13-15 form what has been called the ‘Magna Carta’ of data subject’s rights to obtain information about the data held about them, and to scrutinize the legitimacy of the data processing.’ 31
The array of information covered by the transparency rights of data subjects under the GDPR is a broad one. Welfare beneficiaries have the right to know the identity of the controller of their personal data and the purposes of the data processing as well as the categories of personal data concerned. 32 Furthermore, the welfare administration must provide additional information in cases where it is making use of automated decision-making (including profiling 33 ) within the meaning of Article 22 GDPR: Data subjects have the right to be made aware of the use of these techniques and, furthermore, to receive meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing. 34 Research carried out by Wachter et al concludes that the practical added value of this right is limited and that it does not amount to what is dubbed in academic circles a ‘right to explanation of automated decision-making.’ 35 The first limitation lies in Article 22 GDPR, which requires that a decision is based solely on automated processing. In cases where (nominal) human intervention has taken place, the respective transparency right of the data subject does not apply. Nevertheless, even if a particular case does fall under the scope of application of Article 22 GDPR, the ‘meaningful information’ is limited to an ex ante explanation of system functionality instead of an ex post explanation of the exact logic used by an algorithm to reach a specific decision. 36
3.4 Data protection in the limbo of ‘margin for manoeuvre’
An unavoidable characteristic of law is that not all requirements that are desirable from a theoretical point of view are always translated into practice. The EU law-making process is subject to national influences, which may result in the introduction of exceptions for certain situations. The adoption of the GDPR is a good example of such political compromise – the original proposal submitted by the Commission 37 was subject to a record number of amendments (over 3000) before it could pass in the parliamentary Committee on Civil Liberties, Justice and Home Affairs (LIBE). 38 More amendments were agreed on later in the lifecycle of the decision-making process by the European Council and the European Parliament. 39
Some of the amendments were clearly pushed forward with the objective of limiting the scope of the safeguards initially proposed by the Commission with regard to cases where governments process personal data in the public interest. The legal instrument to facilitate this was found in the old formula ‘margin for manoeuvre’. This formula is a remnant of Recital 9 of the EU Data Protection Directive: 40

‘Member States will be left a margin for manoeuvre […] whereas, within the limits of this margin for manoeuvre […], disparities could arise in the implementation of the Directive.’

The practical implications of this ‘margin for manoeuvre’ under the Directive were initially left unclear. In 2003, the Court of Justice of the EU (CJEU) adopted its Lindqvist judgment in which the Court addressed the balance between the full harmonisation of data protection in the EU and the ‘margin for manoeuvre’ for Member States in abstract terms: 41

It is true that Directive 95/46 allows the Member States a margin for manoeuvre in certain areas and authorises them to maintain or introduce particular rules for specific situations as a large number of its provisions demonstrate. However, such possibilities must be made use of in the manner provided for by Directive 95/46 and in accordance with its objective of maintaining a balance between the free movement of personal data and the protection of private life. […] inter alia: the general conditions governing the lawfulness of processing by the controller; the types of data which are subject to the processing; the data subjects concerned; the entities to, and the purposes for which, the personal data may be disclosed; the purpose limitation; storage periods; and processing operations and processing procedures, including measures to ensure lawful and fair processing such as those for other specific processing situations as provided for in Chapter IX […]
To the present day, the motives of the EU legislator for resurrecting the margin for manoeuvre formula, with all its restricting implications, remain unclear. Taking into consideration that the important amendments to Article 6(3) GDPR were made by German MEPs, one assumption is that the leeway for member states was created in response to concerns that the introduction of the GDPR could have a negative impact on the high level of data protection that was already established in some countries prior to the Regulation. 44 By allowing national legislation to further specify the rules of the GDPR, these countries could continue pursuing high standards of national protection. This, however, is a double-edged sword. By effectively limiting the extent of full harmonisation, the margin for manoeuvre and related limitations allow certain member states to offer a higher standard of protection, but are simultaneously counterproductive for other countries which, for one reason or another, fail to deliver the desired level of protection.
4. Country-specific case studies
To shed light on the application of data protection standards in the contemporary welfare state, this Section presents three case studies that were carried out on Germany, the United Kingdom and the Netherlands. These countries were chosen because of their close geographical proximity and their strongly divergent legal traditions. The case studies examine the mechanisms that have been used by national governments to link and analyse personal data in the fight against welfare fraud. In addition to describing the functioning of these mechanisms, the analysis examines whether the underpinning national legislation and its implementation in practice by the welfare administration is in accordance with EU data protection standards, more specifically with regard to the purpose limitation principle and the transparency rights of welfare beneficiaries.
4.1 Germany
Over the years, Germany has gained a reputation as one of the frontrunners in data protection. In 1970, the state of Hesse enacted the world’s first data protection law, 45 which paved the way for the German Federal Data Protection Act (BDSG) 46 almost eight years later. In 1983, the German Federal Constitutional Court adopted its landmark decision in the Census-case 47 in which it recognised the right to informational self-determination as a basic right with a legal anchor in the national constitution (Articles 2(1) and 1 German Basic Law). 48 Furthermore, the German framework pays special attention to data protection in the field of social security. This is partially due to the principle of confidentiality of social security data, 49 which builds on the notion that welfare recipients should not suffer a degradation in the standard of data protection when compared to other citizens solely by reason that their livelihood depends on welfare benefits. 50 For that purpose, the German legislator has introduced a separate data protection regime which applies specifically to welfare arrangements. The relevant rules are codified in national social security legislation acts and a common framework is established in §§ 67-85 Sozialgesetzbuch X (SGB X). These specific rules take precedence over the general data protection regime, which is regulated in the German data protection act (the Bundesdatenschutzgesetz).
German social security legislation contains two legal grounds that allow the welfare administration to periodically link and examine large sets of personal data in the battle against the illegitimate use of welfare benefits: § 52 SGB II and § 118 SGB XII. § 52 SGB II applies to the general unemployment assistance scheme known as Hartz IV. Based on this provision, the competent public bodies are obliged to transfer, link and examine personal data four times every year. Strictly speaking, the purpose of the data processing is not defined explicitly in the legislative provision, which is a noticeable shortcoming. 51 Nevertheless, the purpose definition can be established contextually, by means of systematic interpretation. 52 Furthermore, the principle of data minimisation seems not to have been impacted negatively by the lack of explicit purpose definition. The scope of personal data that can be processed under § 52 SGB II is limited to what is strictly necessary to control compliance under the general social assistance scheme. It covers information on other welfare benefits that an individual receives, general taxation data and a minimum of personal identification data – name, place and date of birth, address and social insurance number.
§ 118 SGB XII creates a similar competence for compliance control under the social assistance schemes that are regulated in SGB XII. These schemes form the outermost safety net of the German welfare system and can be relied on by people whose basic needs are not covered by other welfare arrangements. There are several important differences between the legal ground in question and § 52 SGB II. In the first place, the decision to analyse personal data is subject to the discretion of the responsible bodies, which can, but are not obliged to, carry out the data operations ‘frequently’. Secondly, § 118(4) SGB XII contains an explicit formulation of the purpose of the data processing: The prevention of the illegitimate use of benefits under one of the social security arrangements regulated in SGB XII. Also, the scope of personal data that can be processed is modified in accordance with this purpose definition. In addition to information about benefits paid under other social security schemes and general taxation data, § 118 SGB XII allows the processing of information regarding the costs of rent, electricity, gas, water and garbage disposal.
The examination above demonstrates that § 52 SGB II and § 118 SGB XII are designed in a sector-specific manner, in accordance with the purpose limitation principle of EU data protection law. As a result, the scope of personal data that can be processed by the responsible public bodies is limited to what is strictly necessary for controlling compliance under the particular social security system. This effectively sets limits on the scope of the control powers of the welfare administration and benefits the transparency rights of welfare beneficiaries. Nonetheless, there used to be a debate in German academic circles about whether the data matching powers in the SGB are in accordance with the basic right to informational self-determination guaranteed by the German Constitution. 53 In 2015, the German Federal Social Court confirmed the constitutionality of § 52 SGB II. 54 As one of the decisive factors influencing this decision, the court pointed at the restrictive, sector-specific design of the provision, which aligns with the principle of specificity. 55
4.2 The United Kingdom
The analysis of personal data for the prevention and detection of welfare fraud has been common practice in the UK since its introduction in the mid-1990s. Every two years, the British government carries out a massive data matching exercise under the National Fraud Initiative (NFI). The biennial exercises make use of data supplied by over 600 public authorities, including health authorities and government departments, as well as a growing number of private-sector bodies. 56 The initial focus of the NFI was on fraud in housing benefit and student award claims. However, the scope of the initiative has become much wider and now covers various public programmes, ranging from pensions and personal budgets to licenses for taxi drivers and market traders, public sector payroll and transport permits. 57 The current 2018-2019 exercise does not address unemployment benefit fraud, but the British government has indicated its intention to do so in the future by including Universal Credit in the scope of the data matching exercises. 58
The legal framework underpinning the NFI is embodied in Schedule 9 of the Local Audit and Accountability Act 2014. It empowers the Cabinet Office to conduct data matching exercises which, according to the statutory definition, involve a comparison of sets of data to determine how far they match (including the identification of any patterns and trends). 59 The scope of personal data which can be collected under this Act is practically unlimited. The Cabinet Office can collect personal data which may ‘reasonably’ be required for the purpose of conducting data matching exercises from any relevant national authority. 60 The provision of the data by the authority is mandatory. Nevertheless, if the Cabinet Office considers it appropriate, it can also collect personal data from any other ‘body or person,’ both in and outside of England. 61 The provision of personal data in this case is voluntary.
The practically unlimited powers which national legislation confers on the public administration are specified in a variety of non-binding governance arrangements that have been adopted by the Cabinet Office. 62 The most important one is the Code of Data Matching Practice which lays down a framework that can be applied to the whole lifecycle of a data matching exercise. 63 With regard to the criteria for selecting the relevant personal data for a particular exercise, the Code states that the Cabinet Office will only choose data sets that are to be matched where it has reasonable evidence to suggest that fraud may be occurring and this fraud is likely to be detected as a result of matching those data sets. 64 Furthermore, the scope of the data which can be processed must be limited to the minimum needed to undertake the matching exercise, to enable individuals to be identified accurately and to report results of sufficient quality to meet the purpose of preventing and detecting fraud. 65 Some additional information, for example on the rights of the data subject, is included in the recently revised privacy notice of the NFI. 66
The Cabinet Office regularly adopts and publishes a data specification for every matching exercise. 67 The latest version applies to the 2018-2019 exercise. The scope of personal data that can be processed is clear and well delimited, as the data specification applicable to personal budgets demonstrates. 68 Besides some basic personal information needed to determine the identity of persons and their place of residence, it includes identifiers such as the person’s national insurance number, the unique property reference number and the claim reference number, as well as information regarding payments that were made in the past and payments made under other social security programmes (such as housing benefit and pensions income).
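The statutory notion of a data matching exercise quoted above, a comparison of data sets to determine ‘how far they match’, can be pictured as computing the overlap between record sets keyed on the identifiers listed in the data specification. The following sketch is purely illustrative; the field names and the simple match-rate measure are assumptions, as the NFI’s internal matching logic is not published.

```python
# Illustrative sketch of an NFI-style matching exercise: two data
# sets are compared on a shared identifier and the degree of overlap
# is reported. Field names and values are assumptions.

personal_budget_claims = [
    {"nino": "QQ123456A", "claim_ref": "PB-001", "payment": 500.0},
    {"nino": "QQ654321B", "claim_ref": "PB-002", "payment": 450.0},
]

housing_benefit = [
    {"nino": "QQ123456A", "claim_ref": "HB-884", "payment": 300.0},
]

def matching_exercise(left, right, key="nino"):
    """Compare two data sets on a shared key and return the matched
    records together with a simple match rate, echoing the statutory
    idea of determining 'how far they match'."""
    right_keys = {r[key] for r in right}
    matches = [r for r in left if r[key] in right_keys]
    rate = len(matches) / len(left) if left else 0.0
    return matches, rate

matches, rate = matching_exercise(personal_budget_claims, housing_benefit)
print(len(matches), rate)  # 1 0.5
```

A real exercise would go on to identify patterns and trends across many such comparisons; a match is only an indication warranting further investigation, not proof of fraud.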
Overall, the non-binding governance instruments adopted by the Cabinet Office compensate for the notable shortcomings displayed by the extremely broad primary legislation. The purposes of the data processing are defined in a restrictive manner, and the categories of processed data are sufficiently specified. As a result, the scope of the data processing operations under the NFI is limited to the minimum necessary for revealing fraud under a specific public benefit scheme. From this perspective, the established practice resonates with the normative requirements posed by the purpose limitation principle of EU data protection law. Furthermore, the publicity given by the British government respects the transparency rights of welfare recipients.
4.3 The Netherlands
The track record of the Dutch government in data protection issues surrounding welfare fraud is far from flawless. In 2003, a cooperation unit 69 was established in the Netherlands between various welfare administrative bodies, the tax authority, the Ministry of Justice, the police, the public prosecutor’s office and a number of municipalities. The purpose of this unit was to combat illegal work, welfare fraud, and tax fraud by leveraging the potential of linked personal data to create risk profiles. This immediately came to the attention of the Dutch Data Protection Authority (DPA), which adopted a document highlighting the various data protection rules in the context of welfare fraud prevention, with a specific focus on the scope of data processing as defined by the proportionality principle and the transparency rights of individuals. 70
In 2007, the Dutch DPA examined one of the early schemes deployed by the cooperation unit – project Waterproof. Within this project, the government linked the personal data of 35,000 citizens to investigate their water consumption as an indicator of possible address fraud, which eventually resulted in the detection of 42 fraud cases (a hit rate of 0.12 per cent). 71 The findings of the DPA raised a number of concerns: in all of the cases, the investigation was carried out without prior suspicion of fraud, the purpose limitation principle was not respected due to the lack of appropriate legislation, and the transparency rights of individuals were breached. 72 The Dutch government made an attempt to partially remedy these concerns by introducing the so-called Black Box-method, which meant that personal data would first be anonymised before being linked and examined. However, the new approach failed to address the issue of the non-existent legal basis and the neglected transparency rights of the data subjects, which triggered another very critical decision of the Dutch DPA in 2011. 73
The interventions by the DPA prompted the Dutch government to revise its data matching projects and provide them with a legal basis. This led to the birth of the project-based System for Risk Indication (Systeem Risico Indicatie / SyRI). 74 Each SyRI-project contains a predefined risk model which is based on indicators suggesting a higher probability that a person is committing benefit fraud. The collected personal data are encrypted and then linked together by the body that is responsible for carrying out the analysis. If the analysis indicates a match with the predefined risk model, the respective personal data are decrypted and transferred back to the welfare administration, which adds the risk notifications to a central register where these are kept for a period of two years. During this period, the notifications are accessible to the responsible administrative bodies which can carry out further investigation on a case-by-case basis.
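The SyRI workflow described above – pseudonymising the data, linking it, testing it against a risk model, and re-identifying only the records that match – can be sketched as follows. The risk indicator (low water use as a proxy for address fraud, borrowed from the earlier Waterproof project), the hash-based pseudonymisation and the record layout are all assumptions for illustration; the real risk models and technical architecture have not been made public.

```python
import hashlib

# Illustrative sketch of the SyRI processing pipeline. The indicator,
# fields and values are hypothetical; the real risk models are secret.

def pseudonymise(citizen_id: str) -> str:
    """Stand-in for SyRI's encryption step (a hash is used here
    purely for illustration)."""
    return hashlib.sha256(citizen_id.encode()).hexdigest()

welfare_data = {"NL-001": {"benefit": "social assistance"},
                "NL-002": {"benefit": "social assistance"}}
utility_data = {"NL-001": {"water_m3_per_year": 4},
                "NL-002": {"water_m3_per_year": 95}}

def risk_model(record: dict) -> bool:
    # Hypothetical indicator: very low water use may suggest that the
    # person does not actually live at the registered address.
    return record["water_m3_per_year"] < 10

# 1. Pseudonymise and link; the pseudonym-to-identity mapping is kept
#    apart, so the analysing body sees no directly identifying data.
pseudonym_to_id = {}
linked = {}
for cid in welfare_data:
    p = pseudonymise(cid)
    pseudonym_to_id[p] = cid
    linked[p] = {**welfare_data[cid], **utility_data[cid]}

# 2. Apply the risk model to the linked, pseudonymised records.
matched = [p for p, rec in linked.items() if risk_model(rec)]

# 3. Only the matches are re-identified ("decrypted") and added to
#    the risk register, where notifications are kept for two years.
risk_register = [pseudonym_to_id[p] for p in matched]
print(risk_register)  # ['NL-001']
```

Even in this simplified form, the design choice is visible: everything turns on the content of the risk model, which is precisely the element the Dutch government has refused to disclose.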
As far as purpose limitation and data minimisation are concerned, SyRI displays noticeable shortcomings. The specification of the purposes served by the system can be found in Article 64(1) Wet SUWI: ‘For the purpose of an integral governmental approach in the fight against the illegitimate use of public funds and public provision in the area of social security, tax supplements, the fight against tax fraud and premiums fraud and the compliance with labour legislation […]’ This purpose formulation is very extensive and is not confined to a specific social security scheme. As a matter of fact, data processing under SyRI is not even limited to the whole body of social security arrangements, but also addresses issues of general taxation and labour law (e.g. compliance with minimum wage regulation). The Ministry of Social Affairs and Employment displayed an awareness of the wide scope of SyRI and tried to justify it in the following statement: ‘The choice for a broad purpose limitation is a conscious one and it aims to facilitate an integral approach by the government against illegitimate use, fraud and non-compliance with national legislation.’ 75 Nevertheless, a ‘broad purpose limitation’ is a contradictio in terminis which is clearly incompatible with the basic principles of data protection.
The ‘broad purpose limitation’ is reflected in the wide scope of personal data which is collected, linked and examined with the help of SyRI. Article 5a.1(3) Besluit SUWI defines 17 broadly formulated categories of personal data: employment data, data on administrative sanctions, fiscal data, data on real estate property, data on grounds of exclusion from welfare benefits, trade data, address data, identification data, data related to the integration of foreigners, historical data concerning compliance, educational data, pensions data, reintegration data, debt data, data concerning the enjoyment of social security benefits, data concerning permits and health insurance data. The broad scope of personal data which can be processed in SyRI is among the main points criticised by the DPA 76 and the Dutch Council of State. 77 The latter body observed:
These categories are broad and extensive, and the data which falls thereunder can, in some cases, seriously interfere with the private sphere of an individual. The definition of the categories of personal data is meant to set limits on data processing (principle of data minimisation) but, in this case, the scope is so broad that it is hard to think of a category of personal data that does not fall under it. It looks as if this definition of processed personal data does not aim to set limits, but rather to confer the broadest powers possible.
While it is true that Article 5a.1(2) Besluit SUWI prescribes the specification of the purposes of each SyRI-project along with the relevant personal data, in practice this process takes place behind closed doors. Since the launch of SyRI in October 2013, the Dutch government has initiated only three projects, one of which was cancelled prematurely. The first information about these projects reached the public after a group of lawyers filed several freedom of information requests. 80 The government granted the requests only partially and released limited information. What becomes clear from the project applications is that the investigation is directed specifically at several socially weak neighbourhoods in Eindhoven and in the Rotterdam area. A look at the application document for the project in Eindhoven clearly shows that the information provided does not specify the concrete purposes of the project or the exact categories of personal data which are being processed. 81 Instead, it vaguely describes social problems in one of the neighbourhoods of the city, such as ‘fraud and non-compliance with labour legislation’, that need to be tackled in order to ‘improve liveability in the neighbourhood.’ With regard to the specification of the processed personal data, the project application refers to a so-called ‘risk model for neighbourhood action’ 82 that contains a definition of the risk indicators used by the SyRI-projects, providing some insight into the logic and algorithms involved. The corresponding document, however, was not made publicly accessible by the Dutch government.
Today, SyRI has evolved into a highly controversial issue. In March 2018, a group of human rights organisations and several prominent individuals filed a court case against the use of SyRI by the government. 83 The list of legal grounds on which the system has been challenged is long. The most important grounds relate to the scope of the powers, the broad purpose formulation, the unclear categories of personal data and the secret algorithms for compiling the risk profiles. In May 2018, Members of the Dutch Parliament confronted the Minister of Social Affairs with critical questions regarding the use of secret algorithms. 84 The expressed concern that the logic underpinning risk profiles should be made publicly available in order to prevent biases and potential discrimination was countered by the Minister with the argument that this would give criminals an advantage. In June 2018, a subsequent motion to at least allow technical audits of the used algorithms was blocked by the Secretary of State on similar grounds. The determination of the Dutch government to preserve the well-kept secrets surrounding SyRI continued in January 2019, when the Secretary of State refused to answer parliamentary questions concerning both the internal workings of the system and its concrete outcomes in the various projects. 85 The complete rejection of any form of accountability towards the legislator regarding the bulk profiling of welfare beneficiaries is worrisome and inexplicable.
5. Conclusions
In the contemporary age of welfare conditionality, systems that link and analyse personal data are an indispensable tool in the fight against welfare fraud in Western Europe. This article reports on three country-specific analyses in order to examine whether the general repressive trend reflected in the adoption of more detailed claimant obligations and ever stricter sanctions is also accompanied by an unwarranted expansion of the respective control powers. The examination shows that, in each of the countries examined here, national legislation has been adopted which provides the legal basis for the powers of the government to control compliance with conditions attached to welfare benefits. Nevertheless, the particular outcomes of the case studies are fundamentally different, and this can partially be explained in terms of the divergent legal traditions in Germany, the United Kingdom and the Netherlands.
In Germany, the strong constitutional embedding of the right to informational self-determination and the acknowledgment that welfare beneficiaries represent a vulnerable group requiring elaborate data protection rights have translated into privacy-friendly legislation. By adopting sector-specific legislation, which clearly specifies the processed personal data and limits it to the absolutely necessary minimum, the German legislator has set clear boundaries to the control powers of the state and has furthermore ensured that the anti-fraud systems operate in a transparent manner.
In the United Kingdom and the Netherlands, the legislators have chosen a more pragmatic approach by enacting broad legislation that creates practically unlimited powers for the welfare administration and simultaneously delegates the task to scope these powers to the executive. In the United Kingdom, the administrative acts that have been adopted live up to these expectations and the system, as a whole, respects the most important principles of the GDPR. The various NFI data matching exercises are accompanied by extensive policy documents. The non-binding guidelines clearly specify which personal data is processed per exercise, and the information made public by the British government ensures that transparency rights are respected.
In the Netherlands, the implementation of SyRI in practice displays alarming shortcomings. SyRI is the most privacy-intrusive anti-fraud system of the three countries examined in this article, since it employs risk profiles to flag individual citizens. The negative impact of this on the right to data protection is amplified by the extremely broad scope of data processing possible under SyRI. When adopting the underlying legislation, which requires that the control powers are defined more precisely in the applications for the particular SyRI projects, the Dutch legislator was clearly aiming for more flexibility. In reality, this mechanism has proven to be a dangerous failure. The existing project applications do not formulate specific purposes for the processing, and the categories of processed personal data remain unclear. What might at first sight appear as a display of sloppiness turns out to be something much more worrying. After a number of parliamentary inquiries and freedom of information requests, the Dutch government still continues to deliberately prevent the release of information to the public concerning the processed categories of personal data, the logic of the algorithms and the outcomes of the projects.
While it is true that the GDPR provides the necessary leeway for Member States to neglect some core aspects of its rules when processing personal data for welfare fraud prevention, this approach is highly undesirable from the perspective of the right to social security. Any deviation from the standards of protection effectively means that welfare beneficiaries are subject to a lower level of basic rights protection as compared to ‘regular’ citizens who are not dependent on state support. This is problematic in light of the effective exercise of the internationally protected right to social security and social assistance, 86 which requires that welfare beneficiaries ‘should not be prevented from exercising their civil and political rights in full.’ 87
Considering that the adoption of the GDPR was a politically cumbersome process and that social assistance is traditionally a sensitive matter, the expectation that the CJEU would directly scrutinise national legislation to restore the balance is unrealistic. Nevertheless, the Court could intervene in more indirect ways. One solution, which would help strengthen the position of welfare beneficiaries, would be to require Member States to enact legislation that explicitly specifies (the scope of) the limitations imposed on the basic right to data protection. This would ensure that such limitations are transparent and that they are subject to the democratic decision-making process and parliamentary control. Perhaps more importantly, there is also sufficient room for interventions by national courts. Drawing inspiration from the German case, national judges could assign more weight to the vulnerability of welfare beneficiaries as a factor when scrutinising state control measures.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
