Abstract
Within the context of digital automation and profiling in the public sector, the rule of law and its inherent principle of legal certainty are highly debated concepts in relation to the desirable values and norms of equal treatment, transparency, and impartiality. However, scholars and policymakers disagree over whether automated decision-making (ADM) is beneficial for legal certainty. This debate highlights the ambiguity embedded in the substantive meaning of legal certainty. This article aims to analyze how the principle of legal certainty is interpreted and defined during the practical application of ADM in welfare services and to discuss the theoretical prerequisites for these definitions to be realized in ADM processes. The empirical case is the Swedish Public Employment Service, which makes extensive use of a statistical ADM tool for decision-making about whether or not to provide support to jobseekers. While the implementation of ADM by welfare institutions has been encouraged on the assumption that it strengthens public and democratic principles, the study shows that, in practice, ADM processes are perceived as non-transparent and generate a relatively large proportion of incorrect decisions. This may be particularly disadvantageous for vulnerable individuals, who risk being incorrectly denied the right kind of support despite having a greater need for it. The widespread future use of ADM in welfare services may reshape how welfare rights, obligations, and public principles are realized in a new technological era.
Introduction
The use of automated decision-making (ADM) is growing rapidly in public institutions and administrations across the western world (Civitarese Matteucci, 2021; Coglianese and Lehr, 2019; Di Giulio and Vecchi, 2023). This development is currently changing the settings for administrative procedures and citizens’ encounters with welfare services. Within the context of digital automation and profiling in the public sector, the rule of law and its inherent principle of legal certainty are highly debated concepts in relation to the desirable values and norms of equal treatment, transparency, and impartiality. However, both scholars and policymakers disagree over whether ADM is beneficial for the rule of law and procedures based on legal certainty (see Al-Mushayt, 2019; Civitarese Matteucci, 2021; Coglianese and Lehr, 2019; Eubanks, 2019; Gryz and Rojszczak, 2021).
This debate highlights the ambiguity embedded in the substantive meaning of the rule of law and legal certainty. While the rule of law provides important legitimation for modern governments, and legal certainty is considered “one of the requirements of the ideal of the
The present empirical case, which serves as an example of interpretations of legal certainty in ADM in welfare services, is the Swedish Public Employment Service (SPES), which currently makes extensive use of ADM in the form of a statistical tool that is used to make decisions on support to jobseekers (SPES, 2021b: 8). In published reports, the SPES has argued that ADM has significant advantages in terms of efficiency (SPES, 2019a), aligning with recent budget reductions, as well as the recent privatization of several SPES support services. The reports also argue that ADM contributes to decision-making procedures that are more legally certain (SPES, 2021c: 7). Increased efficiency and legal certainty are general and recurring arguments for introducing ADM into the public sector, as well as being important legal public principles in democratic states. However, Swedish law states that, in public administrative procedures, efficiency must not take precedence over legal certainty (SFS 2017: 900). Thus, in practice, there is a potential contradiction between promoting efficiency and promoting legal certainty.
In this article, I study the implementation of the ADM tool in light of the broad purpose of the SPES. This is, on the one hand, to support economic growth by effectively producing and mediating a workforce for the labor market and, on the other, to promote individual welfare by preventing social exclusion and poverty. In this sense, the authority operates within a tension: it protects citizens from exploitation, thereby generating non-reproductive effects on capitalism, while at the same time supporting the market (see Deflem, 2020). Moreover, the analysis investigates the relationships between legal procedures and non-judicial social conditions, such as social and economic relations and interests. Thus, interpretations and definitions of the principle of legal certainty are analyzed as a socio-political phenomenon (see Buckel, 2021; Zila, 1990). As this study shows, legal and social relations among public institutions, citizens, and the market are at stake. However, rather than the law itself being the object of study, it is the legitimation of the implementation of ADM with reference to the law and legal principles that takes center stage in the analysis.
Situating the study
This study is based on the critical discussions of automated and algorithmic decision-making in the public sector that are found in critical data studies (e.g., Allhutter et al., 2020; Iliadis and Russo, 2016), studies on responsible and explainable artificial intelligence (AI) (e.g., Floridi et al., 2018; Gryz and Rojszczak, 2021), and studies on AI and law (e.g., Civitarese Matteucci, 2021; Coglianese and Lehr, 2019; Smuha, 2021). The societal consequences of ADM are much debated, with the focus often being on the tendency and risk of ADM systems to produce or reproduce social discrimination (e.g., D’Ignazio and Klein, 2020; Eubanks, 2019; O’Neil, 2016), transparency problems (Kemper and Kolkman, 2019; Samek et al., 2019), and the limitations of professional discretion (Busch and Henriksen, 2018; de Boer and Raaphorst, 2021). These topics have received considerable attention within the context of both ADM systems in public institutions and legal perspectives on the public use of ADM.
While Busch and Henriksen (2018) have argued that the use of technological automation will support democratic values and strengthen the rule of law in the public sector, other studies have shown that algorithmic decisions and results are usually uncritically accepted by officials, although individuals subjected to an algorithmic decision risk being maltreated in this process (Lopez, 2021). As Smuha explained, “[t]he correlations drawn by the AI system can be inapplicable, biased or incorrect, but it is also possible that the legal rule applicable to the situation was erroneously embedded in the system” (Smuha, 2021: 8). At the same time, the societal harm and collective risks brought about by these kinds of algorithmic errors and discrimination are often neglected within legal studies (Hakkarainen, 2021; Smuha, 2021). While the focus here is mainly on regulatory perspectives on individual redress and service (Harasimiuk and Braun, 2021), accountability problems related to a lack of transparency in algorithmic decision-making are “undermining the broader societal interest of the rule of law” (Smuha, 2021: 8).
Proponents of ADM in the public sector stress the need for transparency and explicability. To gain legitimacy and trust, governments engaging in current digital technological developments must find ways to address problems with transparency (Ghallab, 2019; Reed, 2020). Researchers in the fields of responsible and explainable AI have contributed with knowledge about how the practical use of ADM can be improved in order to facilitate accountable and trustworthy decision-making. Scholars have also contributed with recommendations for policy actors and stakeholders, with the aim of shaping a good AI society (Arrieta et al., 2020; Floridi et al., 2018; Samek et al., 2019). These recommendations often concern issues of how transparency, fairness, and privacy can be transformed into technological practice and how to find technological solutions to the black-box problem of ADM systems. Scholars argue that while the EU’s General Data Protection Regulation (GDPR) aims to prevent individuals from being profiled in decisions that have significant impacts on their lives and to secure the “right to explanation,” there is still a lack of protection in practice due to unresolved transparency problems (Gryz and Rojszczak, 2021; Malgieri, 2019) and the tendency of public officials to uncritically accept automated decisions (Lopez, 2021; see also Kemper and Kolkman, 2019). Other scholars stress that transparency in ADM in the public sector can be achieved if governments give individuals the opportunity to question assessments and provide full information about the accuracy of the algorithm used in the decision-making, as well as the information and data upon which a given result is based (Al-Mushayt, 2019; Coglianese and Lehr, 2019). However, as Waldman (2019) and Andreassen et al. 
(2021) have emphasized, algorithmic decision-making obstructs officials from understanding how decisions come about, in turn preventing individuals from holding public institutions fully accountable for these decisions (see also Berman et al., 2022).
In sum, as Nordesjö et al. (2021) have reminded us, more studies are needed on whether ADM facilitates or limits officials’ discretion in terms of balancing welfare laws and regulations, cost-effectiveness, individual social and economic justice, and equality. Moreover, recent developments in Information and Communication Technologies “restructure power and authority” when introduced in the context of public administration, which, in turn, requires thoughtful implementation strategies (Di Giulio and Vecchi, 2023: 134). Traditionally, welfare bureaucrats have had a great deal of discretion to interpret rules and legal principles in light of their own assumptions and preferences (Lipsky, 2010), and these interpretations have manifested in decisions that affect the economic and social situation of the individual citizen (Dubois, 2014). In the context of public administration, rule-following is a complex phenomenon: on the one hand, it is a fundamental requirement of legitimate bureaucracies; on the other, there is great discretion in the interpretation of rules at the street or frontline level (Keulemans, 2021). Since different kinds of procedures in the everyday processes of public service organizations have been shown to affect how welfare bureaucrats approach legal enforcement (de Boer et al., 2018), it is relevant to explore welfare bureaucrats’ interpretations of legal principles in the context of the introduction of ADM. As Kim et al. (2021) have stated, there is a need for more empirical as well as theoretical studies on current digital developments in public administrations. With this study, I hope to contribute to this discussion by focusing on interpretations, definitions, and realizations of legal certainty during the practical use of ADM.
Because the principle of legal certainty is a significant instrument that can be used by public institutions to protect citizens from exploitation and unfair treatment, an analysis of how it is interpreted, defined and used in ADM is beneficial for identifying potential shifts in relationships among modern public institutions, the market, and citizens.
Defining and theorizing the principle of legal certainty
Theoretical understandings of the rule of law and legal certainty are seemingly coherent and consistent, with the concept of predictability being central. A well-referenced description of western law was given by Max Weber, who argued that, in modern society, the law is based on the intertwined core values of the predictability of legal outcomes and calculability in legal processing (Sterling and Moore, 1987; Weber, 1978). Yet, looking carefully at jurisprudential theory, several definitions and demarcations in the meaning of predictability and legal certainty emerge. This study assumes that these theoretical dividing lines must be recognized if we are to clearly investigate legal certainty within any legal process.
At a minimum, the rule of law refers to the legality and predictability of judicial decisions. This means that government officials and citizens should be ruled by and obey the law. It also means that the law should be formulated in advance, generally known, and understood (Tamanaha, 2012; see also Rawls, 1999). As Raz argued, “if the law is to be obeyed
In addition to the formal definition, there are theoretical perspectives emphasizing substantive and material aspects. The first perspective treats such material values as external to the rule of law itself, insisting that they be kept analytically distinct:

It is necessary to maintain a sharp analytical separation between the rule of law, democracy and human rights, as well as other good things we might want, like health and security, because mixing all of these together tends to obscure the essential reality that a society and government may comply with the rule of law, yet still be seriously flawed or wanting in various respects. (p. 236)
The second perspective refers to material aspects as internal conditions of the rule of law and legal certainty. The internal perspective emphasizes that substantive dimensions entail the realization of the law itself through practical law enforcement. Here, the material rule of law is not necessarily constituted by a universal societal norm but rather by the substantive meaning of the law itself. For juridical decisions to be predictable—i.e., legally certain—the rights, obligations, and objectives formulated in the law must be realized in practice (Lifante-Vidal, 2020; see also Zila, 1990).
Within welfare policies, such realization often relates to social justice and equality (Gustafsson, 2021). For example, the Swedish constitutional law The Instrument of Government (SFS 1974: 152) states: “The public institutions shall promote the opportunity for all to attain participation and equality in society” (SFS 1974: 152, chap. 1, art. 2). Here, from an internal perspective, compliance with the rule of law not only requires that decisions made by public institutions are based on a law that is general and publicly stated, but also that these decisions promote the equality and participation of citizens. However, this is not because the political norms of democratic participation and equality are socially legitimate—but rather, because the law itself formulates these obligations. If the law were not to include this sentence about participation and equality, participation and equality would thus not be rule-of-law requirements. As Lifante-Vidal (2020) has argued:

[W]e could say that the predictability which we consider valuable and which, accordingly, the principle of legal certainty obliges us to maximise, is that which affects reasonably grounded expectations (i.e., expectations which have to be considered legitimate in the light of the principles and values recognised by the law itself). (p. 466)
From an internal perspective on the relationship between formal, procedural, and material aspects, all laws have a form, including legal procedures and formality, as well as substantive legal content. Here, legal certainty thus means that the rights and obligations of citizens formulated in law must correspond to practical law enforcement.
In this study, the theoretical perspectives on legal certainty are used to identify variations in interpretations of legal certainty in the empirical material. While such perspectives are compatible and overlap in reality, I analyze them as conceptual abstractions—i.e., ideal types (Weber, 1978)—in order to investigate similarities, deviations, and connections among different understandings of the legal certainty principle, as well as the possibilities and challenges of its realization within ADM processes.
Methodological discussion
Case description of the ADM tool used by the Swedish public employment service
Labor-market and unemployment policies have a high degree of political complexity in terms of conflicting interests and beliefs about policy problems and solutions (see Peters, 2021), and the policy objectives are generally vast and complex. The SPES is a national authority responsible for the Swedish public employment services and labor-market policy activities. The SPES is not regulated in detail, leaving it space to design its own activities. When this study was conducted, the SPES was mainly governed by the following regulations: SFS 2022a: 811, SFS 2022b: 1020, and AFFS 2014: 1. Its overall objectives are to bring jobseekers and employers together in an effective manner and hence to contribute to an increase in long-term employment. SPES activities are to be conducted in an efficient, uniform, and legally certain manner and should be adjusted to each individual jobseeker’s situation. The SPES is also mandated to promote diversity and equality and to counteract discrimination in working life (SFS 2022a: 811).
In Swedish unemployment policy, the so-called “labor-market policy assessment” is a central instrument in moving the unemployed into work. This assessment regulates the individual jobseeker’s obligations and rights in relation to receiving support, and determines the activities the jobseeker must undertake in order to move into employment. According to the regulations of the SPES, this assessment must be formulated with the participation of each jobseeker as an individual action plan. In 2019, the SPES received a directive from the Swedish government instructing it to develop a statistical assessment support tool in order to improve resource efficiency. Since 2020, the labor-market policy assessments have shifted from being personally conducted by employment officials to being automatically conducted by a statistical profiling tool—i.e., an ADM tool. This ADM tool calculates a jobseeker’s prospects on the job market in terms of the estimated risk of unemployment within 180 days (IFAU, 2021: 7). Based on this and the time of enrollment, the ADM tool then classifies the jobseeker into one of three categories—high, mediocre, and low prospects—and suggests whether the jobseeker should have access to labor-market training, which has been provided by procured private companies since 2019.
The ADM tool is trained on historical data from previous jobseekers. This data consists of about 30 parameters, such as education and work experience, gender, age, and place of residence. When a jobseeker contacts the SPES, employment officials run the ADM tool, which, based on this historical data, gives the result “YES” or “NO” regarding training activities. Those who are denied are perceived by the tool as either being too close to the labor market and therefore presumed to be able to apply for jobs on their own, or as too far from the labor market and therefore presumed to need another type of more in-depth support—for example, work training provided by the SPES itself (SPES, 2019b; IFAU, 2021: 7).
According to the GDPR, a decision that significantly affects an individual may not be based solely on automated profiling, and the ADM tool at the SPES is therefore framed as a support system. A public official must always make the final decision. However, the opportunities for officials to ignore the decisions of the ADM tool are limited in practice, and deviations from the automated decision should only be made in exceptional cases. If the ADM tool says “NO” to support activities, there are no formal alternatives to following that decision. If the tool says “YES,” there are nine stated and predetermined reasons for deviating from the decision. These apply in cases where the jobseeker will clearly not be able to participate in training activities—for example, if the jobseeker is pregnant, knows that they will be starting education in the near future, or has certain disabilities (IFAU, 2021: 7).
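The categorization and override rules described above can be summarized in a short sketch. This is purely illustrative: the thresholds, function names, and the mapping from category to recommendation are my own assumptions (the real SPES model is not public), and only three of the nine predefined deviation reasons are named in the sources cited here.

```python
# Hypothetical sketch of the decision flow described in the text.
# All thresholds and names are invented for illustration.

def classify(risk_180d, low_cut=0.7, high_cut=0.3):
    """Map an estimated risk of unemployment within 180 days to one of
    the three prospect categories. (The real tool also weighs the time
    of enrollment; that is omitted here.)"""
    if risk_180d >= low_cut:
        return "low"       # far from the labor market
    if risk_180d <= high_cut:
        return "high"      # close to the labor market
    return "mediocre"

def suggests_training(category):
    # Only the middle group is routed to procured training; the other
    # two are presumed either self-sufficient or in need of more
    # in-depth support from the SPES itself.
    return category == "mediocre"

# Three of the nine predefined reasons for deviating from a "YES"
DEVIATION_REASONS = {"pregnancy", "imminent education", "certain disabilities"}

def final_decision(tool_says_yes, deviation_reason=None):
    if not tool_says_yes:
        return False  # no formal alternative to following a "NO"
    if deviation_reason in DEVIATION_REASONS:
        return False  # predefined exception: cannot participate
    return True
```

The sketch makes the asymmetry visible: a “NO” is effectively final, while a “YES” can only be overridden for reasons fixed in advance.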
The ADM tool is a neural network model designed by engineers at the SPES. The model was initially introduced via a 1-year pilot project in a limited number of regions and was then implemented in all regions. The ADM tool is intended to rapidly and continuously make decisions about job support and to efficiently provide companies with jobseekers. However, scholars have argued that one of the main problems with neural network models is their lack of transparency (Rudin, 2019). Such models are often described as black-box machine learning due to the inherent difficulty of interpreting their decisions. In order to make decisions explainable, a second model is created to accompany the first and explain its output in more communicative ways. Rudin has stressed that it is more important to create and use inherently interpretable decision-making models than to add new models to interpret the black box (Rudin, 2019). Furthermore, neural network models are known to be unstable in the sense that a small change in input can have a large effect on output. For example, two jobseekers with very similar backgrounds, demographics, and so on can receive markedly different assessments.
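The instability point can be illustrated with a deliberately simple stand-in model. The single “neuron” below, with invented weights and a steep sigmoid, is not the SPES model; it only shows how a near-hard decision boundary lets a tiny input difference flip the outcome.

```python
import math

def toy_risk_score(features, weights, bias, steepness=1000.0):
    """Hypothetical single-neuron risk estimator. The steep sigmoid
    mimics a near-hard decision boundary around a weighted sum."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-steepness * z))

weights, bias = [0.5, 0.3, 0.2], -0.5  # invented parameters

# Two jobseekers whose feature vectors differ only marginally
risk_a = toy_risk_score([0.5, 0.5, 0.49], weights, bias)  # just below the boundary
risk_b = toy_risk_score([0.5, 0.5, 0.51], weights, bias)  # just above it

# With a 0.5 cut-off, the two near-identical profiles receive
# opposite classifications.
```

Real neural networks are far more complex, but the same effect arises wherever a continuous score is thresholded into discrete categories near a steep region of the model.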
Empirical material
To capture the overarching interpretations of the relationship between the principle of legal certainty and ADM at the SPES, the present empirical analysis drew upon a wide range of empirical material, such as interviews, policy documents, published reports, and working materials. Together, the empirical material created an overview of the dominant assumptions about this relationship, as well as its embedded contradictions and potential consequences for legal and social relations. Fifteen interviews with managers, qualified strategists, and officials at the SPES at both national and regional levels were conducted in 2021. Each interview was approximately 45 to 90 min long. Thirteen interviews were conducted via Zoom, and two over the telephone. All the interviews were transcribed and translated from Swedish into English. The interviews involved questions about the background and process of the introduction of the ADM tool as well as questions about the qualitative difference between the ADM tool and the previous decision-making procedure. Furthermore, the interviews involved questions about the correspondence between ADM and public principles such as equality, efficiency, and legal certainty. The same questions were asked of all respondents, who were evenly divided between the managerial-strategic level and the level of public officials.
In total, the text material consisted of 16 published policy documents, reports, and other working materials. The text material was mainly written and published by the SPES, but some of the reports were published by other state auditing actors, such as the Institute for the Evaluation of Labor Market and Education Policy (IFAU) and the Swedish government’s official reports (SOU). Additionally, one publicly available podcast interview with a top manager at the SPES conducted by a tech consultant company was also transcribed and analyzed. The policy documents were used to analyze the SPES’s official descriptions of the ADM tool and its correspondence to public principles. Since the focus of this article is on interpretations of legal certainty in the practical use of ADM, the analysis places the greatest emphasis on the interviews.
Research design and methods
I applied a case-study research design (see Yin, 2003), with the SPES serving as an empirical example of how the principle of legal certainty is interpreted during the practical use of ADM in a welfare service organization. As the Swedish government is seeking to become “the best in the world at seizing the opportunities of digitalization” (Ministry of Enterprise and Innovation, 2018: 6, my translation) and has invested heavily in implementing digital techniques in the public sector, Sweden is a fruitful case to explore. Furthermore, the SPES is among the Swedish authorities that have made the most progress in applying ADM techniques. Hence, using a rich body of empirical material from the SPES case has contributed to capturing the particularities of interpretations of legal certainty and the challenges and opportunities that seem to be associated with legal certainty in ADM systems within the welfare services.
The empirical material was coded according to the following three analytical questions: (1) (How) is ADM affecting decision-making processes at the employment service? (2) (How) is ADM perceived by public officials to be legally certain? (3) What are the theoretical prerequisites for these perceptions to be realized in the practical use of ADM? The three analytical questions also structure the results section. In order to answer them, I used qualitative interpretive methods inspired by ideal-type methodology (Gerhardt, 1994; Weber, 1978). Interview questions about the process of introducing the ADM tool and its differences from the previous decision-making procedure related to analytical question 1, while interview questions about the relationship between the ADM tool and public principles related to analytical questions 2 and 3. The analysis was conducted in two steps. Starting with the three theoretical definitions of the concept of legal certainty—i.e.,
Results
From individual to general assessments—ontological transformations of decisions
The function of the ADM tool is to calculate a jobseeker’s prospects on the job market in terms of a statistically estimated risk of unemployment. One manager at the SPES described the ADM tool in a podcast in 2020 as follows:

[It is] an AI-based profiling tool that includes over a million historical jobseekers and what their profiles looked like. For each single newly registered jobseeker, we match that jobseeker to our AI-based profiling tool to determine how far this jobseeker is from getting a new job. (SPES manager 1, SPES, interviewed on the podcast
According to published reports from the SPES, the precision and accuracy of the profiling tool—i.e., the correctness of the calculation of a jobseeker’s prospects—are better than those of manual assessments performed by officials. Internal studies by the SPES have shown that the ADM tool has 68% accuracy, while officials had only 50% accuracy (see SPES, 2021a: 9). According to another manager at the SPES, this is not a surprising result. He explained it in the following way:

If we have an employment official who meets a jobseeker who talks about his or her life, the official can have a giant chart, calculate in Excel, and compare the information with a lot of data. By the time the official is finished, it will have taken several weeks to get the results about the jobseeker’s prospects. Now, instead, we’ve built a machine that does this at lightning speed and displays job prospects right away. (SPES manager 2, interview)
However, this comparison between the ADM tool and human officials assumes that the tool and the manual assessment perform the same kind of assessment. According to officials, this is not actually the case. While the ADM tool provides a statistical estimate of the risk of unemployment within 180 days, the manual assessment includes more personal information about the individual’s need for support and their own hopes for the future alongside the information provided about education, work experience, gender, age, etc. While the ADM tool generates a general statistical assessment, officials described losing the ability to exercise individual judgement:

I cannot and must not make my own assessments any longer, which is unlike how it was before, when I could make individual assessments on a case-by-case basis. […] If I were to put a label on my own professional role, I’m more of an administrator today, I would say. (Employment official 1, interview)
The parameters upon which the ADM tool is based affect the kind of assessment that can be made. As another official observed: “There is a lot of information about a person that is not visible in all of these parameters.” This indicates that the aim of the assessment has been transformed and that “the assessment is not fully a labor-market assessment based on needs” anymore (Employment official 2, interview). According to an internal SPES study, the ADM tool makes incorrect statistical calculations of the risk of unemployment in 32% of cases overall, and produces even more incorrect decisions when the decision is negative, especially in cases where the jobseeker is a vulnerable individual (SPES, 2021a: 9). However, according to officials, the larger problem with the ADM tool relates to the shift from individual assessments to general assessments (see Eubanks, 2019). This is the theme of the next section.
Incorrect but legally certain decisions?
Interviewer: Is it your opinion that the decisions of the ADM tool are of good quality?
Employment official 1: No, it is not.
The employment officials interviewed in this study described the ADM tool’s decisions as being incorrect relatively often. This was also highlighted in an external audit, which showed that all of the officials interviewed for the report had stressed that ADM decisions often conflicted with their own professional judgements, thereby negatively affecting their perceptions of the predictability of decisions (IFAU, 2021: 7). One official interviewed in my study painted the following picture:

It’s a very simple tool for us to use. The assessment itself is very fast. You get a result immediately. However, I can also see limitations […]. For example, sometimes, there are jobseekers who need other kinds of support, and it’s difficult for a computer to take that into account because it has a preset program that it runs. It knows what parameters it should base its assessment on, for example, age, place of residence, and gender, but that doesn’t say anything about the jobseeker’s own ability or needs. (Employment official 3, interview)
The ADM tool has been described by employment officials as inflexible. This problem of inflexibility was also stressed in an internal SPES report, which stated that, although the profiling tool achieves greater accuracy than human officials when calculating the risk of unemployment, it still lacks the ability to make complex judgements (SPES, 2021c: 7). According to one employment official at the SPES, the problem of algorithms not being able to capture important yet unprogrammed personal information becomes even greater due to the restrictions placed on officials that limit their freedom to deviate from the ADM tool’s decisions: “There have been fairly clear instructions that personal interpretations should be absolutely as minimal as possible” (Employment official 5, interview).
The lack of professional discretion, in combination with the experience that the profiling tool sometimes makes incorrect decisions, was noted as a cause of frustration among officials (see also Allhutter et al., 2020; Andreassen et al., 2021). About this, one interviewee said the following:

… sometimes [the ADM tool] makes completely insane decisions. I meet people who have a university education and relevant work experience and yet the robot says that this person is too far from the labor market and should not be assigned [job support]. And I wonder, how can the robot come up with that result? That is a completely insane result that the robot sometimes produces. It can be the opposite as well—the second category of people who are basically uneducated and lack work experience, and the robot says this person should not be assigned [job support] because this person is too close to the labor market. I had never made such mistakes. I had never never never never never ever done that. I can honestly say that. We all make mistakes and misjudgments in both private and professional life. I’m not infallible. We all make mistakes. However, I can honestly say that I have never made that kind of mistake and will never make such strange assessments in my professional practice. (Employment official 1, interview)
Interestingly, while the officials interviewed in both my study and the external audit report stated that the ADM tool has shortcomings linked to the quality of its decisions, most of them nevertheless believed that the ADM tool generates legally certain decisions (see, e.g., IFAU, 2021: 7). The dialogue below illustrates this:

Employment official 4: Sometimes, there are very strange assessments, I think.
Interviewer: Can you see any benefits of the technology?
Employment official 4: Yes, I think it contributes to us working more equally. It’s more legally certain for all jobseekers. Everyone gets the same kind of assessment.
Hence, the standard for what is perceived as fair and legally certain is here determined on the basis of the treatment of jobseekers as a group, not primarily on the basis of each individual jobseeker’s needs and rights in relation to public laws and regulations (see Eubanks, 2019). The decision-making procedure is perceived as predictable, but the decision itself is perceived as unpredictable. This raises questions about how the principle of legal certainty is defined, and what equal treatment really means.
Interpretations of legal certainty
Officials are not the only people claiming that the ADM tool is more legally certain than manual decisions. An internal report stated that automated decision-making, in addition to increased efficiency (SPES, 2019b: 3; SPES, 2021c: 7), also aims to increase legal certainty (SPES, 2019a). One reason why legal certainty is considered to be strengthened by the ADM tool is precisely the fact that human judgement is limited: “A problem for equivalence and legal certainty in the processing of cases is, among other things, a strong personal dependence” (SPES, 2021c: 11, my translation). Human decisions are considered to increase the risk of discrimination and unfair treatment, while the ADM tool is more accurate and impartial and generates decisions that create more equality and uniformity among various local SPES offices: “The digital assessment is more objective, equivalent, and uniform than a manual assessment” (Qualified strategist 1, interview). This perception is present at all levels of the SPES. One qualified strategist described the main benefits of the ADM tool as follows: When you perform an automatic assessment, you get uniformity because everyone goes through the same template. When a human being makes the assessment, the human factors are already built in, and you make an assessment based on your own background, based on your preferences, your prejudices. […] So, [the ADM tool] is beneficial for efficiency and legal certainty, I would say. (Qualified strategist 2, interview)
Employment officials argued similarly. One of them reasoned as follows: [The ADM tool] makes it similar for everyone, you go for a template, a square template, which means that, as an individual official, I can’t make any decisions of my own. The process will be the same for everyone, whether the jobseeker talks to me or another official with different experiences and starting points. (Employment official 2, interview)
In the empirical material, the interpretation of legal certainty is centered around its formal and procedural aspects. However, some officials also stressed its material dimension, pointing to the need for individual assessments: I believe that the possibility of individual solutions must exist in order for the decision to be legally certain. If someone has very specific, individual problems or a complex situation, there must be the possibility for an authority to take that into account. (Employment official 5, interview)
This interviewee suggested that decisions made by the SPES must correspond to the rights and obligations formulated in regulations and individual action plans in order to be fully legally certain (see also SOU 2018: 25: 160). This perception is in line with the theoretical definition of material legal certainty, according to which decisions must substantively correspond to the rights and obligations laid down in law and regulation.
Within the SPES, legal certainty is often understood as being promoted by the ADM tool’s support of equal treatment and impartiality, and these principles are, in turn, promoted by the lack of subjective manual assessments. While some interviewees were critical of this negative view of human decision-making, arguing that most officials at the SPES are genuinely willing to help and support jobseekers by providing equal and fair treatment, external audit reports have shown that historically there has been a certain degree of maltreatment, mainly of female jobseekers. Among other things, it has been stated that women have received less support than male jobseekers (SOU 2022: 4). Paradoxically, the ADM tool is assumed to remedy this problem even though, as described above, it is trained on historical data about previous jobseekers, including previous manual decisions on support. Thus, rather than the ADM tool making independent, neutral, and impartial decisions, its training is partially based on potentially discriminatory decisions made historically (see Eubanks, 2019).
The potential problem of the ADM tool reproducing inequality and discrimination has not received significant attention within the SPES. On the contrary, it has been assumed that the tool will help to ensure impartiality and equal treatment. During the interviews, questions about this paradox were raised, and the answers revealed that qualified strategists and managers, who are responsible for developing and implementing the ADM tool, are unsure about how to tackle problems of discrimination. Below are two examples: Interviewer: If you build the technique based on previous manual decisions, does it reduce the risk of discrimination? Manager 3: Yes, I would definitely say that. Or, rather, if we build in something that can be discriminatory, at least it affects everyone [laughter]. Interviewer: If we know that there have been problems with discrimination in previous manual decisions, how do we know that this information is not affecting the new decision-making system? Manager 2: Well, I guess you’re right. If historical decisions are used and they’ve been made with some element of discrimination, then the data is affected a bit. […] But eventually, this will be washed away, but now we’re talking about things I don’t know much about.
Accordingly, there is a risk that the ADM tool is being trained on discriminatory mechanisms, but this is not widely considered within the SPES. However, there are internal reports stressing accuracy problems related to discrimination. For example, the accuracy of the ADM tool is lower when the decision about support is “NO.” Thus, at the same time as the opportunity for officials to deviate from the automated decision is very limited, some individuals are wrongly denied support, and the risk of receiving an incorrect negative decision has been shown to be higher for individuals with disabilities or individuals not born in Sweden (SPES, 2021a: 9; SPES, 2022).
There also seems to be a lack of information regarding technical aspects of the ADM tool among qualified strategists, managers, and officials. Some of the interviewees described the tool as a “black box”—that is, difficult or impossible for humans to fully understand. One qualified strategist explained this as follows: This black box…you don’t really know why it’s come up with a certain decision. […] I think the instructions to officials say that they can attempt to explain what parameters are entered into this machine. However, they still can’t explain exactly why this decision was made in [a specific] case, because the decision is based on
Consequently, the experience of the technology as a black box affects its predictability in individual cases. To cope with this, officials have developed strategies to avoid confrontational situations with jobseekers: Employment official 3: It feels like there’s a lot going on behind the scenes and not everyone has access to this information. I absolutely think it is a black box. Interviewer: So, how do you cope with that in your communication with jobseekers? Employment official 3: If jobseekers ask why they get a certain decision, then I usually try not to make it too complicated. I usually simply say that “based on [these parameters], I think this will be a good decision for you.” So, I try to put it as superficially as possible and not go into detail, because I feel that I risk getting questions I won’t be able to answer. I don’t want to mess it up too much but, rather, to keep it short.
Due to the lack of technological transparency, communication between officials and jobseekers becomes limited, which means that the participation and influence of jobseekers decreases. This has been acknowledged by officials at all levels of the SPES. One manager explained that “the participation of jobseekers is non-existent in the assessment.” With this statement, the interviewee is outlining something they consider to be positive for the quality of decisions: “In order to get as good an assessment as possible, it should be based on actual data that we actually know and not on what we ask the person” (Manager 3, interview). However, officials at the local level were more skeptical about the lack of participation. One of them said: The group which we know has difficulty in the labor market, those with a lower level of education and people who are foreign-born, with limited Swedish. This system is not adapted for them at all. They need someone to talk to. (Employment official 2, interview)
The limited communication with and participation by jobseekers can also be considered a way of limiting the individual assessments in decision-making.
Based on the empirical findings on interpretations and definitions of legal certainty at SPES, the next section will discuss the theoretical prerequisites for these to be realized in the practical use of ADM.
The realization of legal certainty in ADM
As scholars and policymakers disagree over whether ADM is beneficial for legal certainty (see Al-Mushayt, 2019; Civitarese Matteucci, 2021; Coglianese and Lehr, 2019; Eubanks, 2019; Gryz and Rojszczak, 2021), the aim of this study is to analyze how the principle of legal certainty is interpreted and defined during the practical application of ADM in welfare services and to discuss the theoretical prerequisites for these definitions to be realized in ADM processes. The general interpretation and definition of legal certainty found in the empirical material gathered for this study and used in the legitimation of ADM is rooted in an emphasis on its formal and calculative procedural aspects. However, based on the theoretically derived definitions of the rule of law and legal certainty, the possibilities of realizing legal certainty in practical ADM processes are limited in terms of both formal and material aspects. From a procedural perspective, the decision-making process can be regarded as uniform in terms of a “same procedure for all” approach. Nevertheless, substantive and material aspects are downplayed when individual assessments are replaced with general statistical assessments that are too often incorrect, and formal legal certainty is limited because decision-making steps are perceived as lacking transparency. In this sense, the decisions are neither predictable in relation to the realization of legal formulations, nor clearly and fully communicated.
Jobseekers with a complicated personal situation risk being incorrectly profiled due to the ADM tool’s lack of ability to make sufficiently individual assessments. The decision-making procedure may be uniform or “similar” and, in that sense, equal, but it generates decisions that can be questioned on the basis of the SPES regulation requiring individual action plans; the Administrative Procedure Act, which states that the principle of legal certainty is paramount over efficiency; and the Instrument of Government Act, which states that public institutions should promote individual welfare, including the right to work and employment. Theoretically, predictability within the rule-of-law context requires that citizens can predict the legal consequences of their actions. This prerequisite is challenged insofar as the ADM tool has been trained on historical data—that is, the legal consequences of a jobseeker’s actions and situation have become partially detached from the individual and are primarily related to data pertaining to other jobseekers.
Conclusions
While the implementation of ADM by welfare institutions has been encouraged based on the assumption that it strengthens public and democratic principles, this study has shown that principles such as legal certainty, impartiality, and equal treatment are given narrow empirical definitions, and the practical conditions necessary for broader definitions to be realized are theoretically limited. Widespread future use of the type of ADM model examined in this case study is very likely to affect how welfare rights and obligations are defined and produced in a new technological era. This, in turn, is likely to have future implications for relationships among public institutions, citizens, and the market (Carlsson and Rönnblom, 2022). Despite the political complexity of labor-market and unemployment policy (Peters, 2021) and the lack of detail in the regulations, the proponents of ADM define unemployment rights and obligations from an instrumental and rational point of view.
Calculability, together with predictability, is a core value of western law (Weber, 1978), and in the case of decision-making at SPES, the accuracy and efficiency of the ADM tool’s statistical calculations is stressed as the most beneficial mechanism affecting legal public principles. However, this predictability tends to weaken due to the lack of technological transparency and flexibility. Hence, there is a tension between the meaning of predictability and the meaning of calculability, as well as a potential shift toward a higher degree of calculability and a lower degree of predictability in welfare ADM decision-making. The implementation of ADM thus indicates an intensified instrumental view of legal principles and a strengthening of western legal rationalization in which legal thought leans heavily on the value of calculability. This is important because, in this sense, the ADM can be considered an expression of changing epistemological beliefs in welfare decision-making. While individual assessments derive from qualitative professional knowledge acquired through education and work experience, general assessments are derived from knowledge defined in terms of statistical calculations. The new ways of producing a knowledge base for decision-making are beneficial to the value of rational calculability but disadvantageous in relation to analytical considerations and a comparable and equitable balancing of information. According to the results of this study, the latter kind of knowledge has proven to be essential for predictability in formal and material legal certainty. A greater degree of calculability in conjunction with a narrower definition of legal certainty might constitute a new reality for the future digital welfare state where, in addition, neither managers nor officials, i.e., the users of ADM, are fully capable of explaining the technological output. 
For officials, the strengthening of calculability in decision-making means that professional judgement and discretion are circumscribed, and sometimes even undermined. Furthermore, due to the weakening of predictability in decisions, there is a need for refined and clearly defined accountability relations. Since officials have limited influence over ADM decisions, they may not be held accountable to the same extent as in previous decision-making procedures.
Unemployment policies in capitalist welfare states aim to mitigate market-related problems. Still, changes in the organization and performance of unemployment policies may involve shifts in the purpose of employment services. As stated by Deflem (2020: 159), “welfare has granted effective rights to those whom the market has left behind. Yet, on the other hand, welfare laws have come about under a specific form that inherently favors the market (and the state).” The results of this study indicate that calculability at the expense of predictability is likely to weaken the protection of individual welfare in favor of the efficiency of welfare service processes. Such efficiency is expected to serve the economy and productivity of welfare institutions, as well as the private companies procured to undertake welfare service activities. However, based on the results of this study, such a change may be specifically disadvantageous to vulnerable individuals, who run the risk of being incorrectly denied welfare support while at the same time having a greater need for such support.
Acknowledgements
The author would like to thank the anonymous reviewers and Alexander Berman for helpful and constructive comments.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Marianne and Marcus Wallenberg Foundation grant number 2018.0116.
