Abstract
The editorial sets the stage for the special issue on algorithmic transparency in government. The papers in the issue bring together transparency challenges experienced across different levels of government – the macro-, meso-, and micro-levels – highlighting that transparency issues transcend levels of government, from European regulation to individual public bureaucrats. With a special focus on these links, the editorial sketches a future research agenda for transparency-related challenges. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others. Finally, this introduction presents an agenda for future research, which opens the door to comparative analyses and new insights for policymakers.
Keywords
Introduction
Previous research shows that machine-learning technologies and the algorithms that power them hold huge potential to make government services fairer and more effective, and to ‘free’ decision-making from human subjectivity (Margetts & Dorobantu, 2019; Pencheva et al., 2018). Algorithms today are used in many public service contexts, from health care to criminal justice. For example, within the legal system algorithms can predict recidivism better than criminal court judges (Kleinberg et al., 2017). Such changes have a profound impact on public administrations as they increasingly use algorithms for decision-making and service delivery. Scholars have argued that the introduction of algorithms in decision-making procedures can cause profound shifts in the way bureaucrats make decisions (Peeters & Schuilenberg, 2018; Van der Voort et al., 2019), and that algorithms may affect broader organizational routines and structures (Meijer & Grimmelikhuijsen, 2021).
At the same time, critics highlight several dangers of algorithmic decision-making. They argue that algorithms can produce decisions or recommendations that disproportionately harm disadvantaged groups and that their predictions are often simply inaccurate (Boyd & Crawford, 2012; Barocas & Selbst, 2016; Giest & Samuels, 2020). For example, several local governments in the Netherlands implemented a system that used a machine-learning algorithm (Systeem Risicoindicatie, SyRI) to detect potential welfare fraud amongst benefit recipients. Privacy activists objected to this system, and in 2020 a court ruled it to be ‘discriminatory’ and ‘not transparent’. Vulnerable citizens, often with a migrant background, had been unfairly profiled and suspected of welfare fraud (Huisman, 2019).
A major issue underlying this machine bias is the lack of algorithmic transparency (Janssen & Van den Hoven, 2015; Kroll et al., 2016). Algorithms – and the datasets that feed them – tend to suffer from a lack of accessibility and a lack of explainability (Lepri et al., 2018). Algorithms used in government decisions are often inaccessible because they are developed by commercial parties that consider them intellectual property (Mittelstadt et al., 2016). Machine-learning algorithms, moreover, produce decisional outcomes that may not make sense to people (Burrell, 2016). This so-called ‘black box problem’ of algorithmic decision-making makes biased or unfair algorithms hard to scrutinize and contest. This is problematic because when public servants make decisions based on non-transparent algorithms, they run a very real risk of making (unintentionally) biased decisions, which could erode public trust in government (Whittaker et al., 2018). In short, explainability and accessibility are prominent challenges for automated decision-making when tasks are delegated to complex algorithmic systems (Ananny, 2016; Sandvig et al., 2016; Giest, 2019; Dencik & Kaun, 2020).
To date, various conceptual pieces have been published that highlight the potential impact of algorithms on the way government works (e.g. Agarwal, 2018; Pencheva et al., 2018) and on the decision-making of individual bureaucrats (Young et al., 2020). At the same time, these authors call for more transparent and accountable algorithms (e.g. Busuioc, 2020). It is time to complement these critical reflections and conceptual analyses with empirical research in the field of public administration to gain a better understanding of how algorithms and algorithmic transparency affect governmental routines and decisions.
Thus, scholars of public administration need to ‘get up to speed’ with the rapid rise of algorithms in government decision-making. Such analyses can be sobering and indicate that some of the more sweeping claims surrounding algorithms need a reality check. For instance, Van der Voort et al. (2019) highlight, using case study research, that data analysts who provide algorithmic predictions have limited influence and are often simply bypassed by policy makers. However, how algorithmic transparency affects public administration – although it is often touted as important – is rarely investigated.1
1. There have been (empirical) investigations, but so far these have not been applied in a public sector context (e.g. Kizilcec, 2016).
This special issue on algorithmic transparency addresses this imbalance and presents six excellent contributions that sharpen our conceptual and empirical understanding of algorithms in government. Furthermore, the issue includes a very insightful book review of Yeung and Lodge’s edited volume Algorithmic Regulation (2019). The articles vary in their theoretical perspectives and methodologies, but overall they show the importance of a multi-level perspective on algorithm use in public administration.
In this introductory article we set the stage for the special issue on algorithmic transparency in government. From a bird’s-eye view there are two main take-aways from the six contributions to this special issue. Firstly, they paint a nuanced and perhaps slightly more optimistic picture of algorithmic decision-making and transparency in government. Algorithmic decision-making does not have to be opaque and does not unequivocally lead to diminished decisional discretion. Machine-learning algorithms can even be enablers of transparency in government decisions. Secondly, the articles in this issue bring together transparency challenges experienced across the different analytical levels of government, namely the macro-, meso-, and micro-levels. This highlights the importance of looking at government algorithm use from various levels in order to uncover the connections between national contexts, institutional structures, and organizational policies, as well as the behavior of individual bureaucrats. With a special focus on these findings, this editorial sketches a future research agenda for transparency-related algorithmic challenges.
The current landscape of algorithmic applications in government is spread across different fields of research, ranging from legal dimensions related to European GDPR regulations (e.g. Goodman & Flaxman, 2017; Kaminski & Malgieri, 2020) to social-psychological aspects surrounding individual agency on the receiving end of automated decisions (e.g. Wagner, 2019; Gran et al., 2020). These publications are often scattered across separate journals and fields. In this issue, we offer a view on the different levels involved in these dynamics. The articles address transparency challenges of algorithm use at the macro-, meso- and micro-level. This distinction is a common way to classify research in public administration and management and denotes the specific level of analysis of a study (e.g. Jilke et al., 2019; Roberts, 2020). Building on this work, the distinction can be linked to algorithms and algorithmic transparency as follows:
The macro level describes phenomena from an institutional perspective: which national systems, regulations and cultures play a role in algorithmic decision-making? For example, Peeters and Schuilenberg (2018) analyze how predictive algorithms transform existing norms and rules in the justice system.

The meso level mostly pays attention to the organizational and team level: what kind of organizational and group strategies are developed, and how do government organizations deal with algorithmic transparency in general? One example is a chapter by Meijer and Grimmelikhuijsen (2021) on “algorithmization”, in which the authors argue that the introduction of a new algorithm in a government organization triggers various dimensions of change in organizational policies, structures and informational relations.

The micro level focuses on individual attributes, such as beliefs, motivation, interactions and behaviors: how do individual street-level bureaucrats use algorithms, and do transparent algorithms lead to different or better decisions by these bureaucrats? For instance, Zouridis, Van Eck and Bovens (2020) study the algorithmic discretion of street-level bureaucrats and find that discretionary power does partially shift to software and IT developers, although this depends on the level and type of automation in the specific decision-making context.
All three levels of analysis are covered in this special issue on algorithmic transparency. We argue that (1) there are specific transparency challenges at each level and (2) future research needs to look into the interactions between the three levels.
Macro-level contributions
The macro-level describes algorithms and algorithmic transparency in a European and national regulatory context.
First,
Secondly,
Meso-level contributions
For the meso-level, we include organizational and managerial strategies with regard to algorithmic transparency.
The article by
Micro-level contributions
Finally, the micro-level addresses behavioral dimensions of bureaucrats working with algorithmic systems.
The article by
Another article that focuses on the micro-level dynamics of algorithmic systems is by
Finally, the book review by
Table 1 highlights the transparency challenges and future research questions in line with the levels identified.
Table 1. Overview of contributions specified by level of analysis
The contributions in this special issue shed light on different dimensions of algorithmic transparency, ranging from the regulatory context at the European and national level to organizational policies and the discretionary spaces of individual bureaucrats. What becomes obvious in this discussion is that the macro-, meso-, and micro-levels are linked and need to be viewed together to achieve more transparency in government algorithm use. For instance, bureaucrats’ behavior is determined not only by the algorithmic tools they work with, but also by the regulatory and organizational context, as well as by their personal experience and the training provided.
Overall, these contributions paint a nuanced and perhaps more optimistic picture of algorithmic decision-making and transparency in government. Processes of algorithmization do not have to lead to opaqueness, nor do they have to lead to less discretion and programmed bias. Algorithmic tools can even be used to bring more transparency to otherwise unclear decision processes. Furthermore, the authors in this issue offer ways forward to improve algorithm use and increase transparency. Here, it is important to address these issues at all levels of analysis if we are to provide accountable oversight of algorithmic decision-making and to use algorithms in the most effective way. Building on the contributions in this special issue, we identify key areas for improvement in Fig. 1.
Fig. 1. Areas for improvement of algorithmic transparency and decision-making.
The contributions in this special issue also yield new important research questions. We argue that future research should focus on two broad areas. First, we need to flesh out the effects, dynamics and determinants of algorithmic decision-making and transparency at all levels of analysis, in order to get a better understanding of how algorithms have evolved in government and public service settings. Important questions are:
What are the enabling and constraining factors in the regulatory and institutional environment (macro) surrounding the deployment of algorithms? Which organizational tools and strategies will enable proper scrutiny of algorithmic transparency (meso)? What behavioral tendencies affect algorithm use, how does algorithmic transparency trigger the desired behavioral response amongst bureaucrats, and how will it affect citizen attitudes towards government (micro)?
Secondly, the contributions in this issue indicate that the dynamics between the three levels need to be better understood. For instance, we know that individual behavior is constrained and enabled by the regulatory context and existing administrative norms. Future research could look into how these macro-level factors shape the use of algorithms. At the same time, micro-level decisions will slowly but surely cause shifts at the macro level. For instance, algorithmic transparency in government decisions will lead to different expectations of algorithm usage amongst bureaucrats and citizens. These expectations may result in new norms and changes to regulatory frameworks. How and whether such shifts take place is subject to debate and needs further empirical investigation. Figure 2 highlights the potential relations among all levels.
Fig. 2. Studying algorithms and algorithmic transparency from multiple levels of analysis.
These research questions provide ample material for future research, raising questions about the interplay of regulatory and organizational contexts as well as the intersection of bureaucrats and algorithms, where research on built-in transparency mechanisms is lacking. In some ways this resembles research on government transparency in general: scholars find different dynamics and effects of government transparency depending on the institutional and policy context (e.g. Meijer, 2013; De Fine Licht, 2014; Grimmelikhuijsen et al., 2020). A similar contextual approach is likely needed to understand algorithmic transparency. Ultimately, the costs and benefits of transparency will vary across policy fields, types of algorithmic or AI decision-making, and the organizational features in which such systems are embedded. To promote the responsible and transparent use of algorithms, future research should look into the interplay between these different levels and contexts.
It is clear that the use of algorithms in the delivery of government services is an ongoing development that will expand in the coming years, and it is critical that these developments are embedded in democratic norms of accountability, trust, oversight and transparency. To do so, we need to analyze algorithms from multiple levels.
Acknowledgments
We would like to thank the editors-in-chief of Information Polity for making this special issue possible in the pages of the journal. We are also grateful for their guidance and feedback along the way. Furthermore, thank you to all the reviewers who provided excellent and elaborate feedback. Their time and effort in these already stressful and busy times is very much appreciated.
