Abstract
Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the transparency requirement in European data protection law and its ethical underpinnings, then examine the contextual and organizational limits of an information- and explanation-based view of transparency, and finally propose a relational understanding of transparency grounded in trustworthiness.
Keywords
Introduction
Increasing attention is given to artificial intelligence (AI). While the term AI is difficult to define, the core concerns linked to AI are connected to automated decision-making processes: decisions that are delegated to a machine or system (AlgorithmWatch, 2019). With the rise of automated decision-making systems (Amoore, 2018), transparency has become a key topic (Burrell, 2016; Pasquale, 2015). While traditional algorithms might already have challenged the notion of transparency, particularly among non-experts, AI systems relying on deep learning allow processes to run largely independently of human control (Alpaydin, 2016; Zerilli et al., 2018). As it becomes unforeseeable how such processes reach decisions, the intuitive wish for prospective and retrospective transparency arises. Prospective transparency informs users about the data processing and the working of the system upfront: it describes how the AI system reaches decisions in general and can therefore be seen as an accountability mechanism (Zerilli et al., 2018). Retrospective transparency, on the other hand, refers to post hoc explanations and rationales (Paal and Pauly, 2018). It reveals, for a specific case, how and why a certain decision was reached, describing the data processing step by step. Retrospective transparency includes the notions of inspectability and explainability: for an algorithmic decision-making system to be retrospectively transparent, one should be able to inspect its “internals,” decompose a decision to understand the structure and weighting within the system, and ultimately explain the decision. Retrospective transparency is therefore important for audit purposes.
The goal of the article is to scrutinize the topic of transparency in AI systems from an integrated interdisciplinary perspective. While we acknowledge the growing research interest in this field and the many contributions made in recent years (Miller, 2019), our contribution provides value by synthesizing and integrating the literature across research areas, including legal, ethical, and social science perspectives. More concretely, we integrate the findings of data protection law, law and technology, robot ethics, information ethics, social media research, and human–computer interaction (HCI). By doing so, we can show tensions but also potential synergies in how transparency is approached across disciplines. This gives us the opportunity to bring different communities into conversation with each other.
Our paper is organized in a way that loosely follows a dialectical approach, with a thesis that presents an explanation- and information-based view on transparency in AI, as implemented in the General Data Protection Regulation (GDPR). The antithesis takes a critical approach towards the explanation- and information-based view of transparency. Finally, we attempt to align the information-based view with some of the critiques it has received in a synthesis that calls for a relational approach to the study of transparency in AI.
The article is structured into four sections. Following the introductory remarks, in the next section (“Transparency in data protection law”) we explore how transparency is understood in data protection law. We show how certain ethical considerations, based on autonomy and informed consent, are implicit in data protection law. Given the current debate about the right to reasonable inferences in the context of the GDPR (Wachter and Mittelstadt, 2019), the legal analysis focuses strongly on the European context. This section, which describes the framework for transparency in AI, at least in most parts of Europe, is then contrasted with the messy reality of transparency in practice. The following section (“The limits of transparency for AI”) then explores the variety of contextual factors that transparency measures for AI need to take into account. Based on a review of research in HCI, it addresses considerations regarding the wider social embeddedness of transparency, highlighting the limitations of transparency-as-information or -explanation in an increasingly datafied world. We continue in section “Transparency as a relational concept” by integrating the information- and explanation-based view of transparency with the critical context-sensitive view, by means of understanding transparency relationally. We propose to understand transparent information provision as an act of communication between technology providers and users, where assessments of trustworthiness based on contextual factors mediate the value of transparency communications to the user. A short section with recommendations for future research on transparency in AI and for policy concludes the article.
Transparency in data protection law
Transparency in European data protection law
The origins of the transparency requirement in data protection law date to the 31st International Conference of Data Protection and Privacy Commissioners held in Madrid in November 2009, in which the importance of transparency to protect individuals’ privacy was acknowledged. After being included in the proposal for the GDPR in 2012, the transparency principle made its way into the binding GDPR. Today, transparency is a core principle enshrined in Art. 5(1)(a) of the GDPR, which states that personal data must be “processed lawfully, fairly and in a transparent manner in relation to the data subject,” thereby illustrating the close connection between transparency, lawfulness, and fairness. Art. 5(1)(a) of the GDPR, as the first of the core principles of data processing, is a “catch-all” provision, which will typically be called upon as a means of last resort if more concrete principles are not applicable in a specific scenario. Failing to adhere to it can be punished with steep fines (cf. Art. 83 of the GDPR).
Prospective and retrospective elements of transparency
Transparency, as understood under Art. 5(1)(a) of the GDPR, includes both a prospective and a retrospective element (Paal and Pauly, 2018). First, prospective transparency is realized through the information duties of Arts. 13 and 14 of the GDPR, which require data controllers to inform data subjects about the envisaged data processing before it takes place.
Second, data protection law includes a retrospective element in the form of the data subject’s right of access under Art. 15 of the GDPR, which allows individuals to obtain information about whether and how their personal data has been processed after the processing has taken place.
From a legal perspective, one question has been if a right to explanation can be inferred from the wording of Arts. 13(2)(f) and 15(1)(h) of the GDPR. These articles state that meaningful information about the logic involved as well as the significance and the envisaged consequences of such processing must be provided to the data subjects at least when such decisions produce legal effects on them or significantly affect them (cf. also Rec. 71 and Art. 22 of the GDPR). The wording “meaningful information about the logic involved,” “significance of the consequences,” and “envisaged consequences” is, if anything, very similar to the concept of an “explanation,” which the data subject has access to via Art. 15 of the GDPR (Pagallo, 2018; Selbst and Powles, 2017). Yet, it remains unclear what level of detail a “meaningful” explanation has to achieve. It is obvious that an explanation which meticulously specifies the technical processes of automated decision-making is unlikely to achieve the aims tied to the transparency requirement (Kuner et al., 2017). An explanation should therefore be evaluated from the perspective of the individual demanding it. Overall, an explanation “should permit an observer to determine the extent to which a particular input was determinative or influential on the output” (Doshi-Velez et al., 2017: 3). Following this definition, the information provided to users should either enable them to determine the main factors in a decision or understand how certain factors alter a decision (Doshi-Velez et al., 2017; cf. also Wachter et al., 2017; Zerilli et al., 2018).
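To make this criterion concrete, the following minimal sketch (our own illustration, not part of the GDPR or of the cited works) shows one simple way an observer could estimate how determinative each input was for a particular decision: replace each feature with a neutral baseline value and measure how much the model’s output changes. The toy credit-scoring model, feature names, and baseline values are assumptions introduced purely for illustration.

```python
# Illustrative sketch: per-feature influence on a single decision via ablation.
# The toy model and feature names below are hypothetical, not any specific system.

from typing import Callable, Dict


def feature_influence(
    predict: Callable[[Dict[str, float]], float],
    instance: Dict[str, float],
    baselines: Dict[str, float],
) -> Dict[str, float]:
    """Estimate each feature's influence by replacing it with a neutral
    baseline value and measuring how much the prediction changes."""
    original = predict(instance)
    influence = {}
    for name in instance:
        perturbed = dict(instance)
        perturbed[name] = baselines[name]
        influence[name] = original - predict(perturbed)
    return influence


if __name__ == "__main__":
    # Toy credit-scoring model (illustrative): a weighted sum of inputs.
    weights = {"income": 0.6, "debt": -0.3, "age": 0.1}

    def toy_model(x: Dict[str, float]) -> float:
        return sum(weights[k] * v for k, v in x.items())

    applicant = {"income": 0.8, "debt": 0.5, "age": 0.4}
    neutral = {"income": 0.0, "debt": 0.0, "age": 0.0}

    # Rank features by how strongly they influenced this applicant's score.
    for feature, delta in sorted(
        feature_influence(toy_model, applicant, neutral).items(),
        key=lambda kv: abs(kv[1]),
        reverse=True,
    ):
        print(f"{feature}: contribution {delta:+.2f} to the score")
```

An explanation of this kind would allow a data subject to see which factors mattered most for a decision, in line with the definition quoted above, without requiring a meticulous account of the underlying technical processes.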
Reasonable inferences
A right to explanation might provide an effective ex-post solution for retrospective transparency because it occurs after a system reaches a decision. However, such an explanation does not per se justify the reason why such a decision has been taken, nor does it protect the user from suffering the consequences linked to that decision. Wachter and Mittelstadt (2019) argue that the current legal framework does not adequately protect data subjects from high-risk inferential analytics (i.e. privacy-invasive or reputation-damaging inferences with low verifiability, such as predictive or opinion-based inferences). Therefore, they propose to consider the “right to a reasonable inference,” which follows the idea of prospective transparency: before a decision is made, the data subject should have the right to require from the data controller a justification of whether an inference is reasonable. Such a right would demand the disclosure of why certain data is needed to draw an inference, why these inferences are necessary to achieve a specific processing purpose or decision, and “whether the data and methods used to draw the inferences are accurate and statistically reliable” (Wachter and Mittelstadt, 2019: 5). The right to a reasonable inference and a right to an explanation taken together would provide for overall (ex-ante and ex-post) transparency, which in turn can be seen as in line with the aim of Art. 5(1)(a) of the GDPR.
Ethical underpinnings of the information- and explanation-based approach to transparency
The importance given to the information requirement, associated with transparency in the GDPR, reflects underlying assumptions about the value of informed consent for technology users. Informed consent is underpinned by an understanding of the technology user as an autonomous individual who makes their decisions independently on the basis of weighing information in light of their convictions and values. Autonomy is a foundational concept in ethics, with a rich history (Schneewind, 1998) and varied meanings (Christman, 1988, 2014; Dworkin, 1988), closely linked to a specific view of the nature of the self as independent, self-contained and internally driven, an “inner citadel” (Christman, 1988). This conception has strong roots in the enlightenment, but its adequacy has been fundamentally questioned for example in postmodern, feminist, and social constructionist thought (e.g. Benhabib, 1992; Foucault, 1979; Taylor, 1989). These positions argue that the self cannot be adequately understood without giving regard to the fundamental impact of historical, relational, and societal aspects (e.g. Marwick and Boyd, 2014).
While such critiques of the enlightenment concept of the self fundamentally question the assumption of autonomy as genuinely independent individual choice, even within a perspective that endorses the autonomous self, achieving a truly autonomy-respecting informed consent would require going beyond the minimalistic requirement of notice and consent that currently characterizes consent in the context of contemporary information technologies. Within data protection law, notice and consent refers to providing information about the envisaged data processing to an individual before the actual data processing takes place (cf. Art. 13 of the GDPR). The individual then has the option to consent to data processing on the basis of this information but must do so freely and state their choice unambiguously (cf. Art. 4(11) of the GDPR). In practice, notice and consent is generally realized through the provision, by the service provider, of statements containing relevant information, such as privacy policies, and the ticking of a box for consent by the service user. The limitations of this use of notice and consent have been widely discussed (Ben-Shahar and Schneider, 2014; Solove, 2013).
In keeping with established criteria of informed consent in ethics (Beauchamp and Childress, 2012; Faden and Beauchamp, 1986), facilitating genuine informed consent would go beyond the mere provision of information, followed by the expression of a choice. Instead, it would require the service provider to adapt such information to user characteristics and needs, to carefully avoid implicitly coercive consent contexts, and to elucidate in a user-friendly, specific, and concrete way what the system is doing. In addition, it requires taking care to support users in achieving understanding, facilitating their informed reflection, and allowing them to make decisions that reflect their wishes and values. These conditions are quite demanding even in contexts where consent is obtained through personal engagement with trained professionals and may not be met in the more restrictive impersonal settings of notice and consent, even if information on AI is provided transparently in line with legal requirements of the GDPR.
Provision of transparency also encounters further challenges due to the nature of the technologies. For the increasingly popular speech-based AI devices without primary visual interfaces, such as Alexa, even the limited requirements of the notice and consent paradigm are difficult to meet, insofar as the modality of interaction poses challenges regarding how to present relevant information to users (Hoofnagle, 2018). Even more generally, obscurity is a very common, and in some respects unavoidable, characteristic of AI explanations (Brauneis and Goodman, 2018; Burrell, 2016). However, beyond these concerns that impact directly on the general question of information provision in transparency, there are significant further challenges to the practical realization of transparency, which will be discussed in the following section.
The limits of transparency for AI
Transparency, stakeholders, and the implementation context
Intended as a technology-neutral piece of legislation, the GDPR’s strength lies in providing general legal requirements across technologies. However, by not recognizing specific technologies and their associated contexts, it neglects crucial elements for protecting users’ data-related rights. Transparency for AI systems raises particular challenges beyond the question of how to ensure that information is provided to the user and what information needs to be presented to users.
Table 1. Transparency understanding by stakeholder (adapted from Weller, 2017).
The table suggests that the transparency requirement should be tailored to the stakeholder more broadly, including developers, users, regulators, deployers, and society in general. The work of Weller (2017), however, does not distinguish different types of users within each stakeholder group, such as secondary or disabled users. For example, with AI systems like Amazon Echo (Crawford and Joler, 2018), bystanders who often have inaccurate, contextually influenced expectations of how information flows within those systems (Nissenbaum, 2011) can inadvertently be drawn into the operation of the AI (Shaban, 2018). In such cases, how can informed use on the basis of transparency be ensured for all users?
Making information open and transparent requires the individuals affected to be literate in assessing the risks of AI and automated decision-making systems and puts the onus on them to challenge automated decisions (Edwards and Veale, 2018). For users of AI-based assistive technologies with disabilities or special support needs, the technology is frequently employed in settings where multiple actors across professional and social roles, and with varied knowledge and capacity levels, interact (Kuner et al., 2017). Accordingly, such technologies require transparency specifications that take a multiplicity of defined users into account. Stakeholder groups differ in their ability to make use of information provided, and different types of information pose different barriers to understanding (as identified with regard to clinical populations by Tam et al., 2015; Redelmeier et al., 1993). Even more generally, research on disclosure and informed consent across practice domains has consistently shown that there are significant challenges to the effective use of information provided even for cognitively and clinically unimpaired individuals (Ben-Shahar and Schneider, 2014; Grady, 2015; Solove, 2013), limiting significantly the likely practical benefit of transparency. Therefore, attention to the specificity of the technology, the context, and the different types of users within each stakeholder group is essential for protecting users’ data protection-related rights.
The multiplicity of transparency effects: Lessons from HCI
Table 2. HCI and HRI research on the outcomes of transparency on an individual level.
In the context of recommender systems, for example, transparency of music recommendations increased participants’ satisfaction with the recommendation and their confidence (Sinha and Swearingen, 2002). By contrast, Cramer et al. (2008) looked at recommender systems in the cultural heritage domain but did not find a positive effect of transparency on trust in the system. However, they showed that transparency increased the acceptance of the recommendations. This is in line with earlier findings from Herlocker et al. (2000). Kim and Hinds (2006) investigated the influence of robot transparency on credit and blame attributions but found no significant effect on the attribution of blame and credit to either the robot or the participants.
Following up on these earlier studies, recent research has studied transparency and explanations in algorithms, particularly on social media. Rader et al. (2018), for example, studied the Facebook newsfeed algorithm to examine the effects of different types of explanations on users’ beliefs about, and evaluations of, the algorithmic curation of their feeds.
In online advertising, transparency refers to explanations of why specific personalized ads are shown to a person. Eslami et al. (2018) found that transparency needs to have the right level of specificity to enhance trust and satisfaction. Explanations that are too vague or too specific create feelings of unease and distrust. More algorithmic transparency can lead to algorithmic disillusionment, where algorithms appear less powerful and useful but more fallible and inaccurate than previously thought (see also Kizilcec, 2016). In that sense, enhanced transparency might not always be a blessing but sometimes a burden (Lim and Dey, 2009).
Table 2 presents an overview of HCI and HRI transparency research. The results are mixed, lacking a definite conclusion regarding the implications of transparency. While the requirement of transparency has strong ethical and rights-based support, the results from HCI and HRI research indicate that, from a pragmatic and user-centered perspective, there is no clear case for making intelligent systems more transparent. The same holds from an industry point of view (Eiband et al., 2018). Investments in transparency by AI developers could be costly, while the effects and benefits are unclear and there is a risk that transparency might backfire, either because it prioritizes seeing over understanding, creates false binaries, or results in harm (Ananny and Crawford, 2018; see also the following section). Traditional autonomy- and rights-driven demands for transparency need to contend with this.
From Table 2, it also becomes clear that most HCI and HRI studies investigating transparency outcomes were conducted in the US, which might affect their transferability to a European context. For example, it could be that making assistive robotics more transparent would lead to positive outcomes in European countries with a strong trust and transparency culture (e.g. in Northern Europe), but might not have as much of an effect or be even detrimental in societies with less institutional trust and transparency. Furthermore, as Ausloos et al. (2018) note, such research has been mostly unconnected to legal considerations. As discussed, the GDPR comes with new transparency requirements that might clash with established transparency practices and lead to unintended consequences. These concerns are underexplored in HCI and HRI, and the lack of clarity about the implementation of the GDPR transparency requirements calls for more interdisciplinary collaboration between HCI researchers and legal scholars (Ausloos et al., 2018).
The performance of transparency: Organizational and societal aspects
The implications of transparency should be considered not just with regard to human interaction with specific technologies and their contexts of use, but also from a broader theoretical and normative perspective (Miller, 2019) that considers how transparency practices are embedded into wider organizational and cultural contexts. As work in critical algorithm studies has pointed out, transparency practices do not take place in a social vacuum but play particular roles in their specific cultural and organizational settings (Beer, 2017; Kemper and Kolkman, 2018). It has been argued that algorithms should not merely be seen as “objects to be known through observations” (Ziewitz, 2017: 3) but as “only [to] be evaluated in their functioning as components of extended computational assemblages” (Lowrie, 2017: 1). In that sense, algorithms are intimately linked to practices of sense-making, highlighting the trickiness of the “nuts and bolts of how to work with them” (Thomas et al., 2018: 2). As Seaver (2017: 1) argues, algorithms can be understood “as culture,” as “heterogenous and diffuse sociotechnical systems, rather than rigidly constrained and procedural formulas.”
Albu and Flyverbom (2019), in summarizing the literature on organizational transparency, differentiate two broad approaches: transparency as verifiability and transparency as performativity. The first approach understands transparency as the disclosure of information. Transparency as outlined in the section “Transparency in data protection law,” with regard to its understanding in the GDPR and in the tradition of informed consent, aligns with this approach. Following this understanding, organizations and institutions are transparent when they release information about their internal practices, for example, their data collection and data analysis. In the context of AI, an example would be a shopping mall that announces at the entrance and on its website whether it uses facial recognition technology to track shoppers, rather than keeping this information hidden (Rieger, 2018). The second approach, however, looks at the tensions, struggles, and discourses inherent in transparency projects, and at unintended consequences and downsides of transparency. Following this approach, transparency should be understood more holistically, including the socio-material and ritualistic practices of organizations when they “perform” transparency. The performativity perspective understands transparency practices as social and organizational phenomena whose meaning goes substantially beyond the information conveyed.

Albu and Flyverbom (2019) illustrate the dual nature of transparency with regard to the Snowden disclosures. They highlight that while the disclosed information on the secret US surveillance programs was the focus of attention in public reception, the disclosures took place embedded in organizational contexts, involved curation by other professionals, and were performed with certain strategic intentions, making it more appropriate to consider them as “complex and dynamic communication processes rather than simple and straightforward transmissions of information” (p. 283). Similarly, technology companies such as Facebook or Google employ strong narratives of openness, connectedness, and sharing on the user side while being highly secretive themselves (Van Dijck, 2013). For instance, a review of Google’s privacy policy shows an abundance of highly specific and detailed information on the types of data collected, partly presented in a very user-friendly manner, alongside extremely vague general (and practically meaningless) statements about the purpose of data usage, presented generically in terms of improvement of user experience. In that case, transparency as disclosure is evident in the detailed insight allowed into some elements of their data collection practices, while at the same time transparency also appears as occluding performativity, where selective disclosure around data use seems designed to occlude its potential scope and problematic nature (Zuboff, 2019). Relevant research also reflects this distinction between verifiability and performativity: studies applying the transparency-as-verifiability approach tend to find positive outcomes for organizations, for example, positive effects on organizational trust, while some studies within the transparency-as-performativity approach reveal how transparency can also undermine trust (Albu and Flyverbom, 2019).
In a similar vein, Ananny and Crawford (2018) state that transparency can intentionally occlude, for example, when so much information is strategically disclosed that it is impractical or impossible for a layperson to sift through (the needle-in-the-haystack problem). An example is the option that companies such as Google and Facebook provide to download the personal information collected about an individual user. While this potentially enables users to see what is collected about them, the data can be too voluminous and not formatted in a way that allows them to access and understand it (Curran, 2018). While the GDPR seemingly prevents such practices, as the explanations in Recital 58 imply, the formulations still leave ample room for interpretation. The needle-in-the-haystack issue could become an even bigger problem with cloud robotics and Internet of Things devices, where the data collected about a user and their interactions are more complex and harder to convey. Thus, it is crucial not only to consider the disclosed information but also the effort, skills, and requirements needed to decode and interpret the information (Kemper and Kolkman, 2018), or in other words the information and privacy literacy demands on the user side (Bartsch and Dienlin, 2016), including the way in which disclosed information is embedded in other practices that may support or hinder its use.
Finally, transparency may be practically inert due to the embeddedness of the technology in a wider network of devices. For large technology companies, such as Google, Apple, or Amazon, which offer increasingly interconnected suites of complex AI services across life spheres, refusing consent to particular elements may not be an option. Even if users disagree with particular elements, once a technology provider has been chosen for the majority of their devices, these users are locked in, because refusing consent to the operation of one part of the system may significantly impair its overall functionality. Moreover, high switching costs, a lack of functional interoperable alternatives, and the fact that AI systems are increasingly becoming part of our daily infrastructure (West, 2019) mean that users are in a structurally disadvantaged position, with little agency to make demands (Draper and Turow, 2019). Along these lines and based on approaches from glitch studies, Kemper and Kolkman (2018: 3) argue that “transparency of algorithms can only be attained by virtue of an interested critical audience.”
Transparency as a relational concept
We have approached the topic of transparency in AI from a dialectical perspective. Our goal was to provide an integrated interdisciplinary discussion, where legal considerations from the GDPR are contrasted with considerations informed by the social sciences and related to their respective ethical underpinnings. In this final section, we intend to bring together the insights from the information- and explanation-based perspective outlined in the section “Transparency in data protection law” with the critical social science perspective outlined in the section “The limits of transparency for AI” by outlining elements of a relational approach to transparency.
We started by conducting an in-depth analysis of transparency in data protection law, particularly within the GDPR. The discussion identified legal requirements of transparency as well as the ethical underpinnings of these transparency requirements in the GDPR, showing critical relations between transparency, informed consent, and a specific underlying understanding of individual autonomy and meaningful human agency. According to this understanding, developers of the systems should inform the users about the presence and underlying logic of AI-based decisions to make informed consent possible, with the GDPR specifying what information must be provided, such as meaningful information about the logic involved and the envisaged consequences of the processing (Arts. 13–15 of the GDPR).
We then highlighted the insensitivity of the GDPR to the relevance of technological and social contexts in which AI is embedded. We proposed a tailored and multi-stakeholder approach to transparency for AI that is supported by HCI and HRI research. The analysis of empirical studies on user perspectives showed inconclusive evidence on the overall effects of transparency. We then discussed the embeddedness of AI and associated transparency practices in wider organizational and cultural contexts. Following Albu and Flyverbom (2019), we explored performativity as a potentially fruitful way of conceptualizing the close link between transparency effects and contextual factors. We think that this approach does justice to the complexities and tensions that may arise when transparency is enacted in practice (Ananny and Crawford, 2018).
In the information-based approach, the user is conceptualized as an independent actor, who makes autonomous decisions on the basis of information made available to them through transparency. By contrast, the performativity account sees contextual social factors as considerably determining the meaning of transparency practices. We propose to bring insights from both perspectives together in a relational approach to transparency that draws on the concept of trustworthiness, where transparency is understood with regard to its relational function, as a signal of trustworthiness and willingness to be accountable to those affected by one’s actions or products.
Trustworthiness and transparency are frequently considered together (Mittelstadt et al., 2016). In the organizational literature, trustworthiness has been closely linked to transparency in recent years (Grimmelikhuijsen and Meijer, 2012; Schnackenberg and Tomlinson, 2016). However, it has been questioned how closely transparency is linked to trust. As our review of HCI research indicates, trust is not a simple consequence of transparency. It has been argued by Heald (2006) that transparency is only valuable instrumentally, as a means to achieve a potential multitude of other more fundamental values, including trust, and that the value of transparency depends on the achievement of these more fundamental values. O’Neill (2002, 2003, 2009) argues that the value of information provision should not be reduced to the value of the informational content itself but that it lies in the relational function of the communicative action of the information provision; transparent information provision can reassure the other party that they are not being deceived or coerced. While the availability of information is important for trust, the relational context provides the wider frame within which the information itself may be valued in different ways.
In the philosophical debate, trust has been analyzed relationally as an attitude of optimism towards others, assuming their goodwill, when we rely on them in the face of uncertainty and risk of exploitation (Baier, 1986; Jones, 1996; Potter, 2002). Trust is inherently cooperative and contextual, in that many things that we value can only be realized through depending on others and only under particular conditions. However, responsible trust requires reflection on others’ trustworthiness, assessing whether there is sufficient evidence to assume that these agents are indeed worthy of being trusted. Truthfulness, lack of exploitation of vulnerabilities of the dependent party, the constructive contribution to expected benefits, and the willingness by the trusted party to be held accountable are the most salient criteria for trustworthiness that can be derived from that literature. Depending on the nature of engagement and communication between the trusting and trusted parties, the specific vulnerabilities and potential harms and benefits, what exactly it takes to be deemed trustworthy may look quite different between cases.
Potter (2002) suggests that understanding trustworthiness requires the use of a virtue ethical framework by which the reliability of dispositions of those we are relying on can be judged. Accordingly, trustworthiness can be established based on stable and effective patterns of behavior that indicate that the person or organization that is being trusted deserves this trust. The extensive consideration of wider patterns of behavior, beyond the momentary provision of transparent information on specific aspects of services, is essential for such an assessment. This can take the shape of an investigation of historical patterns in the actions taken by organizations, as exemplified in Zuboff (2019). As Zuboff argues, pervasive patterns of lies, manipulation, breaches of commitments and the hidden exploitation of users by big technology companies belie their official public statements of good will and occasional gestures of transparency.
The importance of truthfulness, supportiveness, stability of disposition, and accountability in ascribing trustworthiness to a person, entity, or technical system is also evident in the recent statement of the European Commission’s High-Level Expert Group on AI (HLEG AI, 2019), whose guidelines name transparency and accountability among the key requirements for trustworthy AI.
Evidence of trustworthiness of complex interactive information systems includes, for instance, technical safety, operational reliability, and the coherence of the system’s behavior with its stated purpose (Hancock et al., 2011; Salem et al., 2015). Efforts by organizations to achieve transparency about a product or service indicate to users that the organization is not afraid to provide them with detailed information. The relational message that this sends to the subject is one of willingness to be accountable, a core indicator of trustworthiness. The apparent willingness of providers to be genuinely transparent towards their users serves as a basis for perceptions of trustworthiness (Kizilcec, 2016). While achieving a full understanding of information technologies is typically difficult due to their complexity (Hayes and Shah, 2017), transparent explanations of, for example, reasons for robot behavior, can contribute to an increased perception of their trustworthiness (Korpan et al., 2017; Ribeiro et al., 2016). In contrast, where opacity is present, the risk of remaining uninformed and potentially being deceived or exploited remains salient for the user, and continuing opacity, especially if clarifications have been requested, might indicate a lack of concern for the establishment of trustworthiness vis-à-vis the subject. As Burrell (2016) states, opacity in information systems can be either intentional, by keeping specific information secret, or unintentional, for instance, due to a lack of technical literacy; how such opacity is perceived may be mediated by attitudes of trust. One complication with regard to complex information systems is that some degree of opacity can be systemic and resistant to attempts at transparency, especially when the use of machine learning algorithms makes deductive explanation impossible (Burrell, 2016; Van Opdorp et al., 1991). This means that users need to be realistic in their expectations of transparency and careful in judgments with regard to what constitutes non-trustworthy, culpable opacity.
However, in addition to what service users and service providers can achieve themselves, users also need to be supported by systems of accountability (O’Neill, 2014). The willingness of organizations to be accountable for their services is often seen as a relational underpinning of the value of transparency; accountability was also included as one core criterion in the HLEG AI (2019) guidelines. Yet, meaningful accountability requires significantly more than mere transparency. Accountability extends to managerial accountability within organizations, but also requires the existence of effective external systems of accountability. As O’Neill (2014) highlights, achieving accountability might rely “on democratic or corporate forms of governance, or on legal, financial or professional forms of accountability” (p. 177). Reliance on democratic and legal forms of accountability which operate from outside of organizations themselves is particularly relevant to achieve effective accountability of organizations towards their service users, given their comparative lack of power. The state’s effectiveness in ensuring its citizens’ rights through means of regulation and legislation, such as the GDPR, and associated enforcement activities, grounds not just the state’s own trustworthiness but will also determine whether citizens can trust transparency expressions of service providers. In order for the GDPR transparency requirement to fulfill this trustworthiness function, greater clarity will need to be developed regarding what constitutes appropriate implementations of transparency. In the absence of effective and clear regulation and enforcement, the onus is on the service user to engage critically with transparency expressions and ascertain the trustworthiness of organizations, opening up greater risks of misunderstanding and performative manipulation.
Conclusion
To conclude, more multidisciplinary research is needed to translate the legal transparency requirements into technical systems. Studies in the area of algorithm audits have provided essential insights into the technical workings of AI-powered, black-boxed systems, showing problematic implications, for example, in terms of bias (Chen et al., 2015; Sandvig et al., 2014; Venkatadri et al., 2018). Another approach aiming towards better transparency of machine-learning algorithms is the What-If Tool, an open-source TensorBoard web application that enables users to analyze machine learning models: it displays inference results and lets users explore counterfactual explanations without the need for coding (Wachter et al., 2018). Such attempts show that multidisciplinary collaborations between engineers, social scientists, lawyers, philosophers, and ethicists could embed the transparency requirement into the very design of concrete technologies and bring about the materialization of transparency-by-design.
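As an illustration of the kind of counterfactual explanation such tools support, the following minimal sketch (our own, and not the What-If Tool’s actual interface or API) searches for a small change to an applicant’s inputs that would flip a toy model’s decision. The model, features, and brute-force search procedure are assumptions for illustration; practical implementations typically rely on more sophisticated optimization.

```python
# Illustrative sketch of a counterfactual explanation in the sense of
# Wachter et al. (2018): find a small input change that alters the decision.
# The toy classifier and feature names below are hypothetical.

import itertools
from typing import Dict, Optional


def toy_classifier(x: Dict[str, float]) -> int:
    """Toy loan-approval rule: approve (1) if the weighted score exceeds 0.2."""
    score = 0.6 * x["income"] - 0.4 * x["debt"]
    return 1 if score > 0.2 else 0


def find_counterfactual(
    instance: Dict[str, float],
    desired: int,
    step: float = 0.1,
    max_steps: int = 10,
) -> Optional[Dict[str, float]]:
    """Grid-search for the smallest per-feature change (by total absolute
    change) that yields the desired outcome."""
    best, best_cost = None, float("inf")
    names = list(instance)
    deltas = [i * step for i in range(-max_steps, max_steps + 1)]
    for change in itertools.product(deltas, repeat=len(names)):
        candidate = {n: instance[n] + d for n, d in zip(names, change)}
        if toy_classifier(candidate) == desired:
            cost = sum(abs(d) for d in change)
            if cost < best_cost:
                best, best_cost = candidate, cost
    return best


if __name__ == "__main__":
    applicant = {"income": 0.3, "debt": 0.5}
    print("Original decision:", toy_classifier(applicant))
    print("Counterfactual input that would be approved:",
          find_counterfactual(applicant, desired=1))
```

A statement of the form “had your income been 0.4 higher, the application would have been approved” is the kind of user-facing explanation that such counterfactual reasoning can support.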
Our reflections point to a need for more critical research on AI, guided by the relational understanding of transparency. Case studies and ethnographic analyses could illuminate the lived realities of transparency, for example, how companies use transparency as a selling point and how users (fail to) engage with transparency for self-reflection, self-enhancement, or as a means of communication. Particular attention should be paid to factors that make transparency meaningful and trustworthy in the users’ eyes.
Policymakers should assess the usefulness and limitations of the current transparency regime. They should be aware of the performative aspects as well as the dilemmas and constraints consumers of AI face (e.g. Draper and Turow, 2019). In that regard, more meeting spaces could be created, where policymakers are exposed to the voices of user-centered and critical researchers on transparency understandings and demands.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Part of this project was funded by the LEaDing Fellows Marie Curie COFUND fellowship, a project that has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No. 707404. In addition, the Research Council of Norway (Grant Agreement Nos. 247725 and 275347) has generously supported the third author’s research.
