Abstract
Standards are put forward as important means to turn the ideals of ethical and responsible artificial intelligence into practice. One principle targeted for standardization is transparency. This article attends to the tension between standardization and transparency by combining a theoretical exploration of these concepts with an empirical analysis of standardizations of artificial intelligence transparency. Conceptually, standards are underpinned by goals of stability and solidification, while transparency is considered a flexible, see-through quality. In addition, artificial intelligence technologies are depicted as ‘black boxed’, complex and in flux. Transparency as a solution for ethical artificial intelligence has, however, been problematized. In the empirical sample of standardizations, transparency is largely presented as a static, measurable, and straightforward information transfer, or as a window to artificial intelligence use. The standards are furthermore described as pioneering and able to shape technological futures, while their similarities suggest that artificial intelligence translucencies are already stabilizing into similar arrangements. To rely heavily upon standardization to govern artificial intelligence transparency still risks allocating rule-making to non-democratic processes, and while intended to bring clarity, the standardizations could also create new distributions of uncertainty and accountability. This article stresses the complexity of governing sociotechnical artificial intelligence principles by standardization. Overall, there is a risk that the governance of artificial intelligence is left to be shaped too much by technological solutionism, allowing the standardization of social values (or even human rights) to be carried out in the same manner as that of any other technical product or procedure.
Introduction
Standardization plays a vital role in our current societies. The use of a laptop is dependent on a myriad of different actors’ adherence to technical standards, based on ideals of interoperability, uniformity and the prevalence of universal goals. Even a person's survival could depend on the standardization of a disease classification (Bowker and Star, 1999) or a bystander's ability to carry out the standard for cardiopulmonary resuscitation (Lindh, 2015). Presently, the development of standards pertaining to artificial intelligence (AI) is rapidly evolving (Ebers, 2022). This article is thus motivated by the recent surge of calls for standards to turn principles of ethical, responsible or human-centred AI into practice.
The hope put into standards as a means to govern AI is in great need of more critical consideration. In the EU proposal for AI regulation (AIA), standards are set out to play an important role (European Commission, 2021). The expectation is that harmonized standards will facilitate ethical and responsible development and use of AI. However, that expectation is argued to be built on somewhat shaky ground and to delegate too much rule-making to private standardizing bodies (Veale and Borgesius, 2021). A position paper by the European Association for the Coordination of Consumer Representation in Standardization (ANEC) furthermore argues that the AIA places too great regulatory power in standards: ‘As currently written, the use of standardisation as proposed in the AI Act is not appropriate for upholding fundamental rights. Harmonised standards should not be used to define or apply fundamental rights, legal or ethical principles’ (ANEC, 2021: 2).
This article argues that the ideas and processes making up AI transparency standards need to be understood as formed within sociotechnical assemblages (Orlikowski, 2010; Seaver, 2018), in which the social, technical and material are entangled and co-constituted. In addition, standards can represent ideas of desirable AI futures, as ‘visions of connected social and technological orders’ with the ‘possibility of shaping terrains of choices, and thereby of actions’ (Sismondo, 2020: 505).
By combining a theoretical exploration of AI transparency and standards with an empirical investigation of a set of standardizations of AI transparency, this article aims to analyze efforts to stabilize translucencies of AI. This conceptualization draws from Lee's (2021) methodological rules for analyzing the politics of algorithms, which imply attending to ‘multiple and situated translucencies’ (Lee, 2021: 67) of algorithms, instead of treating algorithmic transparency as a binary state of either full transparency or opaqueness. The main point of this emphasis on translucencies, in this article, is to trouble the idea of a full, see-through, enlightening transparency.
The article is structured as follows. The first section is a theoretical exploration of the concepts of AI transparency and standards/standardization, while the second section is an empirical analysis of a sample of standardized forms of AI transparency. This dual foundation forms the basis for the analytical findings subsequently presented and discussed. In the concluding section, I discuss the implications of the tension between the concepts and of standardized AI transparency. The exploration starts by turning to the notions of AI transparency.
AI and the phenomenon called transparency
For a conceptual understanding, the idea of transparency and how it applies to AI needs unpacking. The logic behind transparency, as a concept and metaphor, is that what can be seen can be known, and that observation enables us to reach new insights (Ananny and Crawford, 2018). The EU Ethics Guidelines for Trustworthy AI defines transparency as an umbrella term encompassing communication, traceability and explainability (High-Level Expert Group on Artificial Intelligence, 2019). In ethical guidelines for AI, transparency is a highly prevalent principle (Jobin et al., 2019). It is a multifaceted, broad, and often vague concept (e.g., Larsson and Heintz, 2020; de Vries, 2021). There are, however, high hopes associated with AI transparency. It is expected to enable evaluation of accuracy and fairness and to facilitate accountability through trust and risk assessments. Furthermore, it is supposed to mend causes of opacity, including intentional secrecy, technical illiteracy, and epistemic or explanatory constraints posed by the complexity and scale of AI operations (Burrell, 2016). As such, AI transparency represents an important key to open up the so-called ‘black boxes’ within the ‘black box’ society (Pasquale, 2015).
While increased transparency will not necessarily, on its own, solve issues of bias or errors, it can also be motivated for dignitary or justificatory reasons. These include revealing structural problems of discriminatory biases and decisions based on ‘unacceptable’ grounds, as well as enabling oversight and providing a possibility to challenge automatic decision-making (Kaminski, 2020: 122–123; de Vries, 2021: 126). To a high degree, transparency is presented in a deterministic fashion, as a goal with positive outcomes. This conviction has been problematized by studies of transparency as a sociopolitical ideal and instrument for empowerment and trust (e.g., Strathern, 2000; Tsoukas, 1997; Meijer, 2015). When utilized for political or economic reasons to steer the gaze, or as a means for commodification or surveillance, transparency can have negative side effects (e.g., Mulinari and Ozieranski, 2022).
Transparency is furthermore frequently treated as a binary state of either full transparency or opacity, where agencies can be pinpointed and power relations are stable (Lee, 2021: 67). Access to information is commonly conflated with knowledge, implying that if more information is available, it will by necessity result in increased insight (Tsoukas, 1997). A presumption is that meaningful information is uncovered, but what constitutes information has been subject to much scholarly consideration. For example, Buckland (1991) argues that information could be understood as at least threefold: as the process of getting informed, as the knowledge that is communicated, or as the intangible or tangible entity that is informative. With regard to calls for AI transparency, little conceptual clarity is provided regarding the nature or meaning of information.
As laid out by Seaver (2019), to fixate on access to information and expertise risks causing a deficient understanding of how algorithms actually work ‘in the wild’. In relation to digital technologies, transparency offers no clear revelation of a full truth or of openness, argues Flyverbom (2019). Rather, it is a directed sight, conveying as well as concealing information, constituting an active management of visibilities. This also highlights the conundrum of whom transparency serves and for whom AI is to be made transparent (Kemper and Kolkman, 2019). In response, there is an emerging multidisciplinary field of critical transparency studies. It pays attention to matters such as the politics and aesthetics of transparency (Alloa, 2022), and to how transparency is a site of conflict with regard to truth and legitimacy (Koivisto, 2022). Moreover, there are efforts to understand and ‘de-trouble’ the issue of AI transparency in different contexts through relational and situated understandings (Winter and Carusi, 2022; Felzmann et al., 2019), and to find ways in which transparency can be made socially meaningful (Bates et al., 2024).
Moreover, algorithmic transparency is predominantly treated as a static state, yet it is performative and processual (Cellard, 2022), as well as entangled with the sociotechnical systems, infrastructures, and contexts that AI systems are part of, forming ‘a dynamic sociomaterial configuration performed in practice’ (Orlikowski, 2010: 136). From a relational viewpoint, an increased understanding is needed of how social and natural orders are made by bringing together values, computations, datasets or analytical methodologies through the use of algorithms (Lee et al., 2019: 3). This complexity also points to the challenges of reaching full revelations of the impact of AI technologies in meaningful ways, due to the assemblages they are part of and how the technologies work: ‘instead of formality, rigidity, and consistency, we find flux, revisability, and negotiation’ (Seaver, 2019).
In sum, there are high expectations of what can be accomplished by AI transparency, but it is difficult to distinguish by which actions and practices the ideal is to be realized. The hopes risk being inflated, as meaningful transparency is contingent on power dynamics and relational actions within multiple sociotechnical constellations. Thus, the question is: how to govern such a thing as AI transparency?
Standards as governance
To approach the matter of governing AI transparency by standardization, we must first consider what ideas standards, and the act of standardizing, are built upon. Standardization is a way to regulate social life (Timmermans and Epstein, 2010). It is a social process and an attempt to reach stable uniformity across space and time (Bowker and Star, 1999). While the term standard has multiple meanings, it refers to something of exemplary measure, and to the rules and norms for how that ideal can be reached (Busch, 2011: 17–25). The expected end result is safe, reliable, good-quality services and technologies, achieved by following agreed-upon requirements for products or processes (Fried and Glaa, 2020). Yet perhaps standards are even more immersive, shaping ‘not only the physical world around us but our social lives and even our very selves, [as] recipes by which we create realities’ (Busch, 2011: 2).
Standardization is moreover a way to steer behaviour. In that regard, standards constitute a form of governance commonly classified as soft law. Through its adaptability across sectors and borders, soft law is claimed to be the solution to the pacing problem of new technologies developing faster than regulation can keep up with (e.g., Marchant, 2021; Jobin et al., 2019). However, in comparison to hard lawmaking, formal standards are typically made by private organizations, without democratic representation, and only a few come with legal requirements for compliance (Bowker and Star, 1996). Furthermore, many standard-developing organizations only grant access to standards upon purchase. Standardization is nevertheless a common regulating practice to govern technology and implement harmonization (Jobin et al., 2019: 396), and is argued to be an important piece of the AI policy ecosystem (e.g., Shank, 2021). Standards can moreover be regarded as bridges to ‘mundane governance’, in which policy comes alive in the everyday practices of people, organizations, and things (Woolgar and Neyland, 2013).
Measuring entities against the same criteria does, however, require likeness amongst the entities evaluated (Busch, 2011; Bowker and Star, 1996). Yet, a standard in use could potentially encompass a diversity of processes and end results, as local adaptations and tinkering occur (Fried and Glaa, 2020; Lindh, 2015). While standardization strives for clarification, specification and universality, Timmermans and Berg (1997) argue that what it often accomplishes is a local universality. Standards still have the potential to facilitate cooperation between different actors and contexts, by working as boundary objects that are stable enough to serve as uniting entities, yet adaptable enough to be applicable in diverse settings (Star and Griesemer, 1989).
Additionally, standards are not neutral, simple, straightforward instruments, as they:

- Are nested inside one another.
- Are distributed unevenly across the sociocultural landscape.
- Are relative to communities of practice; that is, one person's well-fitting standard may be another's impossible nightmare.
- Are increasingly linked to and integrated with one another across many organizations, nations, and technical systems.
- Codify, embody, or prescribe ethics and values, often with great consequences for individuals. (Lampland and Star, 2009: 5)
Moreover, standards are commonly specified as either de jure standards, which are requirements assembled in formal standard documents endorsed by standard organizations, or de facto standards, which are not formally recognized by a standardizing body but adopted as standard practice (Backhouse et al., 2006). Values and principles, such as privacy, have also long been engineered into systems with the help of standards (Rommetveit and van Dijk, 2022). However, norms, common practices and ideals can also form standardization, constituting a standard way of action or conventions of technologies and processes (Star, 1990). Examples closer to the latter are the public registers of AI use or algorithmic systems, which are establishing what information is to be provided and how. Even as some see great potential in these public registers as AI transparency solutions (Floridi, 2020), this ‘governance-by-database’ is argued by Jansen and Cath (2021) to be a partly overhyped way to ensure accountability, since the registers, by their opt-in and ex-post principles, bypass discussions about what is missing, who is steering the conversation, and whether the AI system should be implemented in the first place.
To conclude this theoretical exploration, I argue that standards are shaped by, perform, and generate relations of agency. Regardless of what shape standards come in, they are connected to ideas of legitimacy and stability: a solid, agreed-upon foundation, generalizable and indifferent to the particularities of space, time and context – an idea that is contested by studies of standards in practice. Yet, what can current attempts to standardize AI transparency tell us about how the principle is standardized, what ideas of AI transparency and standardization they are based upon, and what practices they are believed to enforce?
Standardizing AI transparency
To better understand ideas and practices of standardizing AI transparency, a qualitative empirical analysis was conducted on a sample of self-proclaimed standardized supports on how to achieve transparency of AI, algorithmic practices, or autonomous systems. The analysis is a way to make visible what is often submerged. As argued by Lampland and Star (2009: 10), standards are frequently confronted ahistorically, as fully developed forms in use, which results in overlooking sociocultural and ethical aspects of standard-making. Drawing from Timmermans and Epstein (2010), this analysis is a study of the creation of standards in relation to the phenomenon to be standardized: AI transparency. It means analyzing told practices of standard-making and standardization, as well as the ideas about practices described in the material, of how transparency is expected to be performed through the standards. However, this analysis is not a study of how the standards function when implemented in different settings.
Analytically, this study draws from practice-oriented document analysis. It includes paying attention not only to the meaning of the central document, in this case the standard, but also to other texts about it, and to how documents relate to, and inform about, the tools, sites, work, movements and issues of which they are part (Asdal and Reinertsen, 2022). This method acknowledges that documents are performative, made for a reason, and with the aim to support or impact actions. A document is never neutral, but always relational: ‘What is mentioned in the document, and what is not? Who is the recipient, and who is allowed to handle the document? […] Individuals, groups of actors, and issue elements may be defined in and out of documents and the issues they concern and shape. Documents are sources of power; they provide opportunities and spaces of action’ (Asdal and Reinertsen, 2022: 8–9).
The empirical sample includes both official standard documents by standards-developing organizations, and standardizations in the form of templates and registers. The motivation for including the latter is their stated aim of being standardized ways to perform AI/algorithmic transparency, and an intention to refrain from a narrow view of standardization (e.g., Star, 1990). Moreover, the examples of approved formal AI transparency standards are few, which impacts the inclusion and sample size. The sample includes four cases: The Institute of Electrical and Electronics Engineers (IEEE) 7001–2021 Standard for Transparency of Autonomous Systems, The United Kingdom Algorithmic Transparency Standard, and two public registers: the City of Amsterdam Algorithm Register (English version) and the City of Helsinki AI Register (English version). In line with the practice-oriented methodology, the material analyzed includes the standardizations as they are expressed in official documents, templates and on webpages, as well as articles, reports, and blog posts regarding the standards and how they came into being.
The material was thematically coded and analyzed, through which a set of recurring themes was identified: the enactment of standardization, definition of transparency, narration of technology, shaping of futures, ascribing of actors and agencies, description of risks and impacts, and presence/absence of alternatives. The results from the empirical analysis are presented in line with these themes. As a starting point, I focus on how the standards are created, in which contexts, and by what actors.
Standardization enacted
How the standards of AI transparency come into being can inform us about the motivations behind the standardization. The IEEE 7001–2021 Standard for Transparency of Autonomous Systems was developed by the IEEE Standards Association, specifically by the IEEE Vehicular Technology Society and the IEEE Robotics and Automation Society, as part of the P7000 series of standards, which specifically target ‘ethics in action’ for autonomous and intelligent systems (IEEE, 2022a). The project started in 2016, and in December 2021 the standard was approved by the board of the standards association (IEEE, 2022b, 2022c). It is described by some of its authors in Frontiers in Robotics and AI as ‘the first attempt to write a standard on transparency’ (Winfield et al., 2021: 8). They further state that the standard ‘will, for the first time, allow us to be rigorously transparent about transparency’ (Winfield et al., 2021: 9). The standard is copyrighted to IEEE and can only be accessed by purchase. It is described as a process standard, with the purpose of setting out measurable, testable levels of transparency for autonomous systems.
The National AI Strategy of the United Kingdom describes the integration of standards into the model for AI governance as crucial to ensure that ‘principles of trustworthy AI are translated into robust technical specifications and processes that are globally recognised and interoperable’ (Office for Artificial Intelligence, 2021: 56). Consequently, an aim was set up to develop a cross-government standard for algorithmic transparency, to be used by government departments and public sector bodies, in line with the mission of the National Data Strategy to explore ‘appropriate and effective’ ways to provide transparency of algorithm-assisted decision-making in the public sector (Department for Digital, Culture, Media & Sport, 2020). This led to the creation of the Algorithmic Transparency Standard, hereafter referred to as the UK algorithmic transparency standard. The first version of the standard was published in November 2021, developed by the Central Digital and Data Office of the UK Cabinet Office (Central Digital and Data Office, 2022). It is described as consisting of two parts: the Algorithmic transparency data standard, and its template and guidance (Central Digital and Data Office, 2022). As the template is filled out and submitted, the resulting documents are to be made publicly accessible in an Algorithmic Transparency Standard Collection. The press release for the standard's launch states that it is to make the UK one of the first countries ‘to develop a national algorithmic transparency standard, strengthening the UK's position as a world leader in AI governance’ (Cabinet Office et al., 2021).
The third and fourth cases are two public registers, the City of Amsterdam Algorithm Register (City of Amsterdam, 2022a) and the City of Helsinki AI Register (City of Helsinki, 2022a). They were instituted by the municipalities, but implemented by the Finnish company Saidot, using its ‘AI transparency platform’. While not explicitly labelled as standards, they are promoted as: ‘… a standardised, searchable and archivable way to document the decisions and assumptions that were made in the process of developing, implementing, managing and ultimately dismantling an algorithm’ (Haataja et al., 2020: 3).
Sharing information and looking through windows
How the concept of transparency is treated by the standardizations somewhat differs. Frequently, transparency is referred to in terms of openness about the use of technology and related practices. When defined more explicitly, it is as transfers of information (that is truthful) (IEEE, 2022b), as acts of sharing information that is complete, open, understandable, easily accessible, and free (Central Digital and Data Office, 2022), or as a window to AI use (City of Helsinki, 2022b). The IEEE standard stresses the aim to make transparency workable, testable and measurable (IEEE, 2022b). Still, its creators are open about the difficulties initially embedded in this mission: ‘… how to express transparency as something measurable and testable. At first this might seem impossible given that transparency is not a singular physical property of systems, like energy consumption. However, when one considers that the degree to which an end user can understand how a system operates will depend a great deal on the way that user documentation is presented and accessed; or the extent to which an accident investigator can discover the factors that led up to an accident can vary from impossible (to discover) to a very detailed timeline of events, it becomes clear that transparency can be expressed as a set of testable thresholds’ (Winfield et al., 2021).
The perceived value of transparency is dominant in the standardizations. The UK National Data Strategy emphasizes the need for an ethical framework to build public trust, aiming for transparency to become a ‘UK value’ to be adopted internationally (Department for Digital, Culture, Media & Sport, 2020). Moreover, in the whitepaper of the registers, transparency is described as necessary to understand systems and to be able to contest decisions, stating that every citizen should have access to information, and concluding that it is ‘no wonder transparency is referred to as the most cited principle for trustworthy AI’ (Haataja et al., 2020: 3). It is, however, noted in the IEEE standard that transparency cannot single-handedly ensure ethical AI: ‘Transparency is necessary but not sufficient for reducing the risk of psychological harm or distress. Explainability is a crucial additional factor for building trust and assurance between an autonomous system and its end-users or members of the public. It is also important to note that providing an explanation does not necessarily make a system's actions completely transparent’ (IEEE, 2022b: 16).
Table 1. Cases of standardized support for AI/algorithmic transparency. (AI: artificial intelligence; IEEE: Institute of Electrical and Electronics Engineers.)
Narrating technologies and shaping futures
The standardizations narrate technology by creating a narrative of what AI and algorithmic technologies are, why they need to be transparent, and what needs to be known about them. Although the cases are included in this analysis as pertaining to AI transparency, they somewhat differ in technological framing. The IEEE standard is specified towards autonomous systems, while the UK transparency standard includes a broad definition of algorithmic tools, also prioritizing technologies that engage directly with the public or tools that have a legal, economic or similar impact on individuals, replacing human decision-making (Central Digital and Data Office, 2021) (Table 1). The municipal registers represent two different framings of technology. Helsinki's is labelled as an AI register, while Amsterdam's is called an algorithm register, thereby possibly including a broader set of technologies. The AI definition used by the Helsinki register emphasizes the adaptable and autonomous capabilities of the technologies, while the Amsterdam register defines algorithms as recipes (Table 1).
The IEEE standard also adopts a futuristic approach by trying to predict, and shape, technological development. This appears in what the standard covers and how transparency is expected to be performed. The standard pushes for the development of explainability models, stating that a ‘limitation is that several definitions of higher levels of transparency require techniques that have not yet been developed – to the extent that they can be readily applied’ (Winfield et al., 2021: 9). For example, non-experts are to be provided answers to ‘why’ and ‘what if’ questions, where current explainability models are considered non-sufficient. Overall, the standardizations enforce certain narratives about what AI and algorithms are and in which direction their development needs to be steered, also with regards to agency.
Ascertaining actors and ascribing agencies
The standardizations ascertain the importance of actors by determining who gets to be included in the standard-making, and by specifying and making claims about which actors are to be involved in performing AI transparency. Primarily, they designate the actors responsible for adhering to the standard. In the case of the UK algorithmic transparency standard and the municipal registers, adherence is described as a matter for the organization responsible for implementing a system, regardless of its developer (Central Digital and Data Office, 2022). The intended users of the IEEE standard are, on the other hand, the specifiers, designers, manufacturers, operators and maintainers of autonomous systems (Winfield et al., 2021).
The possibility of third-party relations impacting transparency performance is acknowledged by the standards, through remarks such as that the actor responsible for standard adherence could have to rely on (and perhaps lack) information from other actors. In this relationship, the standards are expected to be of support. The whitepaper of the public registers states that civil servants and their vendors will benefit from the guidance on ‘what kind of transparency is needed and how to provide this information understandably’ (Haataja et al., 2020: 6). However, some uncertainty prevails regarding standard adherence. The report on the pilot of the UK algorithmic transparency standard states that work still needs to be done for practical implementation, including clarifying who is responsible for filling out the standard template and at what stage in a tool's lifecycle this should be done (Dickens and Elena, 2022).
Furthermore, the standards recognize, to various degrees, that transparency might need to be performed in different ways depending on whom it is aimed at. The IEEE standard has a stakeholder-specific approach, acknowledging that actors could have different transparency needs depending on their role in relation to the system. The standard distinguishes transparency according to whom it is for: for users, for the general public and bystanders, for validation and certification agencies, for incident investigators, and for expert advisors in administrative actions or litigation (IEEE, 2022b). From a contextual perspective, the IEEE standard is however lacking in granularity, since it is to be generally applicable to all sectors. The other cases, from the UK, Helsinki and Amsterdam, apply to the public sector or to governmental use, and are in that sense context-specific (see Table 1). However, in practice they pertain to a wide range of uses, from health and housing to garbage collection and education. The public information provided is moreover intended to be read by the public, implying that it has to conform to a broad stakeholder group.
In the making of the IEEE standard, it is described as critical who gets a say in the design process of technologies (Winfield et al., 2021). Less (at least explicit) self-reflection is demonstrated with regards to whose voices are heard in the standard-making process. The making of the UK algorithmic transparency standard is claimed to have been influenced by workshops with internal stakeholders from public government and external experts, in addition to a study with the public through focus groups and online communities: ‘we understood that if the final user group of any algorithmic transparency measures is supposed to be the general public, we needed to ask them how we can be meaningfully transparent about algorithm-assisted decision-making’ (Domagala, 2022).
Retelling risks and imagining impacts
In the standardizations, a prominent focus is on assessing, and providing information about, risks and impacts (Table 1), implying that risk evaluation is perceived as a focal motivation for transparency needs. In the UK algorithmic transparency standard, one of the main features is to provide an impact assessment and to describe risks and risk mitigations (Central Digital and Data Office, 2022). In the IEEE standard, it is stated that the level of transparency required should be guided by an ethical risk assessment (Winfield et al., 2021). For the public registers, it is also encouraged that information is provided regarding trade-offs between risks and benefits of implementing the technology, as well as risk levels and risk management: ‘Focus on the risks producing legal or other significant effects, and the ones posing a risk of injury, death, or significant material or immaterial harm’ (Haataja et al., 2020).
The definitions of risk and risk management are, however, open for interpretation. In the Amsterdam algorithm register, an entry about holiday rental housing fraud detection describes risk as the possible impact on alleged offenders (City of Amsterdam, 2022b). An entry in the Helsinki AI register, about a maternity clinic chatbot, on the other hand describes the mitigation of risks in terms of processing personal data and making sure users understand that they are interacting with a chatbot (City of Helsinki, 2022c).
Additionally, the requirements for risk assessments seem to pertain to imagined future damaging impacts, rather than to already present harm. This puts into question how much standard adherence will result in revealing unknown current impacts, or whether it will rather produce reiterations of known risks. Furthermore, self-assessment of risk is the common approach, even though the role of ‘expert stakeholders’ for certification, auditing or expert advising is also acknowledged. Assessments of this sort imply a cost-benefit analysis of implementation, aiming to ask: what if it were done otherwise?
Alternatives and alterations
In the UK Algorithmic Transparency Standard template, one section is ‘Alternatives considered’, for descriptions of ‘non-algorithmic alternatives considered, or a description of how the decision process was conducted previously’ (Central Digital and Data Office, 2022). This suggests that the responsible actor needs to consider whether the algorithmic system is actually the best option, possibly inviting a comparison with previous non-algorithmic procedures. This attention to alternatives is not clearly prevalent in the other standards. However, they all to some extent indicate repair and maintenance work of both technologies and standards: in the case of technologies, in the form of repeated risk mitigations and repair work; for the standardizations, in terms of standard maintenance.
In total, the standardizations invite maintenance and complementation, while still being depicted as stable enough to serve as standard procedures and as assessable, measurable requirements of transparency performance. Yet, they still leave much room for interpretation.
Stabilization efforts and translational acts
With the theoretical exploration and empirical case study as points of departure, what congruences and frictions emerge in the ideas and practices of standardizing AI transparency? The empirical sample shows a perceived urgency of setting standards for AI transparency. The standardizations are depicted as pioneering and important advances, centred around beliefs about which technologies and uses need to be transparent and about what is possible to achieve by transparency as well as by its standardization. However, to standardize AI transparency is a normative and ontological enactment, through the standardizations' way of negotiating and proclaiming what AI transparency is, why it is needed and how it should be realized. In this process, interpretations of technologies, norms, ethical values and regulations are translated into requirements.
A prominent process within the standardizations is assessing, and providing information about, risks and impacts of the use of technology. This is also how guidelines and legislative frameworks frequently approach AI governance: by risk assessments and degrees of risk, with possibly different requirements as a consequence. However, the choice to focus on risk assessment implies a cost-benefit evaluation presupposing that there is a social benefit to begin with, which might be reduced by mending risks (Winner, 2020: 145). This corresponds with the critique of public registers as enforcing information disclosure after implementation, rather than initiating public discussions before an AI system is adopted (Jansen and Cath, 2021: 188). Moreover, in the cases of the public municipal registers and the UK transparency standard collection, the public is expected to embody an auditing capacity as a critical audience (Kemper and Kolkman, 2019). These are examples of how the standardizations produce relations of accountability and agency (Woolgar and Neyland, 2013). Transparency actions also risk transferring accountability onto the people impacted by the system, as they are expected to be informed and to take action if they disagree with what is described. As such, standardizations are part of legitimizing the use of technology, while also forming a type of non-democratic rule-making to govern AI (Veale and Borgesius, 2021; Ebers, 2022).
If considering meaningful transparency as actor-dependent, an important issue is to whom AI use should be transparent (Kemper and Kolkman, 2019). In general, participation and co-production with broader stakeholder groups in standard-making for AI are rarely stressed, beyond industry involvement. Considered as boundary objects (Star and Griesemer, 1989), the standards could facilitate translation between actors, such as developers, vendors, civil servants, and members of the public, and work as tools for validation and certification agencies, incident investigators, and expert advisors. The stakeholder-specific approach of the IEEE standard could be beneficial for this purpose. Yet, the potential for this work to create a common understanding might be impaired by the lack of specificity of the standards, if interpretations of requirements diverge too much. To foster AI transparency as a relational process (Lee, 2021) would likely require dialogue and discussion to be meaningful (Bates et al., 2024).
Frictional ideas of transparency and standards
Conceptually, there is a friction between transparency of AI and the foundational ideas of standards. First, when dealing with transparency of AI: what is transparency (by whom, for whom), and where and when is it? This conundrum is further intensified by the spatial and temporal instability of AI technologies, as they are shaped by the training data used, the possible adaptation (or mismatch) to the site of application, as well as the possibility of continuously ‘learning’ systems. For example, whether a system could have a harmful impact could depend on the time, site, and version of its application, which is not adequately acknowledged by the standards. Second, with regard to standardization as a governance tool for AI transparency, we should also ask: where is governance, and when is it? Technologies also enact governance by how they impact and steer social and material life (Woolgar and Neyland, 2013). The ideal of standardization is thus to offer stability, solidification, and predictability, while AI technologies are based on ideals of adaptability, flexibility, and learning capabilities, and, furthermore, transparency is ideally see-through, light-shedding, and able to move with, and between, its targets.
Moreover, the temporal affordances of governing AI by standardization can be found both on a structural level, of fast technological development and slow regulatory response, and on a system-specific level, due to matters such as continuous learning, the historical specificity of data, and changes in the context to which AI technologies are applied. Together, these factors pose hardships in knowing when and where to audit (Ananny and Crawford, 2018; Seaver, 2019). This is apparent in the cases studied, regarding when in a system's life-cycle a transparency standard is supposed to be used. As an additional layer, the IEEE standard includes imagined future technological developments of explainability, extending the temporal affordances to what lies before us. To include anticipated technological solutions as requirements for standard adherence complicates the idea of standards as being more practical, specified, and possible to adhere to, in comparison with guidelines or vision papers.
Obscurities of standardized translucency
It is suggested that there is such a thing as technical standards and, at the other end, social values. The standardizations of AI transparency, however, concern sociotechnical processes and entanglements, interactions and information flows. To a large extent, the standardizations analyzed in this article seek to enforce transparency as a straightforward information transfer. The descriptions of public registers as windows to AI use suggest that they are thought of as providing a clear view of a comprehensive reality, to be seen and thereby interpreted. Yet, what the standardizations exemplify are rather productions of situated translucencies (Lee, 2021) and managed visibilities (Flyverbom, 2019), as their requirements make some shapes and actions visible, while others remain blurred or out of sight. Even when adhering to the standards, there is much room for interpretation, so that the gaze can be steered, risks chosen, and visibility managed.
Moreover, the standardization initiatives all express their novelty, and their performativity in the current governance landscape. The standardization of AI transparency might well currently be in a state of possibilities, with forking paths before us of possible directions into what AI transparency should (or could) be. Still, the standardizations are efforts to stabilize the translucence of AI and, considering their resemblance, they do not show a plethora of ways to perceive and perform AI transparency. Even if the empirical sample of this article is of limited scope, it suggests that we are already in a stage of stabilized ideas of AI translucencies.
Obliviating opacity, obliviating otherness
The standardizations of AI transparency are filled with dreams of interoperability. The possible friction between the general and the specific, the global and the local, encompasses the problem of likeness. For something to be measured against the same standard, it requires likeness amongst the entities to be evaluated (Busch, 2011). Does the transparency of AI and algorithmic systems, with their different developments, functionalities and uses, embody such a likeness to the degree that it makes sense to measure it against the same transparency standard? A standardization puts into question the nature of things and bodies, as well as ontological uncertainties and insecurities (Woolgar and Neyland, 2013: 206). The standardizations trouble what transparency is, but also what AI or algorithmic systems are, and how to interpret the concepts of information, risk, and impact in relation to them. While the idea behind standardization is argued to be disentanglement, by separating ‘essential characteristics’ from situations of use (Laurent, 2022), it is questionable whether transparency of such an ‘essence’ is enough in the case of AI. As argued by Jansen and Cath (2021: 190), AI systems cannot be considered in isolation from the context of their deployment. This could to some degree be solved by sector-specific standards; yet areas such as the public sector or healthcare are broad, and uses, needs, and practices vary greatly.
Furthermore, the motivation for standardizing cannot avoid coming into friction with a relational understanding of AI transparency as performed in a mix of infrastructures, non-humans (systems, technologies, devices, sensors, robots, artefacts) and humans (system developers, users, professional users, data subjects, pupils, tenants, patients). A standardization could either be stable enough to embody governance of transparency in a particular direction, with detailed disclosure requirements and processes, or adaptable enough to enable a relational and situated AI transparency, where information practices would be contextually dependent in order to be regarded as meaningful. The cases analyzed pose a large risk of not being specific enough in their requirements to guarantee the disclosure of possible biases and risks (Kaminski, 2020), and thereby also of not embodying the idea of AI transparency as potentially empowering those that are impacted.
Standardization as a governance solution for AI ethics
Standards are never freed from ethics (Busch, 2011). This article still argues that there is an important difference between standardizing the electrical wall socket, internet protocols, or even (evidence-based) medical procedures, and standardizing social ideals and values, such as transparency. One fundamental aspect is the re-arranging of the vague concept of AI transparency into requirements. This also means applying a governance strategy designed for goods, toys, and electrical devices to ethical principles. It becomes part of a development of techno-regulation, in which rights enactments become increasingly part of ‘imagined-possibles of digital innovation’ (Rommetveit and van Dijk, 2022).
By this reasoning, this article argues that we risk a governance landscape where too much power is delegated to a paradigm of technological solutionism in how to govern AI. As argued by Timmermans and Epstein, what matters is not just the choice of which standard to use, but also the choice to make standards a main regulatory instrument to achieve important societal goals: ‘Just as the choice of one standard over another signals a preference for a specific logic and set of priorities, so the choice of standards of any sort implies one way of regulating and coordinating social life at the expense of alternative modes’ (Timmermans and Epstein, 2010: 85).
In total, there is a danger in treating standardization as a magic harmonizing solution, as shown by the complexities of standardizing AI transparency and relying on it to achieve ethical AI. This calls for a more sober view, where we, in the words of Lampland and Star (2009), disrupt society's romance with standards, a romance that causes inflated expectations of what standards can accomplish. Instead, standardization should be acknowledged as a complex sociotechnical process of high value for specificity and interoperability, but with limited capacity to safeguard important ethical values.
Conclusions
There is a great need to specify what AI transparency should be. However, as this article explores, there is a conceptual tension between AI transparency and standardization, in terms of the ideas behind the concepts and the goals they are aimed at. While standards are underpinned by ideas of solidification, stability and clarification, transparency is considered to carry a flexible, see-through quality. Furthermore, AI technologies are depicted as ‘black boxed’, complex and in flux. A lot of hope is put into transparency, perceived to be able to repair problems of opacity and to support accountability and fairness. Whether transparency works as a solution for ethical and responsible AI has, however, been problematized. In the empirical cases of standardizations of AI transparency analyzed in this article, transparency is largely presented as a static measurable value, a straightforward transfer of information, or a window to AI use. The complexity of meaningful AI transparency, and of standardization, needs to be acknowledged in the discussions regarding how to govern AI by standardization, and specifically how to govern important sociotechnical values in this way.
The empirical cases claiming to be standardizations of AI transparency are described as pioneering and able to shape technological futures. Still, the resemblance between them, in how transparency is to be performed, suggests that the making of AI translucencies is already stabilizing into similar arrangements. To rely heavily upon standards to govern AI transparency could yet represent an empty promise of governance, due both to the instability of AI technologies and the translucencies thereof, and to the room for interpretation and manoeuvre that still persists within standardized AI translucencies. Standards are believed to reduce uncertainty, but depending on how they are formed and framed, they also risk producing a new distribution of uncertainties and accountability. Additionally, standardization entails allocating rule-making to non-democratic processes. There is an overarching risk that the governance of AI in general, and transparency specifically, is left to be shaped too much by technological solutionism, thereby allowing the standardization of social values (or even human rights) to be carried out in the same manner as that of any other technical product or procedure.
Acknowledgements
I would like to thank the editors of Big Data and Society, as well as the anonymous reviewers, for providing thoughtful and constructive comments. I am grateful to the readers who have offered valuable comments and guidance along the way, including Katja de Vries, Francis Lee, Stefan Larsson, James White, and the rest of the AI and Society team at Lund University. I would like to acknowledge the Swedish Research Council for funding the programme Artificial Intelligent use of Registers (AIR Lund). I also want to thank the members of AIR Lund and Mammography Screening with AI (MASAI), and The Wallenberg AI, Autonomous Systems and Software Programme – Humanities and Society (WASP-HS), for generously providing interdisciplinary insights into the sociotechnical matters of AI.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by Vetenskapsrådet (grant number 2019-00198).
