Abstract
What does it mean to ‘put principles into practice’? As machine learning algorithms and Artificial Intelligence are given increasing control over our lives (delivering credit scores and welfare risk assessments and monitoring borders with facial recognition), public, private and civil society organisations have produced numerous guidelines foregrounding different ethical principles (e.g. fairness, accountability, transparency) meant to ensure that these systems do not harm already marginalised groups. Putting these principles ‘into practice’, however, is not as straightforward as it seems. The Algorithm and AI Register, which collects information about algorithms used by city governments (namely Helsinki and Amsterdam), is one recent attempt to make good on these ethical principles, but it has subsequently been criticised for not living up to its own lofty ideals. This article is based on long-term ethnographic fieldwork with one of the companies behind the Register. Building on recent work from valuation studies, which examines empirically how abstract values are enacted through mundane routines and procedures, we argue that rather than moving from principles to practices, downstream as it were, the process is more iterative, and it is not just practices but also principles which are shaped in the proceedings. We introduce two concepts, valuable action and actionable values, to sensitise researchers to the deeper interrelation of values and actions, and argue that in order to make more ‘ethical’ algorithms we need to think more symmetrically about principles and practices.
Introduction
On Monday, 28 September 2020, the world’s first Algorithm and Artificial Intelligence (AI) Register was publicly launched (online) at the New Generation Internet Policy Summit. The Register, which takes the form of two websites providing an overview of the algorithmic systems used by the Cities of Amsterdam and Helsinki, was showcased through a brief promotional video set to rousing music. According to the video, the Register was created to respond to the ‘citizens’ need’ for understanding how algorithms used by governments worked and how they affected their daily lives.
Machine learning algorithms have been used for everything from assigning credit scores, to matching job candidates with jobs, to predicting criminal reoffending. Yet because these algorithms are trained on past data, they often exhibit biases against groups already marginalised in society, as has been shown by research from sociology (Airoldi, 2021; Joyce et al., 2021; Lupton, 2014; Marres, 2017; Orton-Johnson & Prior, 2013) and the social sciences more generally (Crawford, 2021; Eubanks, 2018; Gillespie & Seaver, 2016). The Register, the video continued, ‘gives citizens the transparency they deserve’: citizens could now ‘Learn why and how algorithms are being used and who is affected; or dive into: What data is being used; How it’s processed; How discrimination is prevented; The degree of human oversight; and How risks are handled.’
The Register emerged out of discussions within AI ethics, which we will use as a catch-all term to designate attempts to regulate, tweak or critique algorithmic systems based on their potential social harms. Over the past several years, numerous public, private and civil society organisations have produced a proliferation of AI ethics guidelines proclaiming the importance of adhering to certain ethical ‘principles’ (e.g. fairness, accountability, transparency) in the development of algorithmic systems. Yet for the most part, they have said little about what this might mean in practice (Fjeld et al., 2020; Jobin et al., 2019). Thus the need to ‘put principles into practice’ has more recently become a repeated mantra within the AI ethics community (Floridi, 2019; Morley et al., 2020). The Register, which is framed in the launch video and elsewhere as an embodiment of the principle of transparency, is an explicit response to these calls.
The Algorithm and AI Register was generally positively received and even explicitly endorsed by some of the most prominent actors in the community (Floridi, 2020) but the Register also received criticism, echoing broader dissatisfaction with the toothless or tokenistic nature of mainstream AI ethics initiatives (Greene et al., 2019; Sloane, 2018; Wagner, 2018). Corinne Cath and Fieke Jansen cautioned that, with no legal obligations to report algorithms in use, the cities only needed to share the least controversial, mundane technologies, as opposed to algorithms running in welfare provision or policing, with the potential for much greater harms (Cath & Jansen, 2021). Moreover, while informing the public, the Register, Cath and Jansen argued, ‘normalises’ the use of algorithmic systems in cities, sidestepping the important question of whether such systems should be used at all. As with all transparency initiatives, shining a light on one thing has the effect of concealing something else (Holmberg & Ideland, 2012; Strathern, 2000).
This situation raises numerous questions about the relationship between principles and practices. How do we know when some practices make good on a principle? Which principles are the relevant arbiters? Can principles be corrupted or narrowed over time and, if so, how? Is it fair to judge concrete endeavours against the benchmark of abstract ideals?
While we cannot hope to answer these questions definitively, in this article we want to interrogate some of the taken-for-granted assumptions about principles and practices. For example, it is assumed by many in the field that we must move from the principles to the practices, downstream as it were (Hagendorff, 2022; Mittelstadt, 2019). There is also an assumption that to evaluate such endeavours, we need to focus on the end products and their consequences. In this article, we instead aim to consider in detail the overall process of putting principles into practice. To do that, we draw on long-term ethnographic fieldwork conducted in Finland, and online, within the AI ethics field. We focus particularly on our engagements with a Finnish start-up company which played a pivotal role in the creation of the Algorithm and AI Register.
By retracing the emergence of the Register, we demonstrate that, rather than a linear and mechanical process, putting principles into practice is more a matter of what we will call practical ‘tinkering’ (Mol et al., 2010). This concept has been used before to describe the work of fitting practices to various principles (or values) (Mol, 2010), but here we argue that one might think of principles being tweaked as much as the practices, through constant shuttling between the two.
After reviewing recent work in AI ethics and applied ethics concerning the relationship between principles and practice, we then turn to work from sociology and particularly the field of valuation studies. In contrast to much work in sociology which sees values as an explanation for social order or action, valuation studies looks at values as ‘performed’ through situated practices and technical artefacts, as we will explain. Building on this work, we introduce a pair of concepts: ‘valuable actions’ and ‘actionable values’, which emphasise not only the interrelations between principles and practice but also potentialities of values and action that are calibrated through the process of tinkering. In the analysis that follows, we identify four key moments in the conscious move from principles to practice. These moments are not meant to be exhaustive but rather represent exemplary moves in the process as we see it. In each moment, we find that the remit of possible valuable actions/actionable values is progressively narrowed as a result of discursive pronouncements, technical artefacts and practices. While this pattern might not repeat itself in every case of implementing principles, we suggest that some form of back-and-forth iteration is likely at work.
We hope our analysis of the process will arm both social researchers and practitioners in AI ethics with a more honest and realistic account of what putting principles into practice actually entails. We argue for a more open-ended and iterative approach to relating principles and practices and call for more open acknowledgement and examination of the tinkering and negotiations involved.
AI ethics and applied ethics
In 2019, Floridi was one of many voices calling for fellow AI ethicists to put their numerous principles ‘into practice’. He also warned about the pitfalls of poorly applied principles, naming five:
- ethics shopping: selecting principles which best justify already taken actions
- ethics bluewashing: tokenistic actions, ethics as a PR campaign
- ethics lobbying: ethics as alternative to legislation
- ethics dumping: exporting unethical practices to countries with laxer laws
- ethics shrinking: doing the minimum when more is not demanded (Floridi, 2019).
The response to Floridi’s and others’ calls came swiftly. Interestingly, this happened not in the form of many principle-led projects but rather through further frameworks, guidelines and toolkits (Fetic et al., 2020; Zicari et al., 2022). Jessica Morley and her colleagues, for instance, provide a framework for moving from what they term the big five principles to specific practices, noting a series of actions which might be involved with these principles in different sectors (Morley et al., 2020).
Nevertheless, these proposals raised further doubts. In a follow-up piece, Floridi and Cowls (2019) cautioned that many implementations of principles were too flexible (susceptible to ethics washing) or too rigid (unresponsive to context). Canca (2020) noted that one of the problems with implementing AI principles is that they do not distinguish between ‘core’ principles (like justice, which have inherent value) and ‘instrumental’ ones (like transparency, consent and privacy) which only serve to help protect or implement the core principles. As Hagendorff points out, the problem with all these attempts to put principles into practice is that:
. . . all frameworks still stick to the principled approach. The main transformation lies in the principles being far more nuanced and less abstract compared to the beginnings of AI ethics code initiatives. (Hagendorff, 2022, p. 3)
These pitfalls in implementing principles are nothing new to the field of applied – or practical – ethics. In A Companion to Applied Ethics, Beauchamp (2003) breaks down the field into three general approaches. Firstly, there are ‘top-down’ approaches: general norms or principles applied to new particular situations. This resembles the somewhat facile version of ‘putting principles into practice’ held by some AI ethicists (Seger, 2022) and suffers from the obvious critique that abstract principles will always be insufficiently responsive to the specificities of local contexts. Secondly, there are ‘bottom-up’ approaches, that is, starting with existing social arrangements and practices and abstracting from them. Beauchamp notes that this does not somehow avoid abstractions or theorising altogether – one must have some idea of what counts as ethical behaviour (or ‘normal’ behaviour) in order to select which real-world behaviours are pertinent to aggregate and abstract. Finally, ‘coherentism’ represents a sort of compromise between the two, in which principles are, over time, matched with practices. One important lesson from applied ethics, not always acknowledged in AI ethics, is that principles never come alone: there is always a field of principles which might serve as relevant guides for action and which may come into conflict (O’Neill, 2018).
In summary, it is by no means self-evident how one is supposed to go about putting principles into practice. Interestingly, what unites all these discussions is that they attempt to settle the matter of putting principles into practice in general and in the abstract, with scant reference to specific cases or situations. While these discussions within ethical philosophy raise important points about principles and practices, which we will return to, we see a benefit in looking more sociologically and empirically at the process, not as a final arbiter of theory, but as a way of understanding the challenges in putting principles into practice.
Sociology and values
It should be noted that in wider AI ethics discussions, practitioners often also use the term ‘values’ interchangeably with principles, often in relation to aligning AI with ‘human values’ (Christian, 2021; Russell, 2017). Values and principles are not necessarily the same: we might think of values as (largely unspoken) ideas of the good, proper or desirable within a community or culture whereas principles are more explicitly agreed upon. Principles tend to populate official policy reports and public statements rather than everyday social interactions. Yet, like principles, values also provoke questions about their relations to practices to the extent that they are guides for action.
Although there is not space here to look at this in detail, values have been discussed in sociology going back at least to Durkheim (1912/1995) and Weber (1917/1978). Talcott Parsons (1937) made values central to his social theory: shared values help regulate society and establish norms. But as sociology turned away from Parsons’ functionalism, the study of values largely migrated to anthropology (Dumont, 1986; Kluckhohn, 1951) and social psychology (Rokeach, 1973; Schwartz, 1992), where they were seen as unconscious drivers of behaviour (see Spates, 1983). Sociology, which adopted more critical, Marxist frameworks, has generally focused on ‘value’ in the sense of ‘worth’ (Graeber, 2001), theorising alternative understandings of value in contrast to economic exchange value (for example ‘use value’). The study of plural ‘values’ in the sense of ‘ideas of the good’ (Graeber, 2001) has been, arguably, dormant within sociology (Hitlin & Piliavin, 2004) and scattered (Lamont, 2009). Some notable exceptions are Raymond Williams, who in his book Keywords (1977) collected a series of words whose variable interpretations, he argued, revealed conflicting values, and Beverley Skeggs (Skeggs, 2004, 2014; Skeggs & Wood, 2012), who has been interested in the relationship between the singular economic ‘value’ of capital and the plural ‘values’, in the sense of ‘moral understandings of what matters to people’ (Skeggs & Loveday, 2012, p. 475) – specifically, how plural values are subsumed under a singular value of exchange (Skeggs, 2014) and how working-class subjects defend their value (worth) by mobilising values like ‘care’ and ‘honour’ which diverge from dominant, middle-class values like ‘respectability’ and ‘individualism’ (Skeggs & Loveday, 2012). For Skeggs, value and values are deeply interrelated.
Much of this sociological work is interested in questions of how society is regulated, how power relations are reproduced (Spates, 1983) or how the self is constructed in relation to the dominant order. This is not our concern here, though we do acknowledge that there is power in operation any time so-called ‘shared values’ are invoked and these may crowd out more marginal values. We are also not particularly concerned with the relationship between values and economic value, though the latter frequently haunts discussions of AI ethics in the private sector, where the pursuit of profit is often positioned as a constraint on ethical pursuits.
Finally, wider social science work on values, including in anthropology and social psychology, generally aims at extracting hidden or implicit values held by people, through surveys and probing interviews. In this article we are more interested in the explicit use of values – what happens when they are invoked by people and how they are ‘put into practice’. For this we can turn to a more recent constellation of work, which takes a more interactional and practice-centric look at values.
In On Justification, economic sociologists Boltanski and Thévenot (1991/2006) analysed how actors justified particular actions with reference to universal principles (or values), ideas of the common good, which they divided into six orders of worth (inspiration, domestic, fame, civic, market and industrial). Values, in their usage, are not determining of action but resources to draw on in arguments and what they call ‘tests’. Stark (2011) continued this line of reasoning but focused on value conflicts as productive, rather than as something which needs to be settled (see also Sharon, 2021 in relation to tech). Looking empirically at how values are used is crucial for our approach, but we are interested in their use in long-term projects, not just in moments of conflict.
The field of valuation studies, arguably first launched in this journal (Adkins & Lury, 2011; Muniesa, 2011), draws on pragmatist philosophy (Dewey, 1939) to analyse how values, rather than being held by people or residing in objects, are ‘performed’ through situated practices, discursive utterances and material artefacts (technologies, metrics, formulas, etc.) (Dussauge et al., 2015; Helgesson & Muniesa, 2013). This idea of ‘performativity’ (Law & Urry, 2004; see also Butler, 1990 for a related usage) draws on science and technology studies (STS), which has historically looked at how scientific practices (help to) make the realities they purport to describe (Latour & Woolgar, 1979) through both discursive and material means. Accordingly, much of this work focuses on how technologies and procedures like competitions and rankings perform value (as in worth), but it also extends to studies of how plural values are performed.
Callon (2007) dispels some misunderstandings about performativity: for example, that it necessarily connotes ‘fake’ (as is often misunderstood in Goffman’s [1959] work on social performance), or that it takes place only in language (for example, in Austin’s [1975] work on performative utterances – there are legal, material infrastructures and social arrangements which allow ‘I baptize you’ to make it so – see Blok [2017]). Callon also cautions that performative utterances are not magic: invoking a value like fairness or transparency does not itself make it real (Merton, 1948). Performances (convincing people, assembling allies, enforcing boundaries) can and do fail, and the mechanisms by which they work must be accounted for empirically.
Thus, researchers in valuation studies, mostly employing ethnography and document analysis, tend to show how ‘valuation practices’ shape the world but not necessarily in the ways that are intended or expected. Now it is important to note that when scholars within valuation studies conceive of values as instantiated in practices, it is an analytical or methodological move which is meant to draw attention to the, sometimes hidden, politics and complexity of working with values. They are also not specifically advocating for putting values into practice (though they often advocate greater focus on practices) but are rather trying to collapse the distinction. They are also, for the most part, not specifically looking at instances where the relationship between ‘values’ and ‘practices’ is explicitly at stake.
Tinkering with values and actions
Some noteworthy exceptions can be found in the edited volume Value Practices in the Life Sciences and Medicine (Dussauge et al., 2015), where multiple chapters deal directly with this values–practices relationship. Roscoe (2015, p. 116), for example, who looks at organ transplantation, argues ‘that values and practices exist in a complex relationship; as normative discourses feed into the organization of allocation protocols, so the protocols reconfigure the meaning of those norms’. Zuiderent-Jerak et al. (2015, p. 133), who study the use of markets in healthcare, ‘propose to accept that public values are shaped in practice and that therefore the relationship between policy aims and consequences can never quite be captured through the logic of implementation’. This prompts them to argue that policy makers should be more ‘experimental in the way that they implement values’, not in the sense of ‘controlled experiment’ but in the sense of an open-ended exploration (Hacking, 1983) in which different arrangements of values and practices must be trialled in situ. If values are going to be shaped through practice, then why not try out different values or understandings of values?
We suggest that this can be thought of as a sort of ‘tinkering’. Annemarie Mol (2006), working in dialogue with STS and also valuation studies (Heuts & Mol, 2013), introduced this notion as a reaction against ‘top-down’ ethics approaches in medicine (Mol et al., 2010). She sees tinkering as negotiating tensions between different values in practical settings, for example in a care home kitchen (Mol, 2010) where the values of safety, ambiance, dignity, taste, etc. must all be balanced and apportioned without any particular value triumphing once and for all (see also Oute & Rudge, 2019). Mol (2006) describes tinkering as ‘creative calibrating of elements that make up a situation, until they somehow fit – and work’ (p. 411). Tinkering, thus, helps us get away from procedural notions of ‘implementing’ principles. This literature gives many examples of professionals tweaking and coordinating daily practices and material objects, even the subtleties of verbal exchanges, but we want to argue that it is also useful to think of values being tinkered with – or at least their meaning, connotations and associations with other values, objects and practices. Values are meaningfully changed through the process of relating them to practices.
We continue this line of enquiry by looking in greater detail at the specific repertoires involved in ‘putting principles into practice’: what is entailed in situations where this specific phrasing is invoked? While much of the tinkering literature focuses on the front-line workers tasked with implementing regimes of values, we will focus more on those designing those regimes. Now, in order to distinguish between actual invocations of the emic terms ‘principles’ and ‘practices’ and our own analysis, we will continue to use the analytic concept ‘values’ to designate the various abstractions actors work with, and we will use the term ‘actions’ to refer to localised doings, always understood as ‘situated’ in the sense that ‘every course of action depends in essential ways on its material and social circumstances’, as anthropologist Suchman (1987, p. 70) has put it.
Building on work in valuation studies, we argue that we need to acknowledge more fully the interdependence of values and actions. We could have used the concept ‘value-actions’, but the tension, the incompleteness of their alignment, is a key part of the process: values and actions are more like two poles always pointing back at each other and, as we will see, the process we describe is all about shuttling between these poles. We will instead introduce the concepts of ‘actionable values’ (values which can potentially be associated with actions) and ‘valuable actions’ (actions which can be considered as instantiating values), which emphasise this mutual referentiality. The -able suffix also emphasises that, in the process of putting principles into practice, the union between values and actions is a fragile one, always ready to fall apart but also to be configured anew.
Methods and case
In this study we draw on ethnographic fieldwork conducted within the AI ethics community by the second author of the article, since March 2019, in the Finnish capital region and online.
As we mentioned earlier, the activities of one Finnish start-up company were instrumental in the Register’s development, and it is precisely those activities that we draw upon in this article. Due to the small number of actors working on the Register, it would be impossible to guarantee the anonymity of the company (though we chose to pseudonymise it), but we decided to anonymise individuals and their roles in vignettes drawn from fieldnotes (taken behind closed doors) to minimise any representational damage. The company – which we designate as AI Ethics Inc. – helped develop the technical infrastructure upon which the Register would ultimately rest, but also steered a wide range of activities that led to its creation, making it an ideal vantage point from which to view the extended development process. AI Ethics Inc., which was established in 2018, has always been concerned with the interplay between values and action, as the company’s representative explicitly pointed out in a meeting in March 2019: ‘Values define who we are, we hope everything we do embodies those principles.’
The second author was put in touch with AI Ethics Inc.’s CEO by a colleague who had met her at a panel. It was agreed that she could attend meetings and take fieldnotes. The University of Helsinki does not require clearance from the Research Ethics Committee in the Humanities and Social and Behavioural Sciences unless the research contravenes informed consent, has the potential to cause bodily harm, involves vulnerable populations, or could cause stress exceeding normal working conditions. We did not feel that this research met any of these conditions, so we did not seek formal approval; however, we still deemed it best practice to draw up a written plan of collaboration, which AI Ethics Inc.’s CEO approved. This included a research overview, as well as terms of collaboration – for example, the second author was given access to internal documents and informants were able to withdraw from the study at any time. Moreover, we agreed to send article drafts to the CEO prior to publication, a practice we honoured in this case.
While most of the material presented here derives from 2019 and 2020, we have continued to periodically follow the activities of AI Ethics Inc. to this day. Our methodological approach is therefore best understood as a ‘patchwork ethnography’, a term developed to describe ethnographic endeavours consisting of short-term field visits which are, nonetheless, part of ‘research efforts that maintain the long-term commitments, language proficiency, contextual knowledge, and slow thinking that characterises so-called traditional fieldwork’ (Günel et al., 2020). Although sporadic, the fieldwork has yielded a large corpus of data consisting of fieldnotes, documents, links, video and audio materials, compiled by the second author. This allowed us to gain a clear sense of the order of events and to locate statements which preceded practices, or the other way around. As already discussed, ethnography is the go-to methodology of valuation studies, but key to the study of performative practices is a variety of data which allows one to make contrasts and connections between ‘what is said’ and ‘what is done’, between written statements and casual ‘shop talk’, or between material artefacts (including spreadsheets, diagrams and databases) and the practices they shape or are shaped by (Akrich, 1992).
Appropriately enough, given the topic at hand, our analysis emerged through the continuous dialogue between our theoretical interests and empirical material. Equipped with insights from the literature, the first author asked questions of the material the second author collected, and the second author, in turn, provided correctives to his assumptions. Much as with values and actions, it is through this oscillating process that the argument we are advancing in this article was shaped.
Analysis
Demanding valuable action
Our first observation about the self-conscious task of ‘putting principles into practice’ is that it is necessarily preceded by performing a gap between the two. Although this gap seems utterly natural, somehow part of the nature of principles and practices respectively, it is important to understand that it is debatable which sorts of activities are ‘abstract’ or ‘out of touch’ and which are ‘realistic’ and ‘grounded’. For example, talking about principles can be rendered by some as ‘just talk’ or ‘hot air’, while in other situations it can be presented as moving towards ‘doing something’. Similarly, we can understand boardroom discussions between besuited executives about company principles as divorced from ‘reality on the ground’, but these boardroom meetings might be equally governed by similarly out-of-touch management principles. Performing a gap between lofty values and concrete actions, we propose, helps define a space which can be filled by what we are calling ‘valuable action’, that is, potentially value-laden or value-directed actions.
We have observed numerous invocations of this gap between principles and practices in the AI ethics debates over the past few years. Most recently, in September 2023, we remotely attended a workshop on the topic of ‘ethical and responsible AI systems’ with a whole breakout session dedicated to ‘putting ethical and responsible AI into action’ (emphasis ours), where the key focus was on the move from ‘high-level ethical principles’ (emphasis ours) towards developing and deploying concrete instruments. However, we observed such utterances as early as the spring of 2019, at the beginning of fieldwork, when the second author was invited to attend a European working group meeting, remotely, with an employee of AI Ethics Inc. at their premises.
The first part of the 2019 meeting was dedicated to discussing ‘practical tools for embedding values and ethics’. By the second presentation of the morning session, our informant was gradually becoming frustrated. ‘This is going to be a long day’, she sighed. While part of the problem was the poorly delivered hybrid format of the event, the presentations’ content played a key role in her dissatisfaction. This became obvious during the third presentation, when the employee decided that in addition to having her microphone turned off, she would also shut down her camera. She stretched and yawned, saying ‘This is so high level’. When asked to elaborate on what she meant, she added that it was ‘interesting but removed from practice’. Such utterances indicated that certain ethical or philosophical practices, including those that supposedly revolve around ‘practical tools’, could also be seen as out of touch, divorced from practice or reality. This occurs by discursively separating actions into different levels.
Our informant recovered a bit during the last short talk of the first session, which was given by a professor of public understanding of ethics. She really liked the professor and would repeatedly state ‘Fascinating lady’ or ‘I like this lady’. When asked what in particular she liked about her, the informant said that it was her ‘practical’ approach towards ethics. The main message the professor was trying to convey was that ethics was something ‘doable’ and that companies should not panic, as there were always going to be ‘trade-offs’. This invocation of trade-offs, we note, helps enforce the separation between an abstract world of values, with logical rules for manipulating them, and an apparently messy world of actions in which values (and actions) inevitably conflict and compromises must be made. The world might look the same as the one where ‘high-level talk’ happens, furnished with desks and laptops and PowerPoint presentations, but it is a world of complications and compromises. This is the world in which AI Ethics Inc.’s employees see themselves as operating: their job is to balance the interests of the tech industry with those of the rest of society.
Now, certain approaches to ethics or applied ethics may indeed be hard-to-implement, out-of-touch armchair exercises; our point is merely that this distinction is not self-evident and needs to be redrawn for different situations. Noting the difference is essential to creating the need for valuable action, but also to establishing a perimeter around the space of acceptable valuable actions. It also has the effect of banishing certain ethicists and policy people from this space: it enforces a boundary between those who are poised to address this gap and those who are not. Not all actions – nor all actors – will do! This initial act of rhetorically clearing the terrain between values and actions, to be populated with valuable actions, is our first form of ethical tinkering.
Arranging actionable values
In the common-sense, or ‘top-down’ notion of putting principles into practice, one starts with a pure value and then one implements it through a set of actions, or a technological artefact. Yet from the perspective we have adopted, this is a somewhat simplistic view of what happens. Rather, as we saw in our material, there are often multiple values in play which must be selected from and made to relate to each other.
A good illustration of this is the text on AI Ethics Inc.’s website, whose home page kept changing throughout the fieldwork. At one point, the sans-serif word ‘democracy’ was emblazoned prominently across the home screen; at another, it was replaced by ‘explainability’. ‘Trust’, which plays a very important role in the AI ethics discussion at the European Union level (Chatila et al., 2021; Smuha, 2019), was also often highlighted, but we occasionally noticed less prominent principles in the field of AI ethics being featured as well, such as ‘Environmental, Social, and Corporate Governance’. One might view this magpie-like approach to values uncharitably as ‘ethics shopping’, in Floridi’s (2019) terminology, but it is important to note that this was not undertaken retroactively in order to justify actions (or plans) which had already been decided upon. This use of different principles, we argue, was part of an exploratory process to define which values best fit certain circumstances.
The particular constellation of principles was not arbitrary either. From the very beginning, transparency was at the heart of AI Ethics Inc.’s endeavour, something which was already clear at the first meeting we were invited to attend at the end of March 2019. During the meeting, two of the company’s representatives presented the first iteration of its main product prior to its launch: a platform which would later serve as a backend for the AI Register. They also talked about the company’s background and mission: ‘we are not trying to solve the whole [of] AI ethics, but we believe we can provide support and means for transparency and accountability: these are foundational’.
When we met with these representatives privately in person a month later, we discovered that what led to this conclusion was the discussion about machine learning algorithms as ‘black boxes’ (Pasquale, 2015). They also mentioned an AI4People paper (Floridi et al., 2018), which emphasised ‘explicability’, or the ability of machine learning algorithms to explain their behaviour. Interestingly, in AI Ethics Inc.’s interpretation, as we will see, explicability was equated with transparency, and what made transparency particularly appealing was that it would be easy to make concrete. This is because, it has been argued, transparency’s connotations of morality and accountability have over time been reduced to the simple matter of providing information (Corsín Jiménez, 2011).
During the pre-launch meeting, the company representatives talked about the first iteration of their platform as a Minimum Viable Product (MVP). On that occasion, transparency emerged as what we might call the Minimum Valuable Product, in the sense that they were searching for a value uncontroversial enough that everyone could agree on it: transparency was enacted as the lowest common denominator. Yet the selection of transparency can only be understood in the context of relations between it and other values, like explicability or accountability. Perhaps the most obvious expression of this could be found in the White Paper published upon releasing the AI Register. While the Register was defined ‘as a means for transparency’, transparency itself was defined as a means for enacting other values:
The primary purpose of transparency in the context of AI is to allow stakeholders to understand how the system works, how its decisions were done (‘explainability’) and to contest its behaviours (‘accountability’). Transparency not only enables accountability for people directly impacted but also allows the people responsible, independent auditors and civil society activists to evaluate the workings of the system. In the public domain, transparency is a means for protecting democracy and everyone’s right to influence society’s development and their living conditions. (Haataja et al., 2020, p. 5, our emphasis)
Notice how the choice of words ‘allow’ and ‘enables’ positions transparency as a precondition for the other values of explainability, accountability and democracy while displacing responsibility for actioning those values to other actors at other times. In fact, for AI Ethics Inc., transparency is not considered as a value as such but a ‘proof of ethics and a critical means of evaluating values embedded in technology’, as the company’s representatives would state while recounting their origin story during the pre-launch meeting. This is similar to how others see transparency as an ‘instrumental’ principle, guaranteeing others like fairness (see Canca, 2020, mentioned earlier).
The phrasing above, which is representative of other AI Ethics Inc. materials, also takes for granted this relationship between the values. No mechanism linking them needs to be described – the relationship is assumed. It could be seen as a hierarchical value relationship (Dumont, 1980), but it is not clear if transparency is above (like an umbrella) or below the others (like the roots of a tree). It is, in any case, a sort of diagram connecting transparency to various other values.
What we hope to have shown is that the ‘process of putting principles into practice’ also involves selecting from possible principles and arranging them in relation to one another. These relationships between values also change the ways in which they are actionable – to place transparency as instrumentally securing ‘democracy’ and ‘fairness’ is to emphasise certain understandings at the expense of others. For example, transparency is rendered as an opportunity (or a threat) to be taken up when necessary: a potential, rather than something lived with day-to-day, as will become clear later. In some ways this process of arranging actionable values shrinks the remit of the project (by focusing on just one aspect of AI ethics), but in other ways it expands the remit, at least in its supposed implications, by invoking relationships to more ambitious core values which need not be directly acted upon. This arranging of actionable values is the second mode of tinkering we identify.
Allocating valuable actions
The next type of tinkering we identify is ‘allocating’ valuable actions. This is the most explicit part of the process of making a connection between valuable actions and (already selected) actionable values. Within AI ethics circles, this is often referred to as ‘operationalisation’ (Fetic et al., 2020) – much in the same way that abstract concepts might be ‘operationalised’ through specific questions on a social survey. Operationalisation, however, is not, in our experience, a simple translation process, but rather looks like a search for which valuable actions are relevant to specific actionable values. Again, this could also be viewed suspiciously as a kind of ‘action shopping’, but this would be to assume that, for principles like transparency, it is self-evident what sort of actions they call for.
A good example of the operationalisation process was the project entitled ‘Citizen Trust Through AI Transparency’, which directly preceded the creation of the Register. This project started in June 2019, a couple of months after the MVP launch, and ran for half a year. The project involved cooperation between representatives of several Finnish public institutions, facilitated by AI Ethics Inc. As the company’s representative explained during a hybrid meeting arranged by AI Ethics Inc. in mid-May 2019, at the company’s premises and on Zoom, the project vision was to ‘establish [a] human-centric approach on public sector AI through strong citizen-participation and by creating [an] international benchmark on AI transparency’. The ambition was for Finland to ‘be used as an international testbed for moving AI ethics principles to practice’.
The purpose of the meeting (lunch buffet included) was, in fact, to attract representatives of different organisations to join the project. Unfortunately, only half of those invited ended up attending, with representatives of three cities mostly participating online, while representatives of the Institute of Electrical and Electronics Engineers, the Ministry of Justice, AI Ethics Inc.’s staff and the second author of this article attended in person.
The main issue, an AI Ethics Inc. employee stated during the meeting, was how to approach the question of trust and how to keep people in the loop. This was to be done by focusing on ‘transparency in action’ and working to ‘define shared practices for AI transparency across the Cities and Governmental agencies in consortium organizations’. This entailed, amongst other things, building a shared ‘metadata model’ (a sort of template for a database) for Public Sector AI transparency and piloting the model using the AI use cases from the consortium organisations.
The model, which was released at the end of November 2019, took the form of an Excel sheet containing a list of key information that needed to be communicated about algorithmic systems, which AI Ethics Inc. synthesised by combining expert and citizen interviews, as well as feedback solicited from members of the AI ethics community. It was based on AI Ethics Inc.’s initial data model (their MVP) but was expanded to include disclosures that went beyond the specificities of the algorithmic application to consider its societal implications. The Excel sheet included questions for people deploying algorithmic systems in the public sector, such as ‘Has the service been designed for all, and what actions have been taken to promote realisation of equality?’ and ‘Has the data protection impact assessment (DPIA) been conducted for the system?’ Interestingly, this also meant, in concrete terms, bringing in other values (in this case non-discrimination, fairness and privacy) as organising principles for disclosures – so not just transparency but also the other values it was made to relate to were embodied in the Register. Later, the values guiding disclosures underwent some transformation – for example, disclosures around ‘fairness’ became disclosures around ‘equality’, while privacy was removed altogether – but the different perspectives that the ‘Citizen Trust Through AI Transparency’ project brought together (not just those of technical developers, but also legal and AI ethics experts and citizens) remained represented in the Register as well. These specific perspectives also acted as constraints, since they, in effect, crowded out other potential views, thus circumscribing the possible valuable actions the Register embodies.
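To give a sense of what such a metadata model amounts to in technical terms, the following sketch renders it as a simple data structure. This is our own minimal reconstruction for illustration, not AI Ethics Inc.’s actual schema: the class and field names are hypothetical, and only the two quoted questions come from the model itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Disclosure:
    """One disclosure prompt in the metadata model, grouped under a value."""
    value: str        # organising principle, e.g. 'non-discrimination'
    question: str     # prompt put to the team deploying the system
    answer: str = ""  # free-text response, later published in the Register

@dataclass
class AlgorithmEntry:
    """One algorithmic system described via the metadata model."""
    name: str
    division: str     # the part of the city organisation using the system
    overview: str     # plain-language description legible to any citizen
    disclosures: List[Disclosure] = field(default_factory=list)

# A hypothetical entry; only the two questions are quoted from the 2019 model.
entry = AlgorithmEntry(
    name="Example algorithmic system",
    division="Example city division",
    overview="What the system does, for whom and why.",
    disclosures=[
        Disclosure(
            value="non-discrimination",
            question=("Has the service been designed for all, and what actions "
                      "have been taken to promote realisation of equality?"),
        ),
        Disclosure(
            value="privacy",  # later removed from the model, as noted above
            question=("Has the data protection impact assessment (DPIA) been "
                      "conducted for the system?"),
        ),
    ],
)
```

Even in this toy form, one can see how such a template constrains what can be said: a value only becomes actionable here if a corresponding disclosure row exists for it.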
Thus, the various actors participating in the process tinkered with which valuable actions, that is, which specific sorts of disclosures, should be undertaken. While this still might seem suitably abstract or ‘high level’ in the sense that it involved Excel spreadsheets and was generic to any type of algorithm, one could also see it as relatively concrete in that it forced people to make definitive cuts – to include things or not and to arrange them in certain ways and not others. These Excel files might seem flimsy, but they are material in the sense that they helped to constrain future actions (Akrich, 1992), both for their creators, who became increasingly bound to them, and for those who would ultimately make the disclosures. For example, once ‘privacy’ was removed from the Excel file, it seemed difficult for anyone to re-open the matter of whether to include it.
Simultaneously, a community of interested parties, including developers, civil servants and citizens, was assembled and these became a willing audience for the valuable actions, which they helped to shape. Again, operationalisation inevitably shifts the remit (and responsibility) of the project and the scope of what the values make actionable. This allocating of valuable actions to corresponding actionable values is the third mode of tinkering.
Tuning actionable values
In the White Paper, the Register is defined as ‘a window into the artificial intelligence systems used by a government organisation’ (Haataja et al., 2020, p. 5). But to run with the metaphor, it is a window as opposed to a door, in that the Register lets citizens and the general public peek into these systems without allowing them to engage with the systems directly. In fact, we might think of the Register as a window with venetian blinds, whose variable opening is supposed to enable different levels of access for different audiences.
In one session dedicated to ‘a hands-on Register demo’ at the New Generation Internet Policy Summit, mentioned in the introduction, an AI Ethics Inc. representative explained:
. . . we have been thinking about three different user groups. The first one is the technical experts who have a lot of understanding about these systems. . . . Then we have the second group, which are also very interested about these systems, they may not have that deep technical understanding or background but they might have some other angles, like social sciences expertise or expertise on different domains where we apply these systems and so forth . . . . And then we have the last of those three groups in mind who are regular citizens, who are interacting with these algorithmic systems, they might not have at all or have very little interest towards diving deep into how these systems work but they are actually the impacted people and that’s one group as well.
The main focus in the Register was on the second group, the AI Ethics Inc. employee explicitly indicated, since they are not technical experts but ‘want to really understand a little bit deeper how it works and have enough information on that’. However, it is the viewpoint of the third group that provides the basis of the Register – the description of algorithmic systems that should make sense to anyone, which, the employee stated, was the most difficult part to come up with.
The home pages of the Register websites for Helsinki and Amsterdam introduce several algorithmic systems in use as part of city services, stating their name, the division in the city to which they belong and a part of their overview description. To get a full overview of a particular system, however, one needs to click on it, as indicated by the ‘Read More’ line. By doing so, one gets more detailed information on the system, presented through a series of headings titled ‘Datasets’, ‘Data processing’, ‘Non-discrimination’, ‘Human oversight’ and ‘Risk management’. To access these, though, one needs to click once more on the heading, as indicated by ‘Show More’ and an arrow pointing downwards. Once the content has been read, it can be collapsed again by clicking the heading one more time.
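This click-through structure is, in effect, a progressive-disclosure layout. The sketch below, again our own illustration rather than the Register’s actual implementation, makes the layering explicit; the three depth levels are our shorthand for the home-page listing, the expanded overview and the fully opened headings, and the entry content is a placeholder.

```python
# Hypothetical rendering of one Register entry at three levels of disclosure.
ENTRY = {
    "name": "Example algorithmic system",
    "division": "Example city division",
    "overview": "Plain-language description of what the system does and why.",
    "sections": {  # each rendered as a collapsible heading on the site
        "Datasets": "...",
        "Data processing": "...",
        "Non-discrimination": "...",
        "Human oversight": "...",
        "Risk management": "...",
    },
}

def render(entry: dict, depth: int = 0) -> str:
    """Render an entry: depth 0 = home-page listing, 1 = overview with
    collapsed headings, 2 = every heading expanded to its full text."""
    lines = [entry["name"], entry["division"]]
    if depth == 0:
        lines.append(entry["overview"][:40] + "... Read More")
        return "\n".join(lines)
    lines.append(entry["overview"])
    for heading, body in entry["sections"].items():
        lines.append(heading)
        lines.append(body if depth >= 2 else "Show More")
    return "\n".join(lines)

print(render(ENTRY, depth=1))
```

Worth noting is that a single underlying record serves all three user groups; what varies is only how far, to return to the metaphor, the blinds are opened.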
Balancing between ‘the information needs and how to present the information for . . . different [user] groups’ was what they had tried to do, the company’s representative explained. The aim was to provide the ‘right balance for the level of information’, as stated in the White Paper. What complicates this, however, is that this ‘window into the artificial intelligence systems used by a government organisation’ has two sides.
What we have just described is the ‘public’ side of the Register, built for ‘external user groups’. The Register, however, also has a backend, designed for AI teams working in the cities and technology providers, which is inaccessible to the public. It is through this backend that AI teams can curate what is displayed on the Register. The Register is supposed to aid them, in the sense of prompting relevant disclosures and considerations which should affect the development of algorithmic applications within cities. But those working inside these organisations are also in charge of operating the blinds in the sense that they decide how much they can ever be opened in the first place.
It is the awareness of the different possible audiences for the Register, as well as of its different possible uses, which defines this stage of the process. Transparency did not in this case mean that everything is opened for everyone, as was stated already during the first AI Ethics Inc. meeting we attended. Different stakeholders have different needs and skills, and it is crucial to provide the right level of information for different parties, while also acknowledging that companies need to protect their IP. These trade-offs and tensions, between values and between actors, are settled, or at least papered over, materially through the variable routes into, and degrees of access to, the website. While there are legitimate reasons to see this result as a weak or compromised instantiation of transparency, it is only by modulating what transparency entails for different audiences that the ideal can survive, or appear to survive. It is this subtle modulation of the meaning of values for different audiences, once they have been chosen and matched to actions, which defines this final stage of tinkering for us.
Conclusions
In this article, we described an attempt to ‘put principles into practice’ in order to complicate assumptions about the linear and unidirectional nature of the process. Drawing on work from valuation studies and studies of practical tinkering, we argued for more attention not just to how practices are adapted to principles but to how principles are adapted to practices. To emphasise this interrelation we introduced two concepts, valuable actions and actionable values, and showed how the process of tinkering oscillated between the two in overlapping stages. First, a gap is performed which defines a space of valuable actions; second, relevant actionable values are tentatively selected and arranged; third, valuable actions are allocated to particular actionable values, some of which are added or removed accordingly; finally, the actionable values are tuned for different audiences, by variably circumscribing their remit. At each stage of value tinkering, not only is the remit of possible valuable action modulated, but the community of involved actors and the interested audience, as it were, is expanded and solidified.
However, modulating the remit is different from saying that the value has been corrupted or ‘narrowed’: the value does not self-evidently specify which actions it should pertain to. In some sense, ‘narrowing’ is inevitable: with finite resources, the actions taken will always be fewer than all possible actions which could be taken. Rather than asking if practices live up to principles, perhaps we should be assessing the process through different sorts of questions: is this narrowing effective or justifiable? At what points does this narrowing become irreversible, and how responsive is the process to participation from others or to new information from empirical tests? To what extent can the question of which values are relevant, or who should be involved, be reopened?
This work contributes to critical, sociological studies of algorithms and AI by offering a more detailed account of how algorithmic harms are addressed. We argue that it is not enough to offer abstract principles or normative stances; we need to understand better how these principles might be paired with relevant practices. We hope that our concepts of valuable actions and actionable values, and the questions above, will equip not just sociologists and ethicists studying AI, but also social scientists studying principled behaviour more generally, to better understand and critically evaluate these processes.
At the time of the launch of the AI Register, the city of Helsinki was showcasing five projects and Amsterdam only three. This was not an impressive number and, as already noted, the projects seemed rather innocuous. It is, as yet, unclear if the Register has led to any journalistic investigations, citizen awareness or increased participation in civic life – practices which the value of transparency is assumed to unlock. We could judge this as a failure – either to hold municipal governments accountable or to enrol a large enough community of interested citizens and engineers – but at the same time the Register set a powerful precedent. In the EU AI Act (European Parliament, 2024), registers for algorithms are offered as a central tool for addressing higher-risk algorithms. Was this in some way influenced by this high-profile example (which demonstrated both the technical possibilities and their pitfalls)? In 2023, as we were in the midst of writing this article, the number of projects quietly doubled, though still comprising relatively uncontroversial algorithms. So, perhaps the Register is not the end of a development process, but the beginning of something else.
We have suggested in this article that we must question some taken-for-granted assumptions about ‘putting principles into practice’. It is not, we argue, a linear process of translation but an empirical experiment, in which both values and actions are put to the test. Values, in John Dewey’s understanding, are not pure things in an abstract realm which is somehow higher than (and prior to) everyday life; they are hypotheses to be tested through experiments in the world, and these experiments ‘don’t just test hypotheses they create new objects and arrangements of objects and instruments’ (Dewey, 1929, p. 191). Thus, perhaps we should not judge these actions by reference to unchanging values but for what they might lead to in the future.
Acknowledgements
The authors would like to thank Minna Ruckenstein, members of the Datafied Life Collaboratory and C. F. Helgesson for feedback on earlier versions of this article. They would also like to thank the anonymous reviewers for their constructive feedback, and their informants, without whom this work would not be possible.
Funding
Sonja Trifuljesko’s ethnographic research was carried out as part of the Re-humanizing Automation Decision-Making project (Finnish Academy, grant 332993) and the Algorithmic Culture project (Kone Foundation). David Moats’s collaboration with Sonja took place as part of the project Reimagine ADM, supported by the CHANSE ERA-NET Co-fund programme, which has received funding from the European Union’s Horizon 2020 Research and Innovation Programme, under Grant Agreement no. 101004509.
