Abstract
This paper examines ambiguity within AI practice, arguing for an ethics of AI which stays with fundamental ambiguities and accounts for their complex socio-material entanglements. Common approaches to the responsible governance of AI, however, are often predicated upon notions of predictable pipelines and static outputs which are assumed to be easily describable and cleanly structured. Drawing upon empirical findings which challenge these notions, and conceptual tools from Simone de Beauvoir's Ethics of Ambiguity, I illustrate how AI [ethics] can be better understood as grounded in ambiguity and propose reframing ambiguity from a failure or risk to a core facet of the study and governance of AI. I report on interviews with 23 AI practitioners, combined with observations from an ethnography of an AI practitioner based in an industry AI lab, examining their motivations, aims and actions in developing and implementing AI models. Practitioners described the impact of local, epistemic and systemic constraints, employing heuristics, intuition and creative problem-solving to navigate the ambiguity and uncertainty embedded across their materials and practices. Building on these analyses, I propose that engaging with ambiguity in the study and ethics of AI can provide productive sites for ethical reflection and governance.
Introduction
Artificial Intelligence (AI) practice constitutes a form of meaning-making, producing artefacts that have wide-ranging implications for human decision-making and freedom to act. In this paper, I examine how ambiguity and uncertainty can arise within contemporary AI practice, where they are often veiled or treated by both proponents and critics as a problem to be fixed. Presenting findings from an empirical study of AI practitioners across industries and domains, placed in dialogue with literature from Science and Technology Studies and Philosophy of Science, I argue that ambiguity forms a neglected aspect of AI epistemology where meaning is negotiated in human-machine entanglements, and I propose that these entanglements serve as sites for productive engagement with the ethico-politics of AI. The implications of treating ambiguity as a problem to be solved are profound, foreclosing the ability to critically examine, challenge and work with AI by neglecting a crucial aspect of AI pipelines and paradigms of knowledge construction. Staying with ambiguity provides a starting point for studying and critiquing AI, helping to examine how epistemological practices of AI form sites of social ordering and power. Although ambiguity is often understood as presenting challenges for ethical AI practice, it equally represents opportunities for ethical reflection and forms an essential component of AI practice. Even the work of a single research team working on computer vision is part of a much broader constellation of various actors, processes, materials, and relationships, a complex and distributed ‘sociomaterial assemblage’ (Suchman, 2007: 268). In engaging with ‘ambiguity as cause and effect’ (Suchman, 2012: 49), ‘unpacking’ the assemblage from this vantage point, we can better understand the relationality of the different facets making up AI practice.
Sharing findings from an empirical interview study of 23 AI practitioners from across multiple domains and organizations, together with an ethnography of an AI practitioner based at an industry lab, this paper illustrates how the socio-material practices of AI practitioners involve navigating systemic factors such as resource constraints alongside the complexities and contingencies inherent to collapsing complex phenomena into AI models. I discuss how practitioners employ creativity, intuition and relational heuristics to navigate the ambiguity inherent to their work and inform their moral decision-making. I draw on these findings to highlight how ambiguity is central to AI ‘ethico-onto-epistemology’ (Barad, 2007: 90) ‒ its intertwined, dynamic ethical, ontological, and epistemological entanglement, where distinction originates in the chosen angle of analysis. Accordingly, although AI entanglements span supply chains, disciplines, geographies and histories, in this paper I focus on the vantage point or ‘agential cut’ (Barad, 2007: 4) of contemporary AI practitioners. Proceeding from this view of AI as a form of socio-materially shaped world-building, I employ conceptual tools from Simone de Beauvoir's Ethics of Ambiguity (2015) to inform my interpretation of empirical findings. In demonstrating how the Ethics of Ambiguity can help account for ambiguity in critical engagement with AI, this paper seeks to recontextualise the study and ethics of contemporary AI practice by providing insights into the embodied cultures and materialities described in the study and broader literature. Rather than being artificially solved or defined away, these ambiguities should be embraced as an opportunity for examining assumptions, identifying impacts, and introducing deliberative practices.
Ambiguity in AI practice and infrastructures
Despite claims to objectivity, scientific knowledge production is deeply shaped by its local contexts (Galison, 1987, 1995) in fundamentally co-constitutive socio-technical processes (Latour et al., 2013; Stengers, 2000). AI knowledge production is no exception, as illustrated by the multiple, contextually situated ways in which the field itself is understood (Monett and Lewis, 2018). This is seen in the treatment of parameters extracted from data points representing complex human and social systems, which are central in the process of AI design and development. Transformed into vectors, data forms the ‘moving substrate of Machine Learning’ (Mackenzie, 2017: 72). The vectorization of goals, as observed in studies of the life sciences, ‘modifies the object's visibility’ (Lynch, 1988: 229). Stark and Crawford (2019) noted a similar dynamic in the ‘defamiliarization’ that ‘data artists’ employ to convey the aesthetics of data-driven surveillance to their audiences, in this context employed as an intentional device to prompt reflection. This translation work from goal to output involves qualitative judgements about how to measure a phenomenon. Barocas and Selbst (2016), for example, illustrated the impacts of seemingly arbitrary decisions made in choosing which variables to include, which they argue could be done ‘in such a way that happens to systematically disadvantage protected classes’ (678). AI practice is underpinned by implicit assumptions, such as model agnosticism (the notion that AI models can be applied successfully independent of context; Molnar et al., 2020; Catanzariti, 2023), and by obscured accountability. These assumptions and practices are dynamic and entangled, a contextually intricate ‘socio-material assemblage’ (Suchman, 2012: 50) of various actors, processes, materials, and relationships. Examining these contexts helps unpick carefully manicured images of seamlessness and cross-context model generalizability, which tend to be translated directly into AI ethics work.
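To ground this translation work in concrete (and deliberately hypothetical) form, a minimal sketch of how a record about a person might be flattened into a vector: which columns are kept, and how they are encoded, are precisely the kinds of qualitative judgements Barocas and Selbst describe. The features and values here are invented for illustration, not drawn from the study.

```python
# Minimal sketch with hypothetical features; not any participant's pipeline.
# Flattening a person's record into a vector requires choosing which
# attributes count as 'features' and how to encode them.
import pandas as pd

records = pd.DataFrame([
    {"postcode": "NE1", "employment": "gig", "age": 34},
    {"postcode": "SW7", "employment": "salaried", "age": 52},
])

# One possible encoding: keep postcode and employment, one-hot encode both.
# Postcode is a frequent proxy for protected attributes, so this seemingly
# technical choice carries the potential for systematic disadvantage.
features = pd.get_dummies(records[["postcode", "employment"]])
print(features.to_numpy().astype(int))
```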
Ethnographic studies illustrate AI practice as an iterative, non-linear form of knowledge practice shaped by an array of infrastructures and layers of abstraction (Mackenzie, 2017), built upon epistemic legacies of positivism and empiricism which are further shaped by negotiations of expertise set within broader power dynamics (Moss, 2021). Crawford and Paglen (2021) examined the politics of ‘objectivity’ in computer vision systems, charting how this is negotiated across multiple layers that are often hidden from view. Previous work has also demonstrated how ambiguity and obfuscation surface in different ways in AI practice as artefacts of the socio-material contexts of its development (Chen and Zhang, 2023; Leonelli, 2016; Widder and Nafus, 2023). In an ethnography of two AI labs, Hoffman (2017) identified three sources of ambiguity pervading practice: ontological ambiguity, which requires translating a conceptual object into a practical reality; epistemological ambiguity, referring to inconsistency in evidencing knowledge claims; and application ambiguity, or uncertainty regarding the eventual value of a method (Hoffman, 2017: 713). Uncertainty in AI, meanwhile, typically refers to statistical measures, distinguishing aleatoric uncertainty, a measure of the variability of probable outcomes, from epistemic uncertainty, which arises from a lack of base knowledge, for example due to a lack of data (Hüllermeier and Waegeman, 2021). These treatments of ambiguity and uncertainty provide valuable insight into how these facets are conceptualized in discrete elements of AI practice. However, they focus on fairly narrow, specific steps within AI pipelines, neglecting the ways in which these steps draw together within systems shaped by power relationships.
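To make the statistical distinction concrete, a minimal sketch, assuming a simple ensemble of models (one common operationalisation in the literature Hüllermeier and Waegeman survey, with invented probabilities): total predictive entropy splits into an aleatoric part and an epistemic part.

```python
# Minimal sketch, assuming an ensemble of three models; illustrative only.
# Total predictive entropy decomposes into aleatoric uncertainty (the
# average entropy of each member's prediction) plus epistemic uncertainty
# (the disagreement between members, i.e. mutual information I(y; theta)).
import numpy as np

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

# Hypothetical class probabilities from three ensemble members for one input.
preds = np.array([[0.9, 0.1],
                  [0.6, 0.4],
                  [0.8, 0.2]])

total = entropy(preds.mean(axis=0))               # H[ E_theta p(y|x,theta) ]
aleatoric = np.mean([entropy(p) for p in preds])  # E_theta H[ p(y|x,theta) ]
epistemic = total - aleatoric                     # always >= 0 (Jensen)
print(total, aleatoric, epistemic)
```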
This tendency to segment and modularize AI work, whose origins are far more complex, extends to discussions of ethics in AI work (Widder and Nafus, 2023). Miceli et al. (2022) critiqued the restriction of ethical considerations such as bias to narrow definitions, arguing that by framing socio-ethical considerations as problems, we imply the existence of corresponding solutions whilst excluding consideration of the impacts of power structures and socio-historical conditions upon ethical concerns. Similarly, while ‘fairness’ has been used to address issues of disability in the design of AI, Bennett and Keyes (2020) considered how this ideal risked reinforcing existing power dynamics. As with technologies that have been biased against women, people of colour, and members of the LGBT community (Whittaker et al., 2019), the failure to consider structural injustices in design and the ethical framing of ‘fairness’ favoured processes suited to more readily identifiable (and otherwise-privileged) disabled people, while harming and furthering the marginalisation of those considered outliers. Without recognising relationality and the inequities within and between populations, these attempts to achieve ‘fairness’ risk becoming new modalities of oppression.
Scholars have also brought attention to the vast yet invisible workforce that underpins the field, most notably in the form of gig economy workers who label data or direct these interactions (Gray and Suri, 2019). This workforce can include (but is not limited to) domain experts providing input to project framing, data workers, data architects, software engineers, User Interaction designers and so on, often distributed over a global landscape characterised by material and socio-political inequities (Catanzariti et al., 2021; Sambasivan and Veeraraghavan, 2022). Recognition of these inequities primes considerations of how and for whom AI is developed, given that domain expertise is created, drawn on and redrawn by individuals of certain demographics, whose concerns may exclude those rendered invisible by the systems within which they operate. In effect, the AI sector in its current state builds on and exacerbates epistemic and ontic injustices – especially given its continuous proliferation – despite any framings of objectivity (often due to obscured technical and socio-political constraints) or intentions towards ‘social good’.
Unlike ‘objective’ approaches, which foreground values like ‘fairness’ or ‘social good’ as universal rights and obscure power dynamics, relational theories of ethics (such as care ethics) centre on the role of reciprocal relationships (whether individual to individual, individual to community, individual to machine and so on), conceptualising values as situated in these relations between agents rather than being centred on one specific agent (Van Wynsberghe, 2013). Drawing from these relational theories, Birhane (2021) suggested moving towards an ethics of AI centred on human relations, on the communities who are disproportionately impacted, and acknowledging the structural inequalities and hidden labour inherent in how technologies are developed. Framing concerns like inequity as narrow, discrete problems, abstracted from the power dynamics that create them in the first place, risks furthering the illusions of a quick fix (Hampton, 2021) which were introduced above. Engaging with this relational turn, I consider how conceptual tools from the Ethics of Ambiguity can help us ‘disentangle’ elements co-constituting AI practice, by unpicking and describing the constituents of the socio-material entanglements of interest (Bratteteig and Verne, 2012).
The ethics of ambiguity
Throughout this paper, I draw parallels between de Beauvoir's characterization of human existentialism, which argues that humans create meaning via their actions and that ‘existence precedes essence’, and the ambiguous, uncertain and ‘agile’ ecosystems and processes of AI design, development and deployment. Building on a longer philosophical tradition of phenomenology, from Heidegger to Merleau-Ponty, the Ethics of Ambiguity reflects on the nature and substance of ethical decision-making in an uncertain, ambiguous world. De Beauvoir frames ambiguity as resulting from the practice of meaning-making, which surfaces tensions between human aims (or transcendence) and ‘facticities’ (material constraints/facts). As my findings in the following section illustrate, much meaning-making in AI practice happens in the space between known facticity (or preset material things) and the creation and assignation of meaning which happens in data work, modelling and model evaluation. De Beauvoir argues that ambiguity in ethical decision-making arises because of tensions between the material constraints of facticity and the aim to transcend this facticity and create alternate future possibilities.
Furthermore, this ambiguity is negotiated relationally, inherent to the process of continually co-constructing meaning in a world where none inherently exists, finding a connection to the world through the ambiguous relationship between the self and the other (or the ‘intra-action’). The qualities of vitality, sensitivity and intelligence are a result of engagement with the world. These qualities are tempered by the nature of our embodied existence; however, materiality does not define these qualities but rather is our source of relationship with the outside world. More important is how we respond to the world given our capabilities. De Beauvoir lays out the ambiguities of ethical decision-making as resulting from the multiplicity of interacting facticities and values that each person necessarily experiences differently, yet relationally. In this view, ‘morality resides in the painfulness of an indefinite questioning’ (de Beauvoir, 2015: 133), requiring constantly working towards the joint freedom of the self and others. AI can create distance between the self and the other, reducing reflexivity and empathy. People/systems given power become ‘as a transcendence’ or treated as having superior knowledge/capabilities, thus considering ‘others as pure immanences’, and in doing so assume ‘the right to treat them like cattle’ or decide what the correct action is without input from the affected group (de Beauvoir, 2015: 110). By converting people into objective measures, we reduce or discard their worth and we make it far easier to disregard details that might be important.
In the sections that follow, I examine the facticities which form the limits of practitioner agency, those ‘clusters of constraints’ that shape practice (Galison, 1995: 15). This vision of ethics is framed in terms of its immediate intra-actions and historical contingencies rather than abstracted concepts or anticipated outputs as seen in rule- or consequence-based ethics. That is, as with AI practice, ethics is a practice of meaning-making in response to contingencies and acting in the face of uncertainty and ambiguity. From this perspective, values and practices are intricately interwoven and although practitioners have aspects of ‘…existence that are situational and factual’ (Riggs, 2019: 6), their actions also surface a desire to overcome these constraints to pursue their motivations and realise their values. This observation resonates with those offered by empirical studies of ‘ethics on the ground’, which have illustrated the divergence and tension between official frameworks of ethics and personal engagement with ethics in the field, with external frameworks having a negative impact if personal practices are not accounted for (Heimer, 2013: 377).
In focusing on the tension between freedom and facticities, de Beauvoir highlights how a choice in one direction might result in harm in another, but rather than viewing the uncertainty of outcome as a failure of ethics, to de Beauvoir this is the core of ethics. As opposed to asserting universal values as an end-product, with values ‘temporarily and precariously grounded in the particular needs and projects of each particular human community’ (Oganowski, 2013: 6), the goal is to enable and protect freedom of the self and others, especially given that freedom is never static and thus all bear some responsibility in achieving it. De Beauvoir refers to two kinds of freedom: the existentialist kind, which people have by virtue of existing, to make their own decisions (subject to contextual pressures), and moral freedom, the ability to engage in ethical decision-making, which has a corresponding accountability attached: does someone engage with ethical decision-making, or do they try to evade their responsibility? The Ethics of Ambiguity primarily refers to the moral kind of freedom, recognising that in moral decision-making, every decision has the potential to fundamentally impact other people. In this paper, using findings from a study of AI practitioners, I consider how the Ethics of Ambiguity can provide a useful conceptual tool for thinking with ambiguity in AI practice and ethics, and navigating the complexities of engineering epistemic tools which shape others' freedom. I highlight the entanglements of practice, power structures and impacts, and the parallels between the practices of constructing AI models and the philosophy of the Ethics of Ambiguity. I also examine the role of intuition and heuristics in navigating AI ambiguity and uncertainty, and foreground de Beauvoir's account of navigating decision-making as an alternate way of understanding AI practice and ethics.
Examining ethics and ambiguity in AI practice
This section illustrates and examines AI practice at a local level, employing concepts from the Ethics of Ambiguity to inform the exploration and interpretation of findings. I discuss semi-structured interviews with 23 AI practitioners recruited from across several institutions. I directly contacted the initial 12 participants due to their expertise in AI (key knowledgeable sampling; Patton, 2014), and then proceeded to interview a further 11 referrals from initial participants. All had doctoral-level training or equivalent experience in AI. I spoke with participants working in academia, Big Tech (corporate and research), Small-Medium Enterprises and start-ups. Participants held heterogeneous roles including Chief Technology Officer, department head, postdoctoral researcher, and developer, and all worked directly with AI models as part of their role. Their demographics reflected those of the AI domain more broadly: 80% male, majority white and from global minority countries, which forms a limitation of this study. Interviews were conducted online or in person and lasted between 47 and 118 minutes. This was complemented with a one-week focused ethnography of an AI practitioner working on a computer vision project in an industry AI lab, and auto-ethnographic reflections drawn from two years within the same lab. This research was approved by the ethics board of the author's department. It was unfortunately constrained by the COVID-19 pandemic, which particularly impacted the scope of the in-person ethnographic work. To protect participant privacy, I have assigned pseudonyms and redacted sensitive details. I used Reflexive Thematic Analysis to structure my analysis of the interviews and (auto) ethnographic data, a variant of thematic analysis that facilitates examination of patterns in the data but is flexible enough to enable engagement with emergent themes and relate findings to ‘wider socio-cultural contexts’ (Braun and Clarke, 2012). This involved regular meetings with members of my project team to discuss themes.
Navigating the socio-material facets shaping practice
AI practice is fundamentally shaped by structural constraints, including access to resources, which require creativity, intuition and experimentation to overcome, even for Big Tech projects. Practitioners described how such constraints reflected the previously discussed inequalities in the field, posing a major problem for researchers working at academic labs or industry labs that were not part of one of the dominant corporations. Jason, an AI researcher whose research focused on computer vision using Deep Learning and Reinforcement Learning, reflected on how the limitations of hardware had a foundational impact on his research direction, shifting his focus towards theoretical work to minimise the impact of lacking access to necessary computational resources. He told me how he preferred taking a ‘more theoretical’ approach, feeling that this allowed researchers to break new ground in new areas ‘without needing all this computational power, who has access to that power?’
Material constraints shape technological affordances and practitioner decision-making, forming features that permit or encourage certain types of interaction with a system at the expense of others. Material resources are potentially co-constitutive with the deliberation and decision-making around AI practice. Perhaps this recognition of the ‘brick and mortar’ of AI itself acts as a sort of ethical affordance, directing deliberation via the limitations it places on practitioners’ agency, creativity, and reflexivity (Schoenherr, 2022). Data work is a ‘kind of art’ (Muller et al., 2019: 8) that utilises a combination of the practitioner's intuition, domain knowledge, and a trial-and-error process of designing predictive features, to manage the uncertainty inherent to the work (Kwon et al., 2019; Chen and Zhang, 2023). Julie (AI practitioner working on a multinational AI platform) reflected on how data collection for projects required a conscious engagement with the intentions of use for the data, anticipating issues that might arise and need to be ‘mitigated’:

…there is also the data-collection side, collecting the data in such a way as you have imperfect data collection too…you have to make these decisions consciously of what kind of algorithms you’re going to use and what kind of problems you can already mitigate during data collection.
This approach to the craft of data work has fundamental epistemic implications (Thomer et al., 2022). Missing data is a problem in constructing ground-truths, and practitioners might have to impute data themselves in order to run a model, whether using other models or their own intuition, resulting in greater complexities of data curation. For example, expert-labelled ground-truths can be difficult to obtain due to the time costs involved, with knock-on effects (Jaton, 2021; Kang, 2023). Julie reflected on how the potential for data to be utilised using AI to generate predictions also carried risks from a lack of insight into these large datasets, the biases they encode, and what modelling them might entail. She was particularly concerned that by employing other AI to model their data, data owners would believe that this was equivalent to understanding it, even without access to any ground truths:

People have massive amounts of data, which they don’t really understand, can now use it with deep learning. But on the other hand, you still don’t understand your data, then you have this black box, which is telling you something about this data, but you still don’t understand what's going on.
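Both concerns trace back to the craft of data curation. As a minimal sketch of what imputation alone can involve (hypothetical data, illustrative rather than any participant's actual workflow), a gap can be filled by a simple heuristic or by another model, in either case layering assumptions beneath the eventual ‘ground truth’:

```python
# Minimal sketch with hypothetical data; illustrative only.
# Two common ways of filling gaps so that a model will run at all.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, SimpleImputer

X = np.array([[7.0, 2.0],
              [4.0, np.nan],
              [np.nan, 6.0],
              [8.0, 8.0]])

# Heuristic imputation: replace gaps with the column median, a judgement
# call that silently reshapes what later counts as 'ground truth'.
median_filled = SimpleImputer(strategy="median").fit_transform(X)

# Model-based imputation: another model predicts the missing values,
# stacking one model's assumptions beneath the next.
model_filled = IterativeImputer(random_state=0).fit_transform(X)
```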
Furthermore, practitioners often must be creative in identifying alternate sources of information, for example, CCTV coverage of the behaviours in which they were interested (Muller et al., 2019). This approach has proven risky, one example being when the IBM Watson team tried to develop a model to predict personalised cancer treatments based on the analysis of the oncology literature (Ross and Swetlitz, 2018). Here we see the tangible impacts of the collapse of human complexity into subsets of data points. The IBM Watson team discovered that when reading articles, physicians often utilise information that is not the primary point of the study, adapting their care in a way which is qualitatively obvious, but obscured when considered on the basis of data alone. For instance, when the FDA approved a drug personalised to specific genes, only 4 of the 55 participants in the cited study had lung cancer. Physicians knew to routinely screen lung cancer patients for this gene, but such a small percentage would likely not be picked up by the AI model. To address this issue, the team created fictional profiles to train the algorithm (Strickland, 2019), which proved untenable in such a high-profile, high-stakes domain. However, this employment of fiction to train a data model showcased how experimental processes were applied to demonstrate the utility of AI, complicating purportedly clear ideals of workflows.
In addition to data poverty of the qualitative and quantitative kinds, inequalities in access to resources impact what sorts of projects can be developed, and therefore how much input practitioners have in shaping outputs. Lukas (academic researcher) described how the pressure placed upon him by hardware constraints was one of the primary concerns in developing and testing his models:

…you believe it's useful because you’ve seen it succeed on the dataset but you kind of neglect that you tried it out on five others, and it didn’t work. Which is fine because not every algorithm has to work perfectly on all of them, but given all the constraints imposed on you, time, computational resources, you often times you don’t have the abilities to test it out as much as you would like to, so you have this internal mental challenge of trying to test your ideas, but trying to be scientifically as rigorous as possible.
Furthermore, in contrast with traditional software development workflows (although even these are subject to change), AI workflows do not follow linear processes aimed at a very structured end goal. Instead, stated end goals were described as vague, and periodically updated based on the results of various iterations of work. Joshua (co-founder of a successful computer vision start-up) described the process as ‘…fairly free-form and experimental’ and difficult to plan for, because ‘as with all machine learning models it's very hard to predict whether or not a change will be positive or not’. He summarised this way of working as ‘you basically try stuff, throw stuff at the wall and see what sticks’. Alec (head of data science at a multinational company creating ‘out of the box’ AI solutions) went further, suggesting that the entire process is unclear and unpredictable:

Often at the start of a project you don’t know where you’ll get led by the data, and you look into what's possible rather than, if you’re building a product you might start with a list of features which you steadily add on.
Even in multinational technology corporations, and in the product divisions of these, the vagueness of AI techniques and the difficulty of predicting their outcomes impacted the way in which projects were approached and planned. George worked as the head of a Big Tech data science department (in the corporate branch of the organisation), which operated according to a ‘matrix-driven’ ethos, where practitioners were given goals over a time period (such as a year) and simply told, ‘you must increase x by two points this year, something like that’. This statement alluded to demands to achieve concrete but ambiguous (and often perhaps arbitrary) end goals, priming practitioners to demonstrate some sort of progress with processes that were difficult to predict and subject to constraints as described earlier. Similarly, in academia, Lukas described how his research on AI theory inherently involved uncertainty, especially given the black-box nature of this approach:

Because I know that it's kind of fiddly, and unreliable, and black box to some extent, I design my workflow around that that obviously means trying my ideas as early as possible and ideally as rigorously as possibly, especially on toy domains where you can control all the influencing factors… could get a lot better at that but that's the idea at least.
Reflecting on this inherent unpredictability, Lukas spoke about how this impacted his work practices and the algorithms his team worked on, saying that ‘we don’t know what will work what won’t…sometimes we just try out things until it works basically’, describing how this meant that a lot of his work was based on building intuition over time, necessitating that practitioners gain ‘a little bit of understanding and intuition of the things which might work and what things might not work’. Pulled together with the uncertainty of the process, even experts in the field would then struggle to thoroughly document and explain their work:

We have a better understanding but it's far from perfect, and so it's impossible to predict and it would be really difficult to write a guideline on how to use an algorithm. It's hard even to come up with all the cases of when it might work or when it might not work, it's a very difficult problem I think.
Even the more iterative methods and tools used for software development seemed too structured for AI work. Alec described how he had lost patience with the well-known project management tool his teams had been using and had even recently led a boycott of it, as it was ‘good for making products but not very good for doing science or doing data science, when you don’t know where you’re gonna go’. By acknowledging the inherent uncertainties and ambiguities in the work, it became clear how object-focused understandings of AI embedded within the design and deployment of productivity-enhancement tools constrained capacities to meaningfully engage with actual practices. In the end, his team moved to using a time management and tracking app ‘for monitoring how many hours we spend on things’. This experience extends to other aspects of AI practice, with data curators needing to engage in ‘craftwork’ to get the project management tool Jira to work for their processes (Thomer et al., 2022). Indeed, George remarked how his industry team's data science work differed from standard software engineering:

There's much less of this sort of day-to-day coordination like you would with a traditional software development project where everybody is working on the same thing, and you have a design that you are implementing in pieces.
These reflections echo observations from across the history of the field, characterising AI practice as experimental and hard to predict (McDermott, 1976), as ‘craftwork’ (Suchman and Trigg, 1993), and as ‘highly iterative and exploratory’ (Patel et al., 2008: 669). They also surface a tension in contemporary AI, particularly in applied contexts, representing its uneasy place between science and engineering, between studying a phenomenon and building an output (Parnas, 1999). This experience of compounded materialities was navigated with the best of intentions but risked unintentionally assigning outputs the status of ground truth, in seeking a model that ‘works’ given the computational resources. Such occurrences are obscured when the datasets created through these processes become the basis for additional modelling or foundational to developing real-world applications. In other words, a context shift results in ethical debt, since these ‘working models’ are used ‘without proactively identifying potential ethical concerns’ (Petrozzino, 2021: 205). Stengers (2000) examined the efforts of scientific communities to preserve autonomy and demarcate boundaries to abstract their work from political and social concerns, preserving the aesthetic of objectivity. Stengers noted that this results in inherent tensions, rendering the scientist a ‘vector of creativity’ in a way that is antithetical to a more critical stance (Stengers, 2000: 5). I delve deeper into this relationship between compound materialities, resource constraints and ethical implications in the following subsection, reflecting on how dialogue around ‘AI craftwork’ can serve either to open up important facets of complexity in AI practice and onto-epistemology or to further obfuscate them.
Mediating AI entanglements via values and heuristics
The findings discussed so far illustrate how unpredictability and uncertainty pervade the development of AI models, with AI practice described as largely unstructured and iterative even within large, corporate institutions. Compounded by resource-constrained AI materiality, and qualitatively distinct from sister domains such as software engineering, these characteristics are reflective of contingencies inherent to the ‘challenges of social complexity or the unpredictability of the future’ (Best, 2012: 88). Despite the tensions, practitioners enjoyed the challenge posed by constraints, finding reward in the creativity required to navigate them. Alec described the main thrust of his practice as being to ‘put pieces together, different data, to try and build a picture’, using the language of craft and art to explain the complexities and nuances of his work. The values motivating practitioners form small pieces of a messy, uncertain puzzle, used to guide ethical heuristics and navigate the distributed nature of the responsibility webs they are embedded within. The constraints and contexts of AI, where the values of the scientist meet the implications of the engineer, create a messy, complex set of socio-technical entanglements and ambiguous notions of responsibility. In this section, I examine some of the ways in which practitioners attempt to navigate these entanglements in practice, drawing upon heuristics and guiding narratives to overcome the gaps introduced by the complexities described above. The fundamental complexities and ambiguities characterising responsibility in AI practice can serve to reduce concepts of agency when viewed at an abstract, absolute level. This can occur either intentionally or unintentionally. For example, the ambiguity between abstract research and implementation in the real world can potentially obscure the potential impact of practitioners’ work, or be easily framed as obscuring it, and thus affect their sense of moral responsibility, chiming with the observation of Louis (senior researcher in an industry AI lab) that ‘a significant portion of researchers…if they don’t work on real world data think, well it doesn’t really apply to me’. To apply de Beauvoir's lens, the fundamental ambiguity posed by AI lies in the confluence of the facticities (or material constraints) and values (or aims) of the teams and broader socio-material and geographic environments in which it is designed, developed and implemented. These contingent outcomes are both unclear and unpredictable; however, rather than a problem to be solved, this is a feature to be in dialogue with and a site of ethical responsibility.
AI methods can differ in how they prompt consideration of implications for the agency of impacted groups. Alec reflected on his discomfort with a personalisation project he worked on, combining large datasets scraped from the web with other sources shared by companies, to build models which segmented groups based on shared characteristics. He told me ‘… I don’t know how I feel about some of it… it's sort of systematic, on a big scale that you’re doing stuff about people, if you did it in real life it might seem a bit creepy’. Phan and Wark have described this as possessing a ‘creepy factor’ (Phan and Wark, 2021: 4) which gives the perception of personalisation a ‘dull sheen’ (Phan and Wark, 2021: 5). They contrast this dull, perhaps neutral, view of personalisation at the individual level with its ‘profound shaping effects on our societies’ (Phan and Wark, 2021: 5). This awareness of creepiness can serve to desensitise the perceiver to the more disturbing implications of such emergent socio-technical assemblages. This applies to the perspective of the practitioner; the systematic scale of the model is concerning, but this concern is examined at the individual level, defanging it of ethical impetus. The diluting effect of framing ethics at the individual level also illustrates a potential flaw of the subjectively-based ethical heuristics explored further below, in diffusing some of the more concerning considerations that might be posed in a higher-level analysis. Similarly, certain AI approaches can feel more amenable to employment as mechanisms for distancing architects from the negative outcomes of their models, underpinned by claims of objectivity and distance. Jason expressed concern about how using AI approaches such as ‘Reinforcement Learning’ introduces limited accountability:

I think more often than not with that, because we’re using black-box methods and there's very little accountability in there, I think they also have the power to be incredibly dangerous. You’re dependent on something you don’t have any control over or know how it works, how you understand it, how it can be used if the people who developed the algorithms who essentially have that power and they’re relying on it.
This socio-technical constellation included humans at all stages of the process, beyond users or affected groups to organisations who might potentially misuse the model once already created. Thomas summarised the socio-political implications that would potentially result from this sort of data coil:

Essentially, more and more people who do have access to the information have more and more power, and then value is then entered into this feedback loop of how people use them, and how people come to rely upon them.
In their intuitive approach to ethics, practitioners crafted imaginaries, cultivating their processes of ethical deliberation and understanding, reminiscent of the role that art can play in broadening moral imagination (Kieran, 1996). These heuristics enabled practitioners to transcend the limitations imposed by the autonomous character of the field, to approximate the situational contexts deemed necessary for ethical deliberation. Alec's expression of the roles of curiosity, self-direction and ambiguity in his work led to a discussion of how values might be made easier to reflect on in practice, and thus incorporate into model design and development. He emphasised that despite the complexities posed by ambiguity, the answer was not to add more software: ‘I’m not sure I’m always a fan of having software, “a tool to solve everything”, whether it's more like, you establish values by doing stuff, by acting in a certain way and not being a dick, you know’. Practitioners grounded this kind of reflection in imagined or personal relations to those affected:

You need to have workarounds, and fall-backs and heuristics which always work or whatever. There's a lot of considerations that as I’m building things…always in the back of my mind always is if I was the user of this, would I like the way that it's being built, and like would it benefit me, would it disadvantage me, um…so I think that's yeah, always playing on the back of my mind.

…my parents are patients, my friends are patients. So, it's constantly thinking about what is the ultimate goal of what we are doing. Considering all of these things, it's a second nature, it's kind of natural that, you know, ethics to me is…is something obvious I guess.
In constructing an imagined scenario, usually based on the experiences of the practitioner, practitioners created an opportunity for anticipation and a point of critical reflection on ethical practices in AI system development. This method of empathizing with an unseen third party has been linked in previous research to ethical decision-making (Hoffman, 2001). Mencl and May (2009) demonstrated that ethical decision-making was influenced by psychological proximity, and its impacts on empathy, as well as physical proximity. These approaches demonstrated an eagerness to engage with ethical reflection and other ongoing processes of decision-making in a way that explicitly considered impacts. Moreover, these were embodied approaches to deliberation, which can confer certain benefits. Impersonal choices are argued to activate utilitarian responses, whereas empathy can short-circuit this and elicit more situated decision-making, with empathic concern perhaps even being crucial to moral decision-making, as utilitarian tendencies may be indicative of diminished empathic concern (Gleichgerrcht and Young, 2013). This impact of distancing on ethical decision-making is of particular importance when we consider the constraints already imposed by distributed modes of responsibility. However, empathic anticipation can be imbued with the biases of the empathizer; Kristoff reflected on how the outcomes of the algorithms he built were a direct representation of his worldview and moral character: ‘a lot of it is from my own personal experience which comes back to like when I’m building a model, essentially what model the algorithm outcomes will be’. These heuristics demonstrate methods of attempting to tackle ethics from the vantage point of someone engaged actively with the ambiguity of ongoing practice.
In making choices that potentially aid one person's freedom but curtail another's, ‘one finds himself in the presence of a paradox that no action can be generated for man without its being immediately generated against men’ (de Beauvoir, 2015: 107). Moral freedom requires that the decision-maker recognise both their ‘individuality and role in the collective human community’ (Oganowski, 2013: 6), requiring recognition of practitioners’ embodiment and relationality. This account of ethics serves multiple purposes: to critique the simplistic categorisations that can happen in AI, and to provide a tool for viewing the practices of AI which does not reduce them to arbitrary categories for ease of framework design. De Beauvoir sees ambiguity in the nature of humans as concurrently ‘bodies’, or observable by the Other, and ‘lived’, or experiencing ourselves moment-by-moment; this ambiguity is framed not as a problem or duality to be dissected, but as a fundamental characteristic of being. ‘The notion of ambiguity must not be confused with that of absurdity. To declare that existence is absurd is to deny that it can ever be given a meaning; to say that it is ambiguous is to assert that its meaning is never fixed, that it must be constantly won’ (de Beauvoir, 2015: 160). Our existences are disclosed, or made tangible, by virtue of our relationality, with ambiguity a natural result of the multiplicity of human experiences shaped by diverse facticities and values.
These findings highlight how ethics in AI contexts involves navigating material constraints, interwoven with the impacts of the political economy of the sector, whilst following one's own motivations and engaging with the external values imposed in numerous ways. There are different tools for doing this, from ethical heuristics and intuition to training and compliance mechanisms. However, attempting to parcel off aspects of these into tractable problems risks flattening, solutionism and diffusion of ethical impetus, all while legitimising the outcomes of any set processes. Whilst indicative of the relationality of practice, the diffusive effect of heuristics such as empathetic and sympathetic deliberation illustrates a need to move beyond limited (and individualistic) methods of ethical reflection and anticipation. Thinking with de Beauvoir's concepts can help identify, even centre, the fundamental ambiguities of AI practice, without framing these as a problem in need of a solution.
Navigating the ambiguity of AIs
As we saw in the previous section, practitioners use the language of craft and art in descriptions of their practice. Alec was ‘constructing pictures’ from the data he worked with, whilst Lukas sheepishly told me that his AI work was driven in large part by ‘intuition’. The complexities and ambiguities of technologies have long prompted comparisons with art; after all, ‘discovery requires aesthetically-motivated curiosity, not logic’ (Smith, 1977: 144). This aesthetic refers to the experience of the practitioner, the [post]phenomenology of technologically-mediated exploration. Indeed, these motivations of intellectual curiosity, exploration and challenge characterise the discussions I had with practitioners. In the same vein as art, AI ‘contributes to knowledge production by exemplifying aspects of the world that would otherwise go overlooked…inviting novel juxtapositions’ (Gorichanaz, 2020: 2). However, art goes beyond this to explicitly engage with ‘exposing and even challenging societal assumptions’ (Gorichanaz, 2020: 2), whilst AI practice largely conceals this, often in service of a legitimising narrative of objectivity. This implicit fusion of moral and epistemic values, often in the form of innovation, creation, and knowledge production, formed a core motivation to pursue the roles and types of work that practitioners undertook. While there have been historic attempts to separate ‘rational’ from ‘moral’ values, relational epistemologies do not create this boundary, instead considering that our epistemologies are fundamentally situated within our relative individual, contextual standpoints. In this way, epistemic values are difficult to truly separate from moral values when one accounts for the nature of epistemic responsibility, and this view informs my analysis here.
These tendencies towards intuitive processes seem paired with a desire for self-direction (freedom of thought and/or action): Dewi followed up a description of his surprise at the prominent role of the gut feeling in AI practice by telling me he was motivated by ‘coming up with innovative methodologies’. Similarly, Julie described how she enjoyed ‘coming up with new ideas of how to do things’ in ways that creatively responded to immediate problems, evocative of craft work. Comparisons of AI practice (including data work) to craftwork span several decades (Suchman and Trigg, 1993; Thomer et al., 2022). There was perhaps a tension between enjoyment of this craft of AI, of sculpting data and models, and perceptions of data work as a distinct task, perhaps impacted by ‘residual conventions and perceptions in AI/ML drawn from worlds of “big data” … and of viewing data as grunt work in ML workflows’ (Sambasivan et al., 2021: 2). In essence, in the chimaera created by a combined understanding of AI as both art and science, there is a tendency to see the products of technological systems through the lens of human transcendence while obscuring the processes behind them. That is, AI systems are often described in terms of linear, carefully demarcated pipelines, outputting models which have specific end goals. This clean, clear construction of a single output or process might be unintentional, arising through the use of ambiguous language which builds on unidentified underlying assumptions, or through the [mis]use of descriptors intended to invoke ideas beyond the actual capacities of such systems (Phan and Wark, 2021). These framings of AI as a singular output can serve to obfuscate and diffuse accountability in a process better characterised as creative combination and crafting in a process of exploration. De Beauvoir highlights that the intended ‘object’ of these values (take, for example, an AI for Good model or transparent AI framework) can become valued above the ethico-political freedom of those who shape and/or are affected by it (e.g., people contributing data to/being governed by resultant algorithms). In building AI we are already deciding what sort of knowledge individuals are privy to in their freedom/decision-making, modelling facticity to decide what information is relevant to share with people engaging with AI models. This risks similarity to an ‘attitude of distance’, which views no one solution as better than another, where present occurrences have the same status as past events, as ‘impartially contingent facts’ (de Beauvoir, 2015: 81), with choice, then, being an illusion, algorithmically enacting the Other (Aradau and Blanke, 2022). De Beauvoir calls this the ‘aesthetic’ attitude, an attitude of withdrawal and discouragement rather than a truly moral view. Instead, people should be offered choices, given that making no decision is imbued with moral implications to the same degree as making an active decision. De Beauvoir instead focuses on protecting the freedom of others to act according to their values; to her, this is the core of ethics, and the freedom of the Self and the Other are intertwined, whoever these may represent within pipelines and contexts of AI. That is, ‘To be free is not to have the power to do anything you like; it is to be able to surpass the given toward an open future; the existence of others as a freedom defines my situation and is even the condition of my own freedom’ (de Beauvoir, 2015: 97).
Challenging assumptions and identifying shortcomings invites awareness of the tensions formed in the space between lived experience and the perception/actions required in simplifying lived experience to utilise information about it. However, these tensions also risk casting those conducting AI simplification processes as a homogeneous group, erasing their own complex lived experience. Taking a perspective that acknowledges AI as informed by intuition, akin to art practice both in its exploratory nature and in evolving in response to contingency, allows us to better account for the complexity of the role of values in navigating the different ambiguities of AI practice. This intuition involves a multiplicity of values according to differences in group and context; ‘these kinds of problems are dynamic and changing’, in the words of Stefan. De Beauvoir centres this multiplicity, warning against assuming we know best how to handle a situation in which we do not have lived experience by dictating values: ‘There is nothing more arbitrary than intervening as a stranger in a destiny which is not ours’ (de Beauvoir, 2015: 92).
De Beauvoir brings attention to singularity, or the inherent multiplicity of existence as ‘resistance to conceptuality and categorization’ (Parker, 2015: 2), in tension with relationality, recognising that an individual's capacity to make independent decisions is conditioned on their material situation in addition to a will. Making decisions that have material, for example epistemic, effects involves actively shaping the material situation of others and therefore their moral freedom. Quite often this problem in AI is framed as one of missing data, or a need for greater representation, for example via synthetic datasets. However, perhaps it should instead give pause for reflexivity, rather than representing a chasm to be breached. This cuts through to the fundamental tension in asserting ambiguity as a value, a tension which is key to the nature of responsibility itself: ‘what to do in each moment is and ought to be affirmed as a matter of indefinite questioning; my next step is not inevitable and as a lived agency I live this noninevitability’ (Parker, 2015: 11). In doing so, we recognise the fundamental inability of ethics to predict the future, and indeed the nature of all decisions as impacting some groups positively and some negatively. Thus, centring ambiguity provides a useful conceptual tool for escaping linear, overly future-focused, solutionist thinking around ethics, which can be unsuited to the context of AI.
Concluding remarks
Understanding ambiguity in the practice and study of AI requires recognition of its fundamentally iterative, exploratory epistemic underpinnings, in addition to the numerous socio-political factors and situated knowledge shaping practice. Bearing parallels with existentialist understandings of being, AI practice is constantly in motion, engendering difficulty in making a representative record of lived experience, further compounded by limitations introduced by other facets of practice such as data capture and access to computational resources. AI projects are fundamentally situated within this wider network of dependencies, shaped not only by their iterative, exploratory nature but also by their dependence upon distributed supply chains, illustrating de Beauvoir's point that ‘no project can be defined except by its interferences with other projects’ (de Beauvoir, 2015: 76). De Beauvoir asserts that recognising the tensions resulting from ongoing processes of meaning-making, although uncomfortable, allows for a richness of experience. Given a fundamental state of ambiguity, de Beauvoir presents an ethics that grapples with the possibility of failure rather than focusing only on achieving whatever measure may indicate success. This necessarily requires recognition of the intertwined ambiguity and relationality of practice, where ‘genuine recognition – or moral freedom – is marked by uncertainty, by the possibility of failure, and by the relinquishing of individual control or mastery’ (O'Flynn, 2009: 78).
In engaging with ambiguity, we are dealing with an ‘inescapability of interpretation’ (Best, 2012: 88), requiring consideration of a plurality of perspectives to truly facilitate responsibility and accountability (Renn et al., 2011). Although I have focused upon AI practice, and AI practitioners, in this paper, this is to draw out the essence of the ambiguity at play in AI work, not to single out this group as solely accountable for the responsible development of AI. My aim in highlighting this is to demonstrate a potential gap in critical engagement with AI, policy development and data ethics, in the representation of AI practice and models. AI outputs have immediate impacts but, crucially, also shape the possibility of future projects, and are themselves shaped by the materiality of computational tools and infrastructures that are often characterised by ambiguity. Thus, we need ethics frameworks informed by real-world practices, which can go beyond immediate materialities to engage with future possibilities. The exploratory process, and its fundamental ambiguity and uncertainty, form an essential feature of AI which is oft-neglected in linear conceptions of AI [ethics]. That is, ethical sensibilities need to keep pace with the nature of open-ended practice, including the ways in which practitioners enact norms and intuitive judgements. Taking an overly linear AI ethics approach risks misunderstanding the emergent, experimental, iterative, and ambiguous nature of AI practice. Rather, we should centre ambiguity as an unavoidable facet of AI, and a productive site for ethical reflection and governance.
Acknowledgements
Many thanks to the practitioners who kindly contributed their time and insights to this study. I am very grateful to Louise Amoore, Imo Emah, Aditi Surana, Alex Taylor, and Emily Postan for their feedback and help in bringing this paper to life. I also wish to offer my thanks to the editor and reviewers who gave such helpful and generous feedback, massively improving this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by Durham University via the Advanced Investigator Grant ERC-2019-ADG-883107-ALGOSOC Algorithmic Societies: Ethical Life in the Machine Learning Age. The PhD research which this article draws upon was funded by Microsoft Research.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
