Abstract
Applications of artificial intelligence/machine learning (AI/ML) in health care are dynamic and rapidly growing. One strategy for anticipating and addressing ethical challenges related to AI/ML for health care is patient and public involvement in the design of those technologies – often referred to as ‘co-design’. Co-design has a diverse intellectual and practical history, however, and has been conceptualized in many different ways. Moreover, AI/ML introduces challenges to co-design that are often underappreciated. Informed by perspectives from critical data studies and critical digital health studies, we review the research literature on involvement in health care and involvement in design, and examine the extent to which co-design as commonly conceptualized is capable of addressing the range of normative issues raised by AI/ML for health care. We suggest that AI/ML technologies have amplified and modified existing challenges related to patient and public involvement, and created entirely new challenges. We outline three pitfalls associated with co-design for ethical AI/ML for health care and conclude with suggestions for addressing these practical and conceptual challenges.
Introduction
The contemporary field of artificial intelligence/machine learning (AI/ML) is dynamic and rapidly growing, characterized as central to a ‘4th industrial revolution’ that commentators suggest will impact virtually all aspects of our lives (Couldry and Mejias, 2019; Schwab, 2017; Zuboff, 2019). Although AI/ML technologies are multi-purpose, they are particularly consequential in health care, where concerns range from the changing nature of the patient–provider relationship (Goldhahn et al., 2018; Topol, 2019), to the ways in which AI/ML technologies exacerbate existing societal inequities (Benjamin, 2019; D’Ignazio and Klein, 2020; Eubanks, 2018; Noble, 2018). As a result, there has been an increased acknowledgement by corporate, government, and academic actors alike that AI needs ‘ethics’. How these ‘ethics’ are meant to be established and applied, however, has led to significant debate (we discuss our own notion of ethics in this paper in ‘Theoretical approach’).
One strategy for anticipating and addressing the potential benefits and harms of AI/ML for health is patient and public involvement (PPI) in the design of those technologies, often referred to as co-design. As a category of approaches to technology development that aim to involve end-users as meaningful participants in the design process, co-design is often mobilized as a strategy to improve fairness, accountability, and transparency of algorithmic systems (Aizenberg and van den Hoven, 2020; Malizia and Carta, 2020; Whitman et al., 2018). Co-design is also closely allied to other trends in health and health care, including patient engagement, PPI and patient and family-centred care (PFCC). Co-design and its variants have a diverse intellectual and practical history, however, and have been conceptualized in many different ways. Moreover, the meaning and value of co-design are challenged by AI/ML systems, where users will always play some role in the production of those systems, for example in producing data used to train models. As such, the extent to which co-design should be considered a suitable approach to ethical AI/ML has recently come into question (Sloane et al., 2020).
Informed by perspectives from critical data studies (CDS; boyd and Crawford, 2012; Dalton and Thatcher, 2014; Kitchin and Lauriault, 2014) and critical digital health studies (CDHS; Lupton, 2016, 2017a), in this paper we outline three pitfalls associated with co-design for ethical AI/ML for health based on common assumptions arising from health care and co-design discourse. We start by presenting our theoretical approach in some detail, outlining three concepts from CDS and CDHS that inform our analysis. We then present a brief description of practices of involvement in design, and involvement in health care, leading into a summary of overarching risks and pitfalls for consideration, and conclude by outlining important directions for future research and practice in this area.
Theoretical approach
Our analysis of involvement in the design of AI/ML for health care is shaped by perspectives from CDS and CDHS. CDS is an interdisciplinary field, bringing together methods and perspectives from across media studies, sociology, anthropology, human geography, and design, among others. While the field is diverse, CDS is united by a concern with the social, cultural, ethical and political challenges posed by data, including how they are constituted within wider data assemblages (Iliadis and Russo, 2016; Kitchin and Lauriault, 2014).
A related field is CDHS. While a number of scholars have engaged critically with how health technologies (including health information technologies) have influenced health and illness (Clarke et al., 2003; Mol, 2008; Rose, 2007), Lupton (2014b, 2016, 2017a, 2017b) outlines the unique challenges posed by digital health technologies, including the ways in which they contribute to evolving notions of ‘health’, ‘illness’ and ‘care’ (Lupton, 2014a, 2016).
Three concepts in particular from these interdisciplinary domains inform the analysis of involvement in health-related AI/ML development presented in this paper. The first is ‘socio-materiality’, which indicates that AI/ML technologies are not simply digital algorithms that happen to be embedded in a variety of devices. Rather, AI/ML technologies are better understood as a collection of digital algorithms, technological devices, telecommunications infrastructures, human goals and human rules that cohere together into ‘assemblages’ that represent specific AI/ML technologies (Kitchin and Lauriault, 2014). If one is to understand the ethical significance of co-design for AI/ML technologies, one must acknowledge how deeply intertwined they are with the human and material realities that shape their existence in the world.
The second concept is ‘surveillance’, which has come to signify the consequences of mass data collection on human experience and action, spurring the development of an entire field of research referred to as surveillance studies (Cheney-Lippold, 2017; Lyon, 2010). The notion at the root of studies of surveillance is that the act of collecting data about peoples’ activities has significant influence on the activities in which they engage. This is true for both individuals and populations and has novel implications in health and health care contexts.
The final concept influencing our analysis is that of the ‘political economy’ of data and digital technologies, referring to the particular economic assumptions and institutions that are supported by AI/ML technologies and the organizations by which they are developed and used. The concept is more accurately described as ‘political economy’ as opposed to just ‘economy’ to represent the inevitable existence of competition for control over resources that comes along with the capitalist economic system in which we find ourselves (Couldry and Mejias, 2019; Zuboff, 2019).
We also acknowledge the importance of clarifying how ‘ethics’ is approached in our paper. We mobilize ethics in two distinct senses. In the first sense, ethics refers to the principles, values and frameworks that are used in the AI industry to guide the development of AI technologies in ways that are deemed by stakeholders allied to the industry to be morally good (Ananny, 2016). A prime example of such principles, values and frameworks is the growing attention to fairness, accountability and transparency in ML as a strategy to enhance the ethical status of AI technologies. This sense of ethics aligns with the study of ‘practical ethics’ or the actions and decisions that people perceive to constitute ethical practice in their everyday work related to AI (Ananny, 2016; Metcalf and Moss, 2019).
However, we also employ the concept of ethics in a second sense. This second sense refers to the normative commitments associated with CDS and CDHS, which are fundamentally oriented towards advancing social justice. For this reason, these fields of work emphasize the cluster of concepts we have outlined (socio-materiality, surveillance and political economy) in normatively motivated analyses that contribute to the achievement of a more just world for all. Such an approach to ethics is explicitly focused on the operations of power and the redistribution of goods to those in positions of relative disadvantage. This second sense of ethics motivates the critical analysis we bring to co-design in this paper and the practical strategies we outline in our concluding sections.
Finally, in addition to the normative and theoretical positions outlined here, we also intend to clarify our view on the concept of co-design. It is important to note that co-design has been represented in the research literature in a variety of ways (Sanders and Stappers, 2008). For example, it may refer to any form of involvement in design or be used to describe a particular form of involvement distinct from related approaches such as participatory design. Involvement may occur throughout the design process, or only at particular stages. It may be employed as a strategy to improve usability and acceptance of technology or to elicit stakeholder values. These differences indicate a diverse field of research and practice, where scholarly communities are concerned with similar topics, but enjoy only partial overlap of assumptions and motivations upon which they are based. Nonetheless, we believe there is a clear conceptual benefit to critically examining the field of research and practice as a whole. This paper, therefore, uses ‘co-design’ as an umbrella term for approaches that involve end-users, patients or publics in any stage of the design process.
Involvement in design
Within design scholarship, formal involvement in design is most commonly attributed to Scandinavian approaches in the late 1970s and early 1980s which attempted to address workplace transformations brought about by computers. Inspired by action research, these early examples involved very little ‘design’ per se, but rather emphasized the importance of providing workers and union officials with the requisite knowledge and skills to understand the potential impacts of computer systems on their work, with the ultimate aim of strengthening collective bargaining strategies (Vines et al., 2013). This is perhaps best exemplified by the collective resource approach, which convened ‘independent study groups’ comprising union members and academic researchers (Kraft and Bansler, 1994). These earliest forms of co-design were explicitly politically engaged, emphasizing productive tension over immediate consensus in arriving at a decision (Björgvinsson et al., 2012b). Worker control and agency were explicit aims (Vines et al., 2013), and most Scandinavian-inspired co-design today is characterized by two core assumptions: that those affected by a decision should have a say in its making, and that stakeholders’ tacit knowledge is essential to the success of a design project (Björgvinsson et al., 2012b).
Today, iterations of co-design methods and principles are reflected in many different but related approaches. User-centred design, for example, is an approach to design that focuses on eliciting users’ ‘real needs’ to improve the ‘fit’ between a user and a technology (Norman and Draper, 1986). User experience design focuses on a user’s expected emotions and attitudes when engaging with designed artefacts (Cooper et al., 2014). Human-centred design similarly emphasizes the incorporation of a ‘human perspective’ in all phases of the design process (Giacomin, 2014). While not exhaustive, these represent more established approaches, where PPI is typically mobilized in support of improving the acceptance or usability of technology.
More recent work has significantly expanded the topics and questions with which co-design engages, with contributions spanning sociology, anthropology, design studies, human–computer interaction (HCI) and computer-supported cooperative work, among others. While a full review is beyond the scope of this paper, the contributions most relevant to our argument tend to fall into one of the following categories.
First are contributions that explicitly attend to power and the social–cultural–political contexts that give rise to and shape co-design. For example, design justice (Costanza-Chock, 2020) is an intersectional approach to design that engages with how designed artefacts impact upon dominant and oppressed groups in society, emphasizing mechanisms for community accountability and control. Similarly, Escobar, in Designs for the Pluriverse (2018), argues for the decolonization of design through collaborative practices that are place-based, resist dependence on markets, and are more accountable to the needs of communities. Mainsah and Morrison (2014) and Harrington et al. (2019) apply a post-colonial lens to design, with Harrington et al. (2019) identifying considerations for more equitable co-design with marginalized groups, such as emphasizing attention to historical context, community access, and unintentional harms of design.
The second relevant category of contributions focuses specifically on advancing new conceptual or methodological approaches to co-design. These include the related domains of futures design, design fiction and speculative design, which contain core participatory elements (Forlano and Mathew, 2014; Harrington and Dillahunt, 2021; Lupton, 2017b; Ollenburg, 2019; Tran O’Leary et al., 2019; Tsekleves et al., 2017; Zaidi, 2019). Approaches inspired by actor-network theory are also highly relevant (Latour, 2005), where co-design is conceptualized as a site for ‘infrastructuring’ or forming publics around ‘matters of concern’ (Andersen et al., 2015; Björgvinsson et al., 2012a; Dantec and DiSalvo, 2013; DiSalvo, 2012; Hillgren, 2013; Pedersen, 2020; Rossitto, 2021; Storni, 2015). Finally, approaches drawing on the related field of values in design (VID) to leverage co-design as a strategy for discovery, analysis and integration of values in technology design, also provide important context for our work (Flanagan et al., 2005; Halloran et al., 2009).
The third relevant category is a small but growing body of literature specific to the design of data-intensive and emerging digital technologies such as AI/ML. Sloane et al. (2020) for example distinguish participation as work (e.g. in generating data), participation as consultation (e.g. in providing feedback) and participation as justice (e.g. longer-term partnerships, collaboration and capacity building) in AI/ML projects, outlining various conceptualizations of what participation might entail in co-design. Bødker and Kyng (2018) critique what they see as the dominant view of participation as a goal in itself, and outline the ‘big issues’ of participatory design which have been highlighted by advanced digital technologies. Shifting attention to the data that are so crucial for AI/ML technologies, Seidelin et al. (2020) focus specifically on how data might be better represented through co-design activities.
This third category of contributions emphasizes practical and conceptual challenges in the co-design of AI/ML and other digital technologies specifically and is most closely linked to the issues we address in our paper. We build especially on insights from this latter domain of work that are directly linked to health and health care. One salient example is the observation that even health-specific AI/ML technologies, and the resources and infrastructures upon which they rely, are typically generated outside of traditional health and medical settings and rely on logics not solely associated with the maintenance or improvement of health (Bot et al., 2019; Sharon, 2018). Other insights we glean from this body of work are more general, but take on a unique meaning in health-related contexts. For example, that patients or publics ‘participate’ in the design of algorithmic systems in ways that are unwitting or involuntary (Vines et al., 2013), such as in producing data upon which AI/ML algorithms are trained (Sloane et al., 2020); that AI/ML technologies can be modified or re-purposed after deployment to accomplish new health and non-health-related goals (Kitchin, 2017); the ‘black box’ nature of AI/ML algorithms (Pasquale, 2015), which limits what can be known and addressed through co-design and has particular salience in understanding decisions about medical care; and the challenge of accounting for how data and insights associated with AI/ML technologies will be used in future health and non-health-related applications (Ruckenstein and Schüll, 2017). These challenges inform our co-design risks and pitfalls presented in ‘Involvement in health care’ and ‘Pitfalls associated with co-design and ethical AI/ML for health’, respectively.
Involvement in health care
PPI in health care has an equally long and complex history; however, formal involvement arrangements can be traced to social movements initiated by feminist, queer and disability rights activists in the 1970s and 1980s (Brown and Zavestoski, 2004; Busfield, 2017). These movements rebuked medical paternalism and sought to legitimate experiential or embodied knowledge in bringing about changes to the institutions of medicine. In 1974, the United Kingdom's National Health Service established Community Health Councils as the first example of institutionally supported PPI, with a mandate to improve local service delivery and accountability (Hogg, 2007). While formal PPI has since taken on different profiles around the world, it continues to hold interest in many areas of health research and practice, including health professions education (Rowland et al., 2018); health care research (Greenhalgh et al., 2019); health policy (Abelson et al., 2004); and quality improvement and innovation (Donetto et al., 2015).
PPI is also closely linked to other influential ideas about how health care should be organized, and to whom health care decision-makers should be accountable. PFCC has been defined as: ‘The experience (to the extent the informed, individual patient desires it) of transparency, individualization, recognition, respect, dignity, and choice in all matters, without exception, related to one's person, circumstances, and relationships in health care’ (Berwick, 2009, p. 560). The basic ideas underpinning PFCC, however, are much older. Hippocrates urged physicians to ‘investigate the entire patient’ (Boivin, 2012). At the turn of the 20th century, Canadian physician William Osler is noted for orienting medical education towards the needs of the patient rather than the disease.
As with co-design, conceptualizations of PPI and PFCC vary considerably. Conceptual discussions of PPI have for example distinguished between democratic and consumerist rationales (Wait and Nolte, 2006); direct/indirect and proactive/reactive forms of involvement (Tritter, 2009); outcome-oriented versus process-oriented involvement (Ives et al., 2013); and domains of involvement, such as direct care, organization or policy (Carman et al., 2013). Others view PPI as existing on a continuum (Gibson et al., 2012), or as an ongoing process of organizing, where patient roles and identities are constantly being formed and negotiated (Rowland and Kumagai, 2018).
Notwithstanding these practical and conceptual challenges, interest in PPI has increased, and as information technologies have matured and become more deeply embedded in health care, strategies and perspectives from design and related fields have also increased in prominence. The fields of health and biomedical informatics (HI), for example, increasingly engage with methods and theoretical perspectives from HCI, despite the paradigmatic differences that have historically made collaboration difficult. While HI and HCI share an interest in the variety of ways people engage with technologies in diverse use-contexts, they often do so via different methods (e.g. experimental research designs vs. design-based methods); publication venues (e.g. peer-reviewed journals vs. conferences); and topics (e.g. clinical settings vs. consumer applications) (Kim, 2019). Some of these divides are narrowing, however, as health services researchers seek new approaches capable of addressing complex design, implementation and evaluation challenges posed by advanced digital technologies (Pham et al., 2016; Shaw et al., 2018).
Today, the focus of health-related technology design is shifting once again, as information systems, and the goals they are intended to accomplish, continue to evolve. Some, for example, propose that HCI and related fields find themselves in a new wave concerned primarily with persuasion (Fogg et al., 2007). AI/ML applications in health are broad, but in many instances ‘nudge’ attitudes or behaviours either through direct intervention or by providing tailored information (Yeung, 2017). At the individual/patient level, for example, research in digital behaviour change incorporates methods and perspectives from design and psychology to accomplish self-management of medical conditions, or health promotion via behaviour modification (Michie et al., 2017). AI/ML has also been used in epidemiological modelling and forecasting (Lalmuanawma et al., 2020), clinical decision support (Montani and Striani, 2019), and health care operations and logistics (Obermeyer et al., 2019). Involvement of patients or publics in the design of advanced digital technologies often emphasizes the inherent patient-centred or empowering qualities of co-design approaches (Capecci et al., 2018; Enshaeifar et al., 2018; Triberti and Barello, 2016) or AI/ML technologies (Topol, 2019), especially when directed to health-related goals. As such, co-design, PPI, and PFCC afford legitimacy to AI/ML technologies for health, though the extent to which they always should, remains a topic of debate.
We see the affordances of AI/ML technologies for health and the challenges they pose to co-design presenting three main risks that give rise to the pitfalls presented in the following section. First, co-design risks adding new harms to health systems as a result of putting forward innovations that have not been designed with unintended consequences in mind. These include the ways in which AI/ML technologies can be instantly adapted or modified to suit new goals, over which patients and publics have no input once the technology has been deployed. Second, co-design risks instrumentalizing patients, using their involvement in the design of an AI/ML technology to make advances towards achieving pre-existing goals established by those in positions of power. In AI/ML for health, this power is increasingly distributed among a diverse range of private actors. Third, co-design that is explicitly focused on the design of technologies risks obfuscating societal injustices when the involvement of patients or publics focuses only on those problems which can be solved by technologies.
We now shift to a description of three main pitfalls associated with co-design for ethical AI/ML for health. We suggest that attention to these pitfalls is essential to determining the appropriateness and feasibility of AI/ML co-design for health and that by addressing them, it may be possible to advance approaches to co-design that better equip it to engage with the normative issues raised by those technologies.
Pitfalls associated with co-design and ethical AI/ML for health
Pitfall #1: The tendency to place disproportionate emphasis on procedures and qualities of involvement
The central point advanced with Pitfall #1 is that ‘better’ involvement strategies (which even in more critical approaches are often indicated by breadth, depth, or impact of involvement on decision-making) do not imply a stronger focus on the entirety of a sociotechnical system, much of which is out of view for both users and designers of AI/ML technologies. Attending to this broader sociotechnical system is especially important when considering outcomes related to AI/ML technologies for health, where novel forms of health surveillance, combined with the increasing value of health-related data, introduce ethically salient issues.
Scholarship and practice related to co-design and PPI tend to emphasize the procedures and qualities of participation or involvement, for example attending to the importance of processual and contextual characteristics of involvement, ‘moments’ or ‘stages’ of involvement, patient and public latitude in decision-making, organizational support for involvement, and the proximate impacts of those characteristics on designed artefacts (Abelson et al., 2010; Frauenberger et al., 2015; Kensing and Blomberg, 1998). The implicit assumption advanced by these viewpoints is that better involvement strategies will result in better design outcomes, as evaluated by the impacts of those strategies on design products. This perspective is complicated by normative and epistemic challenges related to AI/ML for health that result in an overly narrow view of ethically salient issues for health-related co-design, of which we outline just three below. We contend that processes of co-design that encourage a narrower emphasis on practices of involvement risk losing sight of the broader sociotechnical system, and the crucial normative issues embedded in those systems that surround the co-design process.
First, AI/ML technologies are capable of analysing increasingly large volumes of data, introducing forms of surveillance not previously possible. AI/ML technologies by definition discriminate between ‘measurable types’ or classifications of meaning based on available data (Cheney-Lippold, 2017), where classifications are largely invisible to those they are applied to, and determined by those with the power to know their significance. The diversity of actors and interests in digital health means that classifications implicate ‘health’ in many different ways, however. For example, Sharon (2018), drawing on Boltanski and Thévenot (1999), identifies five different orders of worth animating conceptualizations of the common good in health-related research led by large technology companies: ‘civic’ (doing good for society), ‘market’ (enhancing wealth creation), ‘industrial’ (increasing efficiency), ‘project’ (innovation and experimentation), and ‘vitalist’ (proliferating life). This work illustrates the importance of considering the presence and strength of influence of some orders of worth over others in different health-related co-design settings.
Second, and related, AI/ML technologies for health (and the data upon which they rely) are of value to actors increasingly distal to formal health and health care systems. While the everyday consequences of mostly invisible ‘measurable types’ are often appreciated in terms of targeted advertising, search recommendations, or dynamic pricing strategies, they may also form the basis for insurance coverage and premium decisions. Credit rating companies, too, offer medical adherence risk-scoring products which allow payers and providers to identify patients who may be at higher risk for ‘non-compliance’ with medical treatments (Hogle, 2016). Fourcade and Healy (2017) have described these developments in terms of an expanding ‘economy of moral judgement’, where health outcomes are experienced as morally deserved, based on prior ‘good’ or ‘bad’ health behaviours.
Third, these logics can have the effect of responsibilizing health care, pushing monitoring and management further into the domain of individual patients and caregivers (Rich et al., 2019), often through design decisions that nudge health-related behaviours. While frequently lauded for their potential to more effectively engage patients in their own care, these perspectives have been critiqued for oversimplifying the meaning and value of engagement in digital health (Burr and Morley, 2020). Some, for example, point to the unrecognized ‘repair work’ that often accompanies the use of digital health technologies (Forlano, 2020; Schwennesen, 2019). Moreover, as Prainsack (2020) notes: ‘The very instrument of nudging contains value judgements: It assumes that addressing the practices of people directly is better than changing structural factors. It has been shown, however, that a focus on individual practices directs attention and resources away from tackling the more structural, systemic characteristics that shape the problem in the first place’ (p. 11).
These evolving geographies of responsibility (Schwennesen, 2019), asymmetries of knowledge and logics of efficiency would suggest that any claims to the ethical standing of co-design should be evaluated against a much broader set of sociotechnical relations. This would require that co-design not only attend to how AI/ML technologies contribute to ‘medicalization’ or ‘commodification’ as discrete outcomes of individual technologies, but also the ways in which those technologies, once embedded in health and health care systems, transform broader social, political, and economic fields.
Pitfall #2: The tendency to focus attention primarily on the agency of patients and publics in co-design
The central point advanced with Pitfall #2 is that ‘better’ involvement does not mean that people are entirely free from agential constraints that inevitably shape their participation in design activities. These constraints apply not only to patients and publics, but also to others implicated in design processes. Health, like other sectors, presents its own unique constraints which continue to evolve, and need to be more explicitly accounted for when considering the ethical salience of co-design.
Implicit in any undertaking of co-design is the belief that the approach is inherently more ethical than other design strategies not involving patients and publics. In response, critical scholarship has focused on ‘levelling the playing field’ in co-design processes, by articulating strategies for shared language in design (Burrows et al., 2016), or studying how co-design methods might ‘distort’ participation in favour of designers’ interests (Compagna and Kohlbacher, 2014). While these theoretical and practical developments are crucial for enhancing the agency of patients and publics to participate more effectively in design, what is not explicitly acknowledged in many of these perspectives is how limitations imposed on designers also influence design outcomes. This is especially relevant in co-design, where designers are conventionally expected to move from a position of expert to ‘facilitator’ (Björgvinsson et al., 2012a; Farrington, 2016; Sanders and Stappers, 2008), ‘stager of negotiations’ (Pedersen, 2020), ‘agonistic Prometheus’ (Storni, 2015), or ‘creator of a third space’ where knowledge exchange can occur (Muller, 2009).
The practices, goals and perspectives of designers are diverse, however, and influenced by a broad range of interests and values. These include other project stakeholders, professional norms, workplace culture, financial incentives, shareholders and broader economic trends. In health, these also crucially include the social and professional norms associated with biomedicine, and the epistemic privilege of evidence-based medicine (Chin-Yee and Upshur, 2019; Schwennesen, 2019), where ‘evidence-based’ is conventionally linked to the epistemic criteria of truth, validity, and foundationalism, and therefore especially quantitative evidence (Upshur, 2001) – of which AI/ML is expected to be transformative.
Similarly, AI/ML systems are not static objects, but contingent and dynamic. For example, in a study of a physical rehabilitation algorithm intended to reduce in-person clinic visits, Schwennesen (2019) notes that crucially important parameters used to assess the bodily movements of patients were not only determined by patients and physiotherapists, but also the capabilities of the algorithmic system itself. A physiotherapist on the project notes: ‘We had to sit down and be pretty tough in setting priorities… If there were some parameters that dealt with what one does with the arms or something else, then of course, we could say, “The sensors can’t say anything about that”. So of course, that was automatically discarded’ (p. 181).
Acknowledging limits on designers’ agency underscores the importance of also attending to the agential capacities of those leading design and development processes. By focusing attention only on enabling or empowering patients and publics in isolated design events, strategies to improve the processes and outcomes of co-design risk being ineffective, because they fail to attend to the broader range of influences on the activities that take place during the design process. While some scholars have acknowledged these limitations on designers’ agency (Bødker and Kyng, 2018; Hepworth, 2019; Vines et al., 2013), this consideration has yet to take a more central role in co-design discourse, likely as a result of a historical focus on empowering end-users of technologies.
Avoiding the pitfall of attending only to the agency of patients and publics at the expense of the agency of designers requires engagement with this broader ecosystem of design, expanding the view of who and what is considered relevant. Attending to this expanded ecosystem may illuminate strategies for co-design that go beyond the proximate issue of patient or public agency in artefact design, to consideration of the institutional arrangements, technical artefacts, infrastructures, norms and social goals that have made the particular design event possible in the first place.
Pitfall #3: The tendency to neglect the broader contexts of representation & inclusion
The central point advanced with Pitfall #3 is that the inclusion of communities in design processes does not necessarily address problems that lead to marginalization in the first place. Indeed, it rarely does, and instead risks supplanting consideration of the causes of marginalization with easy-to-use technological solutions that may actually exacerbate health inequities.
Representation and inclusion of communities or individuals presumed to be affected by AI/ML are often positioned as a strategy to reduce potential harms associated with designer bias, ignorance or neglect; the more accurately co-design processes represent the perspectives of particular individuals or groups in society, the more the resulting technologies will reflect their interests. However, this view obscures two core challenges posed by AI/ML technologies to involvement and representation. While these challenges also arise in other sectors, we discuss them especially as they relate to health.
First, not all groups benefit equally from AI/ML technologies, even where representation and inclusion are mobilized as a strategy to improve access or reduce bias. Just as an emphasis on the agential capacities of users risks ignoring limitations placed on designers, so too does an emphasis on inclusion risk ignoring the systemic nature of injustice (Bell and Hartmann, 2007; Hoffmann, 2019). Making claims to ethical co-design demands that designers engage with the social determinants of health – the social, political, and economic bases of individual and collective health and well-being. While some scholars have made important advances in attending to intersectionality (Bauer and Lizotte, 2021; Lizotte et al., 2020) and the social determinants of health (Kreatsoulas and Subramanian, 2018; Pierson et al., 2021) in AI/ML models, there remains a risk of failing to account for the broader sociotechnical context that affects their ethical import in the first place. For example, in their study of technological futures with youth participants in a Chicago summer design program, Harrington and Dillahunt (2021) report that the primary challenges students described were racism, police brutality, segregation, poverty and unfair housing policies. Technological solutions solely targeting the proximate issue of a health care outcome will therefore always be partial, and the co-design of health-related AI/ML risks perpetuating institutional injustices.
Second, the aim of representation in design is never completeness or objectivity, but practical usefulness (Asaro, 2000). The ways in which representation and inclusion are operationalized in design processes – typically in the form of ‘average’ users or community members – raises questions about exactly what is practically useful and to whom. Where representation and inclusion obfuscate fundamental questions relating to power and privilege, there is a risk of entrenching the same problematic relations that technologies are intended to resolve. These biases take on new forms when produced algorithmically. Will users have the ability to contest categorizations such as ‘healthy’ or ‘not healthy’, ‘compliant’ or ‘non-compliant’? Will they even be aware of them?
To avoid the tendency to neglect broader contexts of representation and inclusion, co-design can include provisions for reflecting on why particular individuals or groups are being pursued, what they are expected to stand in for, which upstream causes of health-related ‘problems’ might exist, and how co-design and AI/ML can or cannot mitigate those consequences. Where representation of patient and public interests is at stake, co-design strategies can better account for the plurality of values that underpin interest in, and expressions of, representation.
Avoiding the pitfalls: opportunities for critical research and practice
This review has elucidated some of the challenges posed by AI/ML technologies to the patient and public co-design of those technologies. In some cases, AI/ML for health has amplified existing challenges, such as questions of representation and purpose. In others, AI/ML technologies have presented new challenges, such as the capability of co-design to address questions relating to data extraction and the future uses of those technologies. These risks and obstacles apply not only to PPI in the design of individual technologies, but also to health services and systems, and society more broadly. We suggest that many of the methods and perspectives necessary to address these challenges already exist, but would benefit from being brought into conversation with each other more fully.
We have also argued in this paper that co-design, especially in health care, operates as an ambiguous and diverse concept that variously encompasses involvement, participation, representation, empowerment, patient-centredness, democracy, ethics and so on.
To offer some conceptual clarity in response to these issues, we outline three areas we consider most salient to advancing the goal of co-design for ethical AI/ML for health. While these suggestions arise from an analysis centred around discussions of AI/ML for health, they may also serve as a call to other domains where norms and other incentives tend to privilege PPI in design.
Clarifying co-design’s commitment to values
In this paper, we have argued that accomplishing ethical AI/ML for health requires more explicit engagement with the broader social, political and economic fields that give rise to both co-design and AI/ML, and relatedly, more explicit engagement with the values they hope to advance. While some may argue that all co-design involves the illumination of values (through the involvement of diverse publics), or that co-design itself necessarily advances particular values (such as democratic values), we suggest that co-design nevertheless would benefit from more explicit engagement with its normative foundations.
First, with respect to the claim that all co-design involves the illumination of values (through the involvement of patients or publics), we echo the cautions of others who raise the related practical and conceptual challenges of (1) identifying relevant direct and indirect stakeholders and (2) ensuring that the elicitation of values through the participation of those stakeholders does not run the risk of committing the naturalistic fallacy (i.e. conflating descriptions of individual value preferences with normatively desirable endpoints) (Manders-Huits, 2011). Indeed, some of the critiques associated with VID (Donia and Shaw, 2021) may also be productively applied to co-design practice.
Second, with respect to the claim that co-design itself explicitly advances particular values, we suggest that those employing co-design attend to exactly which values co-design advances. For example, the earliest forms of co-design could be said to be broadly committed to the values of ‘workplace democracy’, ‘autonomy’ or ‘quality of work life’ (Iversen et al., 2010). However, those values commitments arose in the context of an expanding science of organization management, and also advanced interests related to work quality, productivity and innovation (Kelty, 2020). In AI/ML for health, we might ask which values are implicitly carried forward with a commitment to co-design, and how those interact with different views on what the ethical status of co-design and AI/ML should be. Recalling the importance of more strongly linking the sociotechnical context of design with the participation of relevant stakeholders, we suggest that co-design would benefit from more explicitly attending to the circumstances that have given rise to its current forms, and using those insights as a basis for situating the normatively desirable futures that arise from co-design practice.
Re-conceptualizing representation in light of algorithmic assemblages
We have also argued in this paper that where co-design is mobilized as a strategy for representation (and it often is), it is important for designers and others to recognize that co-design should not claim epistemic legitimacy or moral authority solely on the basis of the composition of its patient or public participants. Rather, any claims to the representation of patient or public interests in co-design should be scrutinised in light of the technology's broader societal impacts. This is especially important with respect to AI/ML technologies for health, which entail representational forms themselves linked to both health and non-health-related goals.
While this paradox is not easily reconcilable, we suggest that this is a challenge with which co-design scholars and practitioners can more fully engage in future work: that representation of public interests is often a key rationale for co-design, but that AI/ML technologies themselves produce representations that are partial, opaque and temporary. Co-design is in a unique position to forge new ways of conceptualizing representation in the design of AI/ML systems. For example, which forms of representation are inherent to AI/ML (e.g. statistical), and which does co-design attempt to advance (e.g. political or democratic)? When and how might these be in conflict, and which trade-offs do they involve?
Chasalow and Levy (2021) argue that like co-design, ‘representation’ and ‘inclusion’ are ‘suitcase’ words that can carry many different meanings which are not merely semantic, but normative (e.g. political legitimacy) and epistemic (e.g. tacit or inclusive knowledge). As such, we suggest that the co-design community engaged with AI/ML can be more precise when employing them, and explicitly recognize the broader range of values that underpin these concepts in their different forms.
Mapping sociotechnical relations
Part of committing to values in co-design, including those associated with representation and inclusion, involves surfacing the actors and institutions upon which they rely. Doing so is crucial not only to accountability, but relatedly, to illuminating the ability of designers and others involved in co-design to realize any positive vision for their work. Here we suggest that co-design may benefit from further engagement with ‘theory-methods packages’ capable of explicating those relations and deriving strategies for intervening on the sociotechnical system in which a technology and design process is embedded.
Methodological approaches in the social sciences and humanities, for example, may help equip co-design with valuable approaches to account for this complexity. Institutional ethnography (Smith, 2005) has been taken up in sociological studies of health for its explicit focus on identifying the materialized social relations that coordinate people's everyday activities – whether those of patients or designers (Webster, 2020). Other methods already used in design practice, such as stakeholder maps and prototypes, may also be useful when they focus on the broader sociotechnical system of which design and AI/ML are a part. When combined with an explicit and reflexive commitment to values, these may better equip co-design to understand how different contexts affect the agency of designers and other stakeholders to actually realize the futures being envisioned.
Conclusion: recognizing the limits of co-design
Our summary reflection here is a call for design humility (i.e. consistently attending to what professional design cannot do for a given problem). While humility in science and technology has been proposed by other commentators (Jasanoff, 2007; Selbst et al., 2019), this same consideration is at risk of being overlooked in co-design as a result of assuming that by letting patients or publics inform development, co-design is itself an expression of humility. As Irani (2018) notes, all design entails privileged sites and conceptual frames deserving of scrutiny, and as such, any co-design humility might ask: when does co-design substitute for other expressions of public interest and action? What are the epistemic limits of design research as it is conventionally practised? Who is sidelined by professional design and why? And perhaps most importantly, when should we not design?
Moreover, co-design discourse itself is primarily rooted in 20th and 21st-century Euro-North American thought. Ansari (2019) for example, asks: ‘What does it mean to design for people who are not like us, even before we ask whether we should design for people who are not like us? What does it mean to design for people who have different histories, different backgrounds, and different commitments from us? What does it mean to design for people who might relate to the world differently from the way we do?’ (p. 3). Attending to these questions ought to be the starting point for any designer enacting judgement of the ability of co-design to achieve ethical AI/ML for health.
Footnotes
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The authors received no financial support for the research, authorship and/or publication of this article.
