Abstract
“Vulnerability” is one of the terms recently used to discuss ethical aspects of artificial intelligence (AI). Current discussions on AI vulnerability tend to individualize vulnerability, largely neglecting its political dimensions, which are rooted in systems of inequality and disadvantage. This article draws on data from a multiple-perspective qualitative interview study to explore how notions of vulnerability underpin the development and implementation of AI. Results uncover how AI designers use narratives around missing data on vulnerable populations as justifications for the creation of synthetic data that were artificially manufactured rather than generated by real-world events. Although this was a profitable business model for AI companies, these practices ultimately situated long-term care residents as voiceless in the development of AI. This contribution shows how vulnerability is situated in a political economy of AI, which understands the absence of data on vulnerable groups as an opportunity for profit rather than a chance to foster inclusion.
Over the past several years, algorithmic decision-making systems and (large) data infrastructures, often referred to as artificial intelligence (AI), have been increasingly developed for populations that are perceived as vulnerable. One example of this is the development of AI as gerontechnology, that is, technology to support older adults with the needs and vulnerabilities that arise in the process of aging (Chen 2020). The development of AI as gerontechnology is particularly driven by an overarching idea that it will, through the automated collection and analysis of (big) data, support health and care professionals in clinical decision making, remote monitoring, or predictive nursing while at the same time enabling older adults to live and age more autonomously. In a nutshell, these developments are fueled by the hope that AI will provide “better care for older people” (Chen 2020) and support health and care professionals in dealing with the diverse vulnerabilities that accompany later life.
In recent years, it has been noted that such development of AI as gerontechnology also comes with ethical risks that are, most importantly, connected with developing technology for a user group that is perceived as vulnerable. Ample literature has highlighted that when AI is applied in the context of aging, particular ethical precautions need to be considered (Rubeis 2020). Others have warned about age bias in algorithmic decision making and ageism of big data in general (Chu et al. 2022) and argued that AI might act as an amplifier of existing (racial, gendered, intersectional) inequalities that characterize aging and later life (Carver and Mackinnon 2020; Stypinska 2022).
This connects the discourses about AI as gerontechnology with contemporary discussions around ethics and AI, where, next to bias, transparency, and fairness, vulnerability has emerged as a key concept to discuss the ethical challenges of AI, machine learning, and big data (Krupiy 2020; Malgieri and Niklas 2020). Identifying vulnerable groups and analyzing how they are affected by AI has been proposed as one strategy to promote fairness and transparency in AI (Wachter, Mittelstadt, and Russell 2021), and recent AI governance initiatives, such as the AI Act of the European Union, specifically address and forbid AI practices that exploit “any of the vulnerabilities of a person . . . with the objective, or the effect, of materially distorting the behavior of that person . . . in a manner that causes . . . significant harm” (European Parliament 2024). “Vulnerability” is hence increasingly becoming a term within the AI practitioner community, one that needs to be considered when AI systems are being developed, tested, and implemented.
However, it has also been noted that discourses around vulnerability in AI currently lack a common understanding of what constitutes vulnerability, how it is explained and perceived, and which groups are to be understood as vulnerable. Vulnerability definitions in AI research are often fragmented (Rodrigues 2020), and current discourses tend to largely individualize vulnerability (McKeown, Bui, and Glenn 2022) because most commonly, vulnerability is treated as a characteristic of a person or group that requires special consideration by others rather than as an outcome of social inequalities, discrimination, or power. Such accounts of individualized vulnerability, we argue, tend to largely neglect the political dimensions of vulnerability that are rooted in systems of durable inequality, structural discrimination, and political and economic disadvantage (Agostinho et al. 2019).
This article aims to broaden the discussion around concepts of vulnerability in the context of AI. To do so, it proposes a reconceptualization of vulnerability as a relational and more-than-human phenomenon that is necessarily shared, processual, and dynamic. Empirically, it explores notions and understandings of vulnerability that underpin practices of development and implementation of AI by drawing on data gathered through multiple qualitative case studies that explored the development and use of AI in elder care.
AI and the Vulnerabilities of Later Life
The development of AI for an aging population started in the 1990s with monitoring and automated alarm systems that were being increasingly used for older adults aging in place (Chan et al. 1998). This introduction of AI as gerontechnology was fueled by earlier advancements in medicine, where AI was actively pursued as a technological support system for medical diagnosis, patient surveillance, and automated data interpretation (Coles 1977). Ever since then, AI systems tailored for elder care have diversified and are now used, for example, as decision support systems for medical diagnosis (Chen 2018), for the automated analysis of patients’ data for early disease detection and preventive medicine (Chen 2018; Pilotto, Boi, and Petermans 2018; Rubeis 2020), as robotics to enhance precision in surgery (Chen 2018), as conversational agents that provide care support (Sapci and Sapci 2019), or as monitoring and surveillance systems for older adults living and aging in place (Liu et al. 2020; Manzeschke, Assadi, and Viehöver 2016; Mortenson, Sixsmith, and Beringer 2016; Sapci and Sapci 2019).
These examples highlight how early development of AI as gerontechnology has largely focused on the vulnerabilities that accompany later life—particularly those that are connected to biomedical aspects of aging—and sidelined other aspects of aging and later life, such as social or political engagement, social relations, or leisure. In the context of AI, old age tends to be largely “vulnerableized” (Hebblethwaite, Young, and Rubio 2021) because it tends to be seen as a life stage related to biological decline and frailty that requires (technological) assistance and care. So far, older adults have mainly been addressed as a population “whose heterogeneities are systematically disregarded and whose potential vulnerabilities are overused” (Rießenberger and Fischer 2023:190).
In recent years, the discussion on AI in gerontology has increasingly shifted as researchers highlighted ethical challenges that accompany the implementation and use of AI systems relating to the everyday experiences of older adults (Rubeis 2020), particularly those who are in need of care. For example, such approaches have discussed the role of age bias in algorithmic decision-making processes (Chu et al. 2022). A central narrative of this body of literature concerns initiatives to make AI better, either through motivating designers of AI to collect more varied or context-sensitive data to train AI models or through involving older adults in the development of AI through participatory or user-driven design approaches (Chu et al. 2022). Consequently, gerontological research adapted concepts of responsibility (Lukkien et al. 2021) or explainability (Khodabandehloo, Riboni, and Alimohammadi 2021) of AI and argued for the need to engage with value-centered and participatory design (Chu et al. 2022), educational initiatives, and ethical reflections when implementing AI in gerontological fields of practice (Rubeis 2020). Although these more critical engagements with AI in the context of aging aim at establishing more responsible, fair, or transparent AI practices in the lives of older adults, they tend to preserve the image of older adults as inherently vulnerable, highlighting that the AI community should pay particular attention to older adults as a vulnerable user group of AI.
Recently, scholars at the intersection of age studies and science and technology studies (Peine et al. 2021) have started to reflect on this “vulnerableization” of old age in the context of technological innovation in more depth. Such approaches have, for example, highlighted the problematic nature of an aging and innovation discourse that largely positions, on the one hand, technologies as a universally “good” and capable problem solver and, on the other hand, older adults as universally frail and vulnerable. This positioning of older adults as a population that is “in need of help and worthy of help,” Neven and Peine (2017) argue, ultimately creates a moral high ground that positions technological innovations as “obviously the right thing to do” (Neven and Peine 2017). In such a cultural climate of aging and technology, older users of technology are reduced to bodies with biomedical needs (Greubel, Moors, and Peine 2021), and their vulnerability is used as a justification and maxim for technology development. Such a climate, we want to add, also tends to strip older adults of their agency and in turn attributes agency to the technologies promoted to assist older adults with their vulnerabilities.
Such critical engagements with vulnerability in the context of technological innovation highlight the need to rethink the vulnerability of later life not as a stable phenomenon but to engage more deeply with the practices of creating vulnerability when new technologies are imagined, designed, and implemented for older target groups. They also invite us to think more deeply about how (human) vulnerability is created through the relationships between humans and nonhumans, particularly technological artifacts. In the following, we hence offer two expansions to these discourses around vulnerability of later life in the context of AI: first, by proposing a relational and collective understanding of vulnerability in later life and second, by theorizing vulnerability in the context of AI as a more-than-human phenomenon.
Vulnerability Assemblages: Theorizing Vulnerability in the Context of AI
In recent decades, vulnerability has been the subject of multiple academic debates spreading across diverse disciplines such as medicine, ethics, and the social sciences, usually drawing on diverse perspectives and methods. In a most general sense, Hauskeller (2019) refers to vulnerability as the possibility of being wounded (stemming from the Latin “vulnus,” meaning wound), which connects vulnerability to risk, the human openness to harm, and eventually, the inevitability of death (Liedo Fernandez and Rueda 2021). Bozzaro, Boldt, and Schweda (2018) propose to differentiate two stances toward theorizing vulnerability. On the one hand is a restrictive stance that theorizes vulnerability as a characteristic that defines certain groups as vulnerable and differentiates them from those that are perceived as not vulnerable (e.g., Schröder-Butterfill and Marianti 2006). On the other is a broad and collective stance on vulnerability that situates vulnerability as an ontological category and feature of human existence and embodiment (e.g., Butler 2020; Fineman 2008).
Scholarly discourses around the vulnerability of older adults within gerontology and age studies have largely drawn on a particular stance on vulnerability that identifies older adults as a particularly vulnerable group that can be separated from others (particularly middle-aged subjects). Such thinking about vulnerability also typically situates the emergence of vulnerability in later life in the biological changes that accompany later life. As such, the vulnerability of an older individual is understood as a heightened openness to chronic disease or functional limitations, which results from the exposure of an older individual to a potential threat (Schröder-Butterfill and Marianti 2006). Such approaches toward vulnerability in later life, however, have also been criticized for largely neglecting the heterogeneity of the older population, underestimating the agentic role older adults have in reacting to and negotiating diverse vulnerabilities in later life, and overlooking the political and structural circumstances that create the vulnerability of an older population (Gallistl et al. 2023).
Our consideration of vulnerability assemblages is a relational and more-than-human one that seeks to trouble, on the one hand, ageist assumptions about older adults as inherently in decline, risky, weak, or vulnerable that are dominant in discourses around technology development for older adults. On the other hand, we question humanist concepts of vulnerability that have situated vulnerability as an existentially human characteristic and sidelined more-than-human aspects of vulnerability. Most importantly, such an approach challenges ideas of vulnerability as an attribute or characteristic of an individual or group and instead focuses on practices of vulnerableizing, that is, the mechanisms and processes that practically render (human or nonhuman) entities vulnerable. Rather than exploring the vulnerability of older adults in the context of AI, we address the question of how older adults are being made vulnerable in the context of AI, drawing attention to the practices through which older adults are “vulnerableized” (Hebblethwaite et al. 2021) by the systematic, institutional, and cultural circumstances of AI development.
This understanding of vulnerability is, on the one hand, inspired by the work of Judith Butler, who highlights how vulnerability is not a (fixed) attribute of an individual or group but something that is established in relations. Vulnerability, Butler (2021:177) writes, “is contextual since it belongs to the organization of embodied and social relations” and hence “not a subjective disposition. Rather, it characterizes a relation to a field of objects, forces and passions that impinge on or affect us in some way” (Butler 2016:25). The question of who is seen as vulnerable is hence not a biological or individual one; rather, vulnerability emerges as an outcome of processes of social marginalization that construct (older) adults as more vulnerable and—ultimately—less grievable than the general population.
On the other hand, we consider vulnerability as a more-than-human phenomenon. We hence question vulnerability as an inherently human characteristic or trait and instead highlight the importance of shared vulnerability that emerges in the relationships between humans and technical systems that are developed to support, surveil, or act together with humans. Vulnerability is hence understood not only as a human characteristic but also as something that can be attributed to a (more-than-human) system of relations. “Vulnerability” as a term is also utilized within technological communities, for example, to describe weaknesses and security flaws in computing systems such as software and operating systems (see, e.g., the definition by the European Union Agency for Cybersecurity, n.d.). Although the concept of technological vulnerability used in our research goes further than the definition in its strict technical sense, we nonetheless address and highlight the weaknesses and flaws of algorithmic systems in practice. Using “vulnerability assemblages” as a term thus (a) allows us to take up a concept that is used in different academic and technological discourses and (b) enables us to question how distinctions between human and nonhuman vulnerabilities emerge in practice.
This is inspired by posthumanist scholars like DeFalco (2020), who have, in recent years, increasingly troubled the assumption of vulnerability as an inherently human characteristic. Following Barad (2003), DeFalco (2020) questions the “givenness of the differential categories of ‘human’ and ‘nonhuman’” and instead proposes to explore the practices through which the boundaries between human and nonhuman entities are (de)stabilized in caring arrangements. Other authors have put forward assemblage theory as a useful concept to study how vulnerability or frailty in later life emerges out of more-than-human relations. Cluley, Fox, and Zoe (2021) put forward the notion of a frailty assemblage to highlight how human and nonhuman materialities are entangled in creating the frailty attributed to an aging body. In more-than-human assemblages of frailty, materialities (technological or not) continuously “establish the on-going ‘becoming’ of the frail body” (Cluley et al. 2021:417). Along similar lines, Endter et al. (2024:91) propose the term “ageing assemblages” to highlight the “relational terrain between older subjects, objects, technologies and environments” that becomes “the central phenomena to study, and the central place for critique on how old age and later life are rendered marginal and vulnerable.”
Applied to the present case of the development of AI for institutional long-term care (LTC) settings, our chosen approach implies acknowledging that even though an aging body might be vulnerable due to chronic illness, frailty, or multimorbidity, bodies—especially those of older adults—are also practically vulnerableized through the institutional circumstances that are created to manage their vulnerability. From such a perspective, we ask how and why entities (humans, nonhumans, technologies) are constructed as vulnerable (and others as not vulnerable) rather than questioning the consequences and nature of the vulnerability of humans. This might include AI systems as gerontechnologies that are created to serve, surveil, and assist older adults in their care needs, such as automated fall-detection systems, social robots, or health tracking devices. In line with an approach of doing vulnerability (Gallistl et al. 2022), we ask how the vulnerability of older adults is practically created through the development and introduction of AI and which notions and ideas of vulnerability underpin the work of designers and developers, care home managers, and older adults living with such technologies.
In the following empirical analysis, we explore such a vulnerability assemblage, which emerged in the development of an AI fall detection system for older adults living in LTC institutions.
Methods
AI for Fall Detection and Fall Prevention
Automatic fall detection is a common AI application in the context of aging (O’Connor 2022), in which an algorithm is used to detect falls automatically and alert a caregiver for assistance. Automated fall detection can draw on different technologies to locate and identify falls; the most common are wearable solutions, inertial sensors, or vision sensors (Lapierre et al. 2018). Even though the technology readiness level is, on average, still low, recent tests of AI-based monitoring systems found that they help reduce the time older adults need to spend on the ground after a fall (Bayen et al. 2021). Generally, the algorithms used in automated fall detection systems rely on data mining techniques, support vector machines, or fuzzy logic methods, although publications often do not give enough information to clearly categorize the algorithm that is used to detect falls (Lapierre et al. 2018).
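As Lapierre et al. (2018) note, publications often leave the underlying algorithm unspecified. Purely as an illustration of the support-vector-machine family of approaches mentioned above (and not of any system discussed in this article), the following minimal sketch trains a binary fall classifier on invented sensor features; the feature names, numbers, and clean separability are assumptions for illustration.

```python
# Minimal illustrative sketch of an SVM-based fall classifier.
# The features (peak acceleration, post-event stillness, height change)
# and the synthetic training data are assumptions for illustration only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic event windows: falls show high peak acceleration (g),
# long stillness afterward (s), and a large drop in height (m).
falls = rng.normal([3.0, 8.0, -1.2], 0.4, size=(50, 3))
daily = rng.normal([1.2, 1.0, -0.2], 0.4, size=(50, 3))  # ordinary activity
X = np.vstack([falls, daily])
y = np.array([1] * 50 + [0] * 50)  # 1 = fall, 0 = no fall

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)

# Classify a new event window; expected output: [1] (flagged as a fall).
print(clf.predict([[2.8, 7.5, -1.1]]))
```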
Two common challenges of such systems have been documented: a high false-alarm rate when these systems are used and a lack of availability of training data. Particularly in the case of fall detection in later life, Bui and Alaei (2022) note there is not enough high-quality recorded data available for public access. The lack of real fall data for training machine learning and computer vision techniques has hence been described as the main obstacle for the development and implementation of such technologies (Bui and Alaei 2022; Khan and Hoey 2017). As a solution for this, recent advancements have been made around synthetic data creation, for example, through virtual reality (Zherdev et al. 2021).
In our empirical case, the AI system that was used in LTC served as a fall detection and prevention tool, with the core functionality of being able to identify instances where a person fell or was in danger of falling. In both cases, the AI system would identify this as an incident and notify a caregiver, who would then be required to take action. The underlying machine learning model was trained with both synthetically produced and real-life data and was based on deep learning that could identify items, people, and motions in the room. Data collection by the device was accomplished through a 3D sensor that was mounted in the upper corner of each resident’s room and would gather depth information—in the form of vectors—about the space. The data used in this case were sensor-gathered 3D depth data, which, according to the developers, had a number of benefits over other types of data. Because 3D depth data were not impacted by light and therefore allowed the system to operate at night, they were considered superior to other visual data, such as video data. Furthermore, the developers perceived the benefit that such data would provide older care home residents with enhanced privacy protection because they made instantaneous human identification impossible. Yet once in actual use, the identification of both care home residents and care staff was rather easy because distinguishable elements were apparent and the AI systems were confined only to certain rooms of residents. Body shapes, the arrangement of the room, and clear objects such as furniture made it quite easy for caregivers to identify the individuals who were recorded by these devices.
As mentioned, the AI system was promoted as both a fall detection and fall prevention tool. This means that the model could identify (predetermined) risk factors for falling and notify caregivers accordingly, ideally so that they could take preventive measures. The predetermined risk factors could be adjusted by care staff and care home administration, which included functions such as:

raiseup: This is the earliest alert type. The system alerts as soon as the person raises in bed. Keep in mind that in this setting a restless person will trigger a lot of alerts.
situp: This alert type is identical to sitting sideways on the bed.
standup: Here our system alerts if the person gets out of bed. (Fall detection system manual)
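To make the adjustability described above concrete, the following sketch expresses such per-resident alert settings in code. Only the alert-type names (raiseup, situp, standup) come from the manual excerpt; the data structures, room identifiers, and configuration values are hypothetical assumptions, not the vendor's actual implementation.

```python
# Illustrative sketch of per-resident alert configuration, using the alert
# types named in the system manual. Everything else here is assumed.
from dataclasses import dataclass

# Escalating posture stages as the model might classify them.
STAGES = ["lying", "raiseup", "situp", "standup"]

@dataclass
class ResidentAlertConfig:
    resident_id: str
    alert_from: str  # earliest posture stage that triggers an alert

    def should_alert(self, detected_stage: str) -> bool:
        # Alert once the detected posture reaches the configured stage.
        return STAGES.index(detected_stage) >= STAGES.index(self.alert_from)

# A restless sleeper configured for the latest stage to limit false alarms;
# a resident at high fall risk configured for the earliest stage.
restless = ResidentAlertConfig("room_12", alert_from="standup")
high_risk = ResidentAlertConfig("room_07", alert_from="raiseup")

print(restless.should_alert("situp"))   # False: only standup alerts
print(high_risk.should_alert("situp"))  # True: raiseup or later alerts
```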
Finally, a function of the AI system was to detect the absence of older residents from the observable area of the fall prevention and detection system and, again, notify caregivers in such instances. Here, care home administration could adjust the time frame of absence, meaning that for some residents, the system would only alert after 30 minutes, and for other residents, the system could already alert after only 5 minutes of absence. For all these functions, care staff were notified through their local phone system on a regular basis, whereas older residents had no means to interact directly with the sensor or the AI system.
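A minimal sketch of this absence-alert logic follows, assuming a simple per-room lookup of administrator-set time spans; the 5- and 30-minute values mirror the examples above, while all identifiers and the lookup mechanism are hypothetical illustrations.

```python
# Illustrative sketch of the absence-alert logic described above: staff are
# notified once a resident has been out of the sensor's view for longer than
# a per-resident, administrator-adjustable time span. Names are assumed.
from datetime import datetime, timedelta

ABSENCE_THRESHOLDS = {
    "room_07": timedelta(minutes=5),   # alert after 5 minutes of absence
    "room_12": timedelta(minutes=30),  # alert after 30 minutes of absence
}

def check_absence(room: str, last_seen: datetime, now: datetime) -> bool:
    """Return True if care staff should be notified for this room."""
    return now - last_seen > ABSENCE_THRESHOLDS[room]

now = datetime(2022, 8, 1, 3, 0)
print(check_absence("room_07", now - timedelta(minutes=10), now))  # True
print(check_absence("room_12", now - timedelta(minutes=10), now))  # False
```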
Data Collection and Analysis
Empirical work for this study was set in Austria and explored an AI company that developed AI fall detection systems and LTC facilities that were using such systems to monitor older adults’ behavior. Fieldwork was conducted in multiple instances, with some data collected between May and October 2021 and a second round of data collection between July and October 2022. The process of gathering data was informed by a multiple-perspective qualitative interview design (Vogl et al. 2018), which sought to comprehend the various perspectives held by those involved and the relational dynamics between diverse actors. The database consisted of 18 qualitative interviews with four different user groups that were seen as relevant in this case: software designers who worked at the AI development company, care staff who worked at the LTC facilities and had been in contact with AI fall detection systems, older residents of LTC facilities who lived with AI fall detection systems, and two representatives of an interest group that advocated for the rights of people living in institutional settings (see Table 1). In the care facility, two researchers conducted participant observations for roughly 24 hours, including three day shifts and one night shift. In addition, a participatory workshop with an informatics doctoral candidate, who also did research on the AI fall detection system and its underlying deep learning models, allowed us to gain insight into the algorithm’s operation and the production and collection of data used to train the deep learning models. All interviews were transcribed verbatim in German, and field notes were also taken in German.
Table 1. Overview of the Collected Data.
Situational analysis, a grounded-theory-based qualitative analysis technique, was used to analyze the data (Clarke, Friese, and Washburn 2015). The goal was to identify pertinent actors and their connections in the situations encountered during data collection, in particular regarding the relationships and practices that shape the interactions between the AI fall detection software and the other human and nonhuman actors in LTC facilities. For this article, we focused our analysis on the different notions of vulnerability that gained relevance in the material gathered and reflected on how this vulnerability was unequally distributed between the actors involved in the analyzed actor network. After using MAXQDA2022 to openly code observation protocols and interviews, four researchers created situational maps of the coded material.
Ethics
Conducting research on vulnerability and research including participants who are perceived to be vulnerable groups come with several ethical challenges, which the project team carefully evaluated and acted on before data collection. Before data collection commenced in the field, an ethical assessment was conducted by the three principal investigators of the project. The project was also submitted for evaluation to the ethical evaluation committee of the Technical University of Vienna. Responsible research practices that were implemented after this initial ethical reflection included the use of information and consent sheets of different levels of complexity for the participants of the interviews. The management of the LTC facility and participants were briefed on the procedures for collecting data at the facility twice. This included explaining to them all the measures for responsible research, including the right to withdraw consent at any time during or after the interview, and obtaining written informed consent. This also ultimately led to the decision by the principal investigators to exclude one interview involving an older resident from the data collection and the analysis because it had not been entirely obvious whether the participant had clearly given consent to participate in the study.
Results
The findings of the empirical work on the AI fall detection and prevention system uncover three dimensions of vulnerability assemblages.
First was how AI designers used narratives around missing data on vulnerable populations as justifications for the creation of synthetic data, which were artificially manufactured rather than generated by real-world events in the lives of LTC residents. Databases of synthetic data were a major source of value exploitation for AI development companies because synthetic data creation was economically more profitable than collecting data from subjects that were perceived as vulnerable.
Second, although this was a profitable business model for AI development companies, these practices of synthetic data creation ultimately situated LTC residents as voiceless in the development of AI, and their everyday practices were misrepresented in the systems that were created to be used by them.
Third, these practices of synthetic data creation also ultimately situated the AI system itself as vulnerable—because it was immensely prone to false alarms and open to errors, malfunction, and failure. These aspects of technical vulnerability, however, tended to be black-boxed in the analyzed vulnerability assemblage: Although the vulnerability of older adults was well visible, the vulnerability of the AI system remained largely unnoticed by the involved actors.
Collecting Data on Vulnerable Populations
First of all, results uncovered how designers thought of LTC settings and the older adults living in these settings as being characterized by a certain type of vulnerability that needed to be accounted for in the development and implementation of AI. Software designers routinely described in the interviews that the development of AI for these settings called for massive amounts of training data, which was seen as a challenge in the context of elder care:

The issue is that you don’t have these large data sets that you actually need in order to get the right performance. And at the same time, this area is also very sensitive, because you’re essentially recording at home. And that means it’s naturally an incredibly sensitive area. . . . That makes it extremely difficult to generate large amounts of real data. (Technology Developer 4)
Additionally, for the system to work properly, there needed to be a significant amount of data depicting different types of falls available, which then could be classified by the AI system: “So mainly actually recognizing the event [the fall] itself. Um, that’s what the AI does. So that’s been trained before, with quite a lot of data, quite a lot of exemplary data of falls and yes, that’s the core function of AI. The classification of events, so to speak” (Technology Developer 1).
This revealed an ethical tension between something that was seen as desirable for software designers from a technical perspective—having vast amounts of data—and something that was seen as undesirable from a humanist perspective—older adults falling. Gathering visual data on older adults falling was hence seen as being challenging because LTC facilities were difficult to access, collecting data was seen as ethically challenging, and the events that were monitored would not happen often enough. As one software designer explains: “In in nursing homes, you would usually not get the data, obviously because of NDAs [nondisclosure agreements], right? So, it’s very difficult to publish comprehensive real data sets in this field because then people would have to agree basically. I mean it’s an intimate thing, right?” (Technology Developer 2).
For this reason, it was common in the interviews that software designers would look for ways to collect data on older adults’ falls without having to install sensors in the rooms of LTC residents. Several strategies were named to account for this. For example, they tried to gather data in hospitals rather than LTC settings because these were seen as more easily accessible. In addition, software designers turned to synthetic data creation, in which visual data on older adults falling were not observed in real-life settings but created through automated software and through data gathered from software designers wearing motion capture suits, in which they tried to imitate different situations of falling down. For these practices of synthetic data creation, software designers would hence pose for the fall detection sensors and use their own bodily movements as reference points for the training data. One developer explains: “With such a simulation, you can include all the scenarios you want, and you can call up these scenarios whenever, and you don’t have to wait a year or days, until someone falls” (Technology Developer 3).
Obviously, this also led to some challenges in the development of fall detection AI. The synthetically created data were seen as highly decontextualized because they were not observed in real-life settings but were created in data laboratories (mainly the offices of software designers). To account for this decontextualization, software designers later had to add diverse models and filters to enrich the synthetic data with the complexity of real-world situations. These models and filters were often referred to as “noise” that had to be added because otherwise, the synthetic data were seen as too clean to work once the AI fall detection system was implemented in LTC settings: “Of course, the problem is always to recreate a simulation that is real enough. You have to use different noise models [to account for] all the imperfections of reality, or the weaknesses of the camera in reality” (Technology Developer 3).
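To illustrate what such a “noise” step might look like in practice, the following sketch degrades a clean synthetic depth frame with Gaussian jitter and random invalid pixels. This generic noise model is our assumption for illustration, not the company's actual pipeline.

```python
# Illustrative sketch: adding sensor-like imperfections to a clean synthetic
# depth frame before training. The noise model (Gaussian depth jitter plus
# random pixel dropout) is a common generic choice and assumed here.
import numpy as np

rng = np.random.default_rng(42)

def add_sensor_noise(depth_frame: np.ndarray,
                     jitter_std: float = 0.01,
                     dropout_rate: float = 0.02) -> np.ndarray:
    """Degrade a clean synthetic depth frame (values in meters)."""
    noisy = depth_frame + rng.normal(0.0, jitter_std, depth_frame.shape)
    # Real depth sensors return invalid (zero) pixels, e.g. on reflective
    # surfaces; simulate this with random dropout.
    dropout = rng.random(depth_frame.shape) < dropout_rate
    noisy[dropout] = 0.0
    return noisy

clean = np.full((240, 320), 2.5)  # a flat synthetic scene, 2.5 m away
noisy = add_sensor_noise(clean)
print(f"invalid pixels: {(noisy == 0).mean():.1%}")  # roughly 2%
```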
Why did designers still engage in these practices of synthetic data creation if they were perceived as being rather challenging? Two reasons for this emerged in the analyzed material. First, software designers saw a major advantage of these data because they did not have to engage with the contexts of LTC or the (vulnerable) older adults living there. Although all software designers acknowledged that a basic understanding of “the scene” was necessary to be able to develop AI for these settings and that engagement with LTC facilities was necessary to ultimately create a market for the developed system, it was enough to gather information about the scene from sales representatives or marketing staff who would visit LTC more regularly to communicate with possible customers: “If you are just the AI developer, without having to do the data collection, but really [you] just have to represent the network. You don’t have to know anything about the whole [thing]” (Technology Developer 3).
Second, designers stuck to these rather challenging practices of synthetic data creation because they were seen as economically more profitable than collecting data in real-world settings. This was because access to LTC facilities was seen as complicated and time-consuming (and hence costly) and, simultaneously, data on older adults’ falls that were usable as AI training data were scarce. Having a vast database—even if mostly of synthetic data—was seen as a major economic resource for the AI development company. During the interviews, it was acknowledged that the development of the AI system in question was situated within a political economy of AI in which data were seen as capital that AI companies could profit from:

I mean all the profit through AI is made with RGB [red, green, and blue primary color model] data, right? Because there you basically can scrape the internet and then you have data sets of millions of images. [There are a] couple players who really push AI forward [that] are usually like the big guns like Facebook, Google. I mean they are worth billions because they have this data. (Technology Developer 2)
The first aspect of the vulnerability assemblage that emerged in the analyzed case concerned data vulnerability—and hence the question of which groups get represented in real-world data, in which ways, and how vulnerability emerges as a consequence of not being represented (enough) in data. The vulnerability that emerged here was a vulnerability through misrepresentation—because the lives of older adults, their manifold experiences with falling, and the diversity of falling in later life were hardly represented in the databases that were used to train the AI system.
Experiencing AI Vulnerability
How did these practices of creating and collecting data on populations that were perceived as vulnerable influence the everyday experience of care staff and older LTC residents with the AI fall detection system? Two central aspects emerged in the analyzed interviews. First, our study highlights that these highly decontextualized forms of training data failed to account for the complexity of the real-life living situations of older adults in LTC. This resulted in a high number of false alarms because the system would routinely misinterpret diverse practices of older care home residents as falls. Care home residents hence actively tried to change their everyday practices to avoid these false alarms, which were perceived as annoying and as opaque with regard to why they had happened. The designers also acknowledged this during the interviews: “I would say that our fall detection is already very good, but what happens from time to time is that [there is], for example, a rollator walker or wheelchair that our code has never seen. . . . And then there are false alarms” (Technology Developer 1).
For older care home residents, these false alarms had a major impact on their everyday lives because quite a lot of different activities—not just falling but also sitting or lying down, doing sports exercises, picking up things from the floor—could set off an alarm by the AI fall detection system, as a resident illustrates: “I’m lying in bed and have the walking aid next to me and then they come in, ‘Has something happened?’ I say: ‘No, I’m asleep’, ‘Well, because the thing has gone off, hasn’t it? The sensor’” (Resident 11).
The system created a situation where certain daily practices of older adults were framed as potentially risky. For example, one LTC resident shared that she would align all her shoes close to the wall in her room because she could not see well anymore and needed to know where exactly her shoes were so she could still find them. However, when she tried to pick up her shoes from the floor, the system’s alarm would often trigger, meaning that care staff had to come and check up on her.
Second, our analysis revealed that these forms of decontextualized data also marginalized the position of older LTC residents in the practices of AI development because they had no way of engaging with the system and its practices of data collection and analysis. Because the data used as training data were not actively collected by the AI development business in the LTC facility, older residents had little to no information about who this company was, which data were being collected (and for which purposes), and how (if at all) they could intervene in the implementation of the AI systems. In the assemblage of the AI system, older LTC residents were positioned as a vulnerable population that had no power in these socio-technical systems of AI development. This became visible, for example, through the sensor itself, which gave residents no indication of when an alarm was being set off. LTC residents were often left wondering whether an alarm was (ever) set off: “In the beginning you keep looking because it’s new and then you keep thinking, ‘Will it go off? Will it ever start?’ and it never does. But by now I don’t think that it will, because it never started, so it never will” (Resident 6).
This marginalization of older adults in the analyzed assemblage also became visible when older LTC residents described the system from their perspective. Often, they had only a vague idea about what the system was doing, whether it was collecting data, or what could be done if they wanted more information about it: “I never see anything. Maybe when I’m asleep it turns on or something. I don’t know [7 second pause]. It looks like a little black line. . . . No, I don’t know what that is. Did you ask a nurse?” (Resident 9).
Vulnerability Assemblage
What do these results teach us about vulnerability in the context of AI? First, our data highlight how vulnerability was clearly attributed to older LTC residents by software designers, care staff, and also the interviewed LTC residents themselves. During the interviews, it became clear that from these different perspectives, older residents were the group that was most obviously constructed as vulnerable because they were seen as a group in need of constant and continuous care. Care staff routinely highlighted this in the interviews, particularly when talking about how older LTC residents were too frail and vulnerable to understand and make sense of the AI system: “First of all because they don’t know, they don’t question [the system]. I think just because of their cognitive state. And I think when they come to us, they don’t know that [the system] exists, so it doesn’t exist for them, right?” (Care Personnel 8).
The vulnerability of the AI system was, however, less visible to the involved actors in the assemblage. Even though the system was indeed vulnerable, given that it was subject to care by all interviewed actor groups, its vulnerability was hardly acknowledged during the interviews. Rather than through vulnerability and care, actors discussed the system through the logics of functionality, (technical) intervention, and performance. Our example highlights that the vulnerability we found in our empirical material was a shared vulnerability, created through the complex interplay between older bodies, their particular living and care arrangements, a technical system, and the logics of AI development situated in an economic system. This resulted in a practice of vulnerableizing older adults in several instances, mediated through the AI fall detection system, even though both older adults and the AI system could be perceived as vulnerable. Yet the vulnerability of the AI system was less problematized or rendered visible.
Discussion
In the evolving landscape of AI ethics, “vulnerability” has emerged as a key term to discuss the social and ethical relevance of AI in research and policy discourses. Special consideration of groups that are perceived as vulnerable in the development of AI was recently incorporated into various declarations and guidelines that regulate AI research, particularly in clinical settings (Malgieri and Niklas 2020). For example, the EU AI Act highlights that AI systems need to pay “particular attention to situations involving vulnerable groups” and that because of this, “vulnerable persons should receive greater attention and be included in the development and deployment and use of AI systems” (European Parliament 2024). Such approaches, however, tend to largely individualize vulnerability and render invisible the larger structural and institutional mechanisms that render groups and individuals vulnerable in a political economy of AI.
To broaden this discourse around AI vulnerability, this article explored the notions and concepts of vulnerability that underpin the development, implementation, and use of AI systems. Drawing on data from a qualitative case study on AI fall detection systems that are used in elder care settings, our analysis explored how old age is practically vulnerableized through the specific institutional, cultural, and systematic circumstances of AI development.
First, our empirical analysis highlighted that vulnerability in the context of AI can be largely understood as data vulnerability. One of the most important aspects of vulnerableizing older adults in the analyzed assemblage concerned the question of the representation of older adults’ everyday lives in the data used to train AI models. This is in line with recent advancements in the sociology of AI that have raised concerns about age bias and ageism in AI (Stypinska 2021, 2022). Chu et al. (2022) highlight that there are currently not enough data from older adults available for training AI models and that existing data sets often show an explicit or implicit age-related bias. This was also found in our study, which further problematizes how practices of synthetic data creation enhance the age bias of existing data sets.
Second, our analyzed case highlighted how older adults also experienced user vulnerability because they were largely positioned as passive users of AI, which further marginalized their role in the analyzed vulnerability assemblage. Whereas developers and care staff were attributed active roles in creating the assemblage in question, older adults were largely put into a passive position, mainly by having no opportunity to interact with the system on their own terms. These practices of rendering the active engagements and agency of older adults invisible through technologies have been well documented in the critical literature on aging and technology, where the phenomenon of the “invisible older user” has been discussed in depth (Chu et al. 2022; Mannheim et al. 2022; Rosales and Fernández-Ardévol 2019) to describe how ageist assumptions about older adults as frail, vulnerable, or incompetent with regard to technologies are inscribed into technical systems.
Third, our analyzed case enables us to learn important lessons about more-than-human vulnerability. Our data highlight that both—AI systems and older users—were in fact vulnerable because they were both open to harm and/or malfunction and in need of care from different actors (be it care workers or technology designers). We argue that in the context of AI, there is power in acknowledging this vulnerability of AI, particularly at a time when discourses around AI position it as an autonomous, objective, and powerful solution to the pressing problems of aging (Gallistl et al. 2024). Such notions of shared vulnerability—between humans and nonhumans—move away from an interventionist gaze on AI and instead remind us that vulnerability is an “inherent characteristic of technological cultures” (Bijker 2009), even though this technological vulnerability tends to be black-boxed by ideologies that promote technological objectivity and superiority (Boyd and Crawford 2012). A perspective on shared vulnerability in the context of AI would move away from such powerful, interventionist discourses around AI and instead highlight the importance of care for the nonhuman that characterizes human-machine interfacing (Lipp 2023).
The article also sheds light on the politics and ethics of classification in the context of AI. One major aspect of vulnerableizing old age in the present case concerned the question of which practices of older adults would set off an alarm and which ones would not. Because the training data largely did not match the lived realities of older adults in LTC facilities, frequent false alarms classified numerous activities of older adults as risky (and as potentially contributing to their vulnerability). This highlights how classifications through AI systems are not neutral representations of reality but “a deeply moral project often implicated in social stratification” (Joyce et al. 2021), given that these (mis)classifications tended to have consequences for older adults in their everyday lives.
Conclusion
This article contributes to an emerging sociology of AI by highlighting how AI is socially shaped in practice. Although AI as a socio-technical system is often black-boxed and algorithms often remain opaque to those humans interacting with them, our study adds significant knowledge on how AI is developed, implemented, and marketed for user groups that are perceived as vulnerable. It also demonstrates how AI practitioners make moral appeals, in the present case to protect vulnerable older people, to shore up support and legitimacy for their work.
Most importantly, our study highlights the value of data on vulnerable populations in a political economy of AI, a value that can be exploited for profit by AI companies. This contribution adds to recent advancements in sociological studies of AI by showing how vulnerability is exploited in a political economy of AI that understands the absence of data on vulnerable groups as an opportunity for profit making and exploitation rather than a chance to foster inclusion and equality (Carver 2023; Carver and Mackinnon 2020).
In our empirical case, this exploitation could be observed in two ways. First, the synthesis of data to train the deep learning model established both a database of valuable—albeit not real—events and a more or less functioning deep learning model. Although both components were perceived as incomplete and could even be considered vulnerable, in combination they were functional enough to create value for the AI company because the AI fall detection system could be put to use in real LTC settings and has been acquired by LTC providers.
This, in turn, leads to a second instance of exploitation of a population that is perceived as vulnerable. Despite the vulnerability of the AI system, which triggered false alarms and disrupted routines in LTC settings, the actual use of the system in such a real setting meant that the AI company could use the data of triggered events to tweak the deep learning model, enhance its database, and improve its detection rate while reducing the false-alarm rate. From a political economy perspective, the behavior and daily lives of older residents—and in particular, a problematized and vulnerableized behavior that leads to an alarm by the AI fall detection system—become an immaterial commodity that creates significant economic value for the AI development company. Thus, for the providers of this AI fall detection system, being able to constantly increase a database of actual events means that they are also able to constantly increase the value of both their AI system and their overall company. With this article, we thus also contribute to a critical sociology of AI and a critical political economy of AI, which has long discussed the accumulation of immaterial commodities through data (Fuchs 2011) or through the scraping of the net to build immense image databases for machine learning (Crawford 2021). Our contribution is thus an empirical analysis of the exploitation of immaterial commodities by vulnerableizing a group, and particularly their behavior, while at the same time collecting their scarce data to create value.
Our key takeaway messages are hence twofold. On the one hand, they concern the political economy of data and the role of older adults as technology users in this political economy. Enhancing the active role of older adults in technology development and (even more importantly) implementation arises as a key issue as AI systems are increasingly developed and implemented to monitor, classify, and surveil older adults. On the other hand, our contribution highlights the need to further think about human-technology relations not in terms of intervention but in terms of shared vulnerability. Highlighting vulnerability as a more-than-human phenomenon enabled us to make visible how the vulnerability of some (particularly older adults) is hyper-visible while the vulnerability of others (particularly AI systems) is black-boxed and remains unseen. Focusing on these instances of shared vulnerability hence offers new ways of conceptualizing human-technology relations that go beyond seeing technology as instruments and older adults as problems to be solved (Peine et al. 2021) and, rather, make visible the caring practices that exist between technologies and older adults.
Finally, our research relied on a multiple-perspective qualitative interview design (Vogl et al. 2018), which allowed for a complex analysis because multiple actor groups and their views and perspectives were included. Yet in our study, we nonetheless had to limit the actor groups mainly to developers, carers, and LTC residents. The inclusion of other actor groups, such as residents’ relatives and care home upper management, was not possible due to our limited resources and lack of access. These additional perspectives, however, would have significantly contributed to our findings and the analysis of the practice of vulnerableizing older adults through AI systems and commodifying such a projected vulnerability to increase value. Indeed, the role of the care home providers in particular remains largely unexplored, and it is unclear to what extent their policies and the (economic) operation of their care homes contribute to the observed practices of vulnerableization and exploitation. Future research can certainly add empirical and analytical clarity in this regard.
Acknowledgements
We acknowledge support by the Open Access Publishing Fund of Karl Landsteiner University of Health Sciences, Krems, Austria.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Work on this article has been funded by the Vienna Science and Technology Fund and the State of Lower Austria through Project ICT20-055 (Grant ID: 10.47379/ICT20055).
