Abstract
In recent years, the AI industry has shown growing interest in addressing challenges associated with ageing. A wide range of devices and digital artefacts have been developed to support the health needs of older adults, enhance their mobility, and improve the conditions of older workers. However, discourse on ageing in the broader tech industry—and, by extension, in AI—often adopts a paternalistic stance and relies on harmful tropes about older users. In this paper, we analyse corporate communications in the AI industry about ageing, on the premise that discourse shapes how technologies are designed. The analysis was conducted between January and July 2024, focusing on global companies that develop AI technologies for healthcare, mobility, and recruitment relevant in the European context. We found that dominant narratives about older people legitimise techno-solutionism. By framing older adults as a societal burden and overlooking their well-being, these narratives make technological “fixes” appear socially acceptable even when they offer insufficient support.
Introduction
The importance of discourse is well established in the study of oppression of, and discrimination against, marginalised populations. In the context of ageing, policy discourse analysis has examined how digital inclusion policies frame older age (Carlo & Sourbati, 2020) and entrepreneurship-related narratives surrounding inclusion and exclusion (Stypińska, 2018). Generalist media has been shown to frame older people as vulnerable and burdensome (Amundsen, 2022). Analyses of digital media and technology advertising have shown that older adults are often portrayed in minor or peripheral roles, lacking positive attributes (Ivan et al., 2020; Ivan & Loos, 2023). Discriminatory practices in the industry's technological innovations are well documented (Loos et al., 2021).
Despite these contributions, empirical research on corporate discourse about older users in the AI industry remains scarce. This gap is notable given its relevance for policy (Gallistl et al., 2020), the growing number of older people using technology alongside a limited understanding of their needs (Peine et al., 2014), and the potential of such discourse to shape technologies through ageism and design bias (Mannheim et al., 2023). To the best of our knowledge, most existing analyses rely on interviews or focus groups (e.g., Xavier & Do Nascimento, 2023), while document-based work has centred on policy, media, and advertising rather than public-facing corporate communications on AI-related topics (Savanta ComRes, 2020).
We focus on documentary evidence from AI industry practice. Our primary research question is: how are older people portrayed within corporate discourse in the AI industry? Furthermore, given that ageist depictions are the norm but ageism is widely disregarded and deprioritised in research, industry, and society (Officer & de la Fuente-Núñez, 2018), we then specifically ask: how, and to what extent, do tech discourses in healthcare, mobility, and recruitment reinforce age stereotypes? In response, this paper presents a narrative that emerged from our findings, connecting ageism with a well-known feature of tech industry discourse: techno-solutionism.
Theoretical Framework
For this study, we draw on the rich body of work within critical gerontology studies, focusing on the representation and construction of age, as well as critical technology studies on techno-solutionism. Critical gerontology is a research programme that applies a critical lens to the study of ageing (Katz, 2000, 2003). In addition to being a biological datum, age is socially constructed: throughout one's lifetime, specific roles are expected to be fulfilled and activities performed (Baars, 1991). Age has been constructed differently throughout history and in different cultural and geographical contexts. Here, we refer specifically to how age is constructed in English-language corporate communications from internationally operating AI firms.
One of the ways age is constructed is through discourse: older people are described through, and treated by, a set of familiar tropes. In an English-speaking, Western context, older people are typically constructed as lonely (Katz, 1996), a burden to society (Burema, 2022; Graham, 2022; Meisner, 2021; Soto-Perez-de-Celis, 2020; Swift & Steeden, 2020), and in need of care (Duh et al., 2016; Willard et al., 2018).
These tropes have to be understood within a capitalist, heteronormative context that constructs the life course as a series of phases where individuals are expected to produce and reproduce. Regarding production, life stages are typically divided between education, active work life, and retirement (Kohli, 1988). These phases are tied to age milestones defined by social expectations, such as entering the workforce, having children, and retirement (Hooyman et al., 2008). Ageist tropes such as the presumed inability to learn, social isolation, or being a burden stem from a life-course narrative that, once older adults finish their reproductive and working roles, renders them expendable within the social order. Visual representations of these life phases also play a role in reinforcing such discourses (Loos & Thijssen, 2022).
In this paper, tropes, as representations of age, serve as a starting point that informs our use of Critical Discourse Analysis (Section 3) and orients the focus on ageist discourse in Section 4. For this discussion, it is essential to note that, since “older” and “retired” are often treated as synonymous, those who design technologies professionally are typically not perceived—and do not perceive themselves—as older. They are, therefore, designing for an imagined “Other” which, as Hall (1997) explains, reinforces difference from the reference group through broad generalisations and stereotypes that reveal how power operates within culture. Thus, how those involved in the design process imagine this “Other” exposes how social constructions of age emerge and how partial reliance on familiar tropes persists.
Techno-Solutionism
Within critical technology studies, techno-solutionism is considered an ideology that recasts “all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimised – if only the right algorithms are in place!” (Morozov, 2013, p. 5). The term is closely related to the “technological fix”, coined by Weinberg (1966), who argued that modern social problems could be solved by technology. This idea gained traction in the tech industry and “is now implicit in modern entrepreneurial culture” (Johnston, 2020). Over recent decades, the tech industry has altered not only the way we communicate and produce, but also the way we imagine and expect “what technology more broadly can actually deliver” (Mitra et al., 2023, p. 1). Moreover, the “techno-solutionist innovation paradigm (…) seeks to connect trust in automation with human ethics” (Pink, 2022, p. 48), and AI technologies have reinvigorated solutionist belief (Nachtwey & Seidl, 2024, p. 107). Academic work, by contrast, has raised substantial criticisms of such technological determinism (Wyatt, 2023).
Lindgren and Dignum (2023) present several arguments with which to assess techno-solutionism. Solutionist ideology views complexity through a reductionist lens, “seeing socio-political issues as puzzles that can be solved, rather than as problems that must be responded to in a multitude of potential ways” (Paquet, 2005). It evades the communicative framing of problems, even though the “discursive articulation of problems matters just as much, sometimes even more, than how they are resolved or not” (Laclau & Mouffe, 1985). And it diminishes the symbiotic interdependence between society and technology, a neglect “that can make its believers blind to the fact that society and technology are mutually shaped” (Šabanović, 2010). As a result, solutionism can “move down a very dangerous path if we see technology as something that can ‘fix the bugs of humanity’” (Lindgren & Dignum, 2023).
AI bias constitutes a problem that “remains endemic across technology processes” (Schwartz et al., 2022). We recognise that AI bias can perpetuate and amplify existing societal stereotypes; Ferrara (2024) calls this “the butterfly effect in AI”. In particular, this effect exacerbates age stereotypes and ageism, which are consequential because they shape how people interact with one another and become structurally embedded in institutions and digital systems (Rosales et al., 2023). Indeed, ageism is embedded in the design, development, implementation, and data of AI systems (Chu et al., 2023; Stypinska, 2023). Neves et al. (2023, p. 1280) coined the term “sociotechnical ageism” to highlight the assemblage of social, technological, and physical/material contexts and processes beyond the “digital” that must be considered to better understand ageist phenomena. Accordingly, a critical examination of AI's potential biases is imperative (Bird et al., 2023), particularly age bias in technological solutions for recruitment, healthcare, and mobility—the areas of focus in this study.
Methodology
This study employs Critical Discourse Analysis (CDA) (van Dijk, 2015) to investigate discourses on age in the AI-tech industry across three areas of study: healthcare, mobility, and recruitment. CDA is a research tool used to critically assess social discourses, demonstrating how they either reinforce or challenge social structures and power in society (van Dijk, 2015, p. 467). It enables the analysis of how discriminatory or stereotypical discourses and metaphors are used to describe ageing and older people. Consistent with reflexive traditions in CDA and critical gerontology, we apply CDA as a contextual and interpretive practice. The focus, therefore, is on uncovering how power relations and ideologies shape discourse. We began by defining digital ageism in the tech industry as the social problem under study, and then selected three sectors and the corresponding documents for analysis. We then identified the dominant recurring age-related tropes within these documents. Finally, we collectively interpreted them through the lens of critical gerontology and critical technology studies, until reaching a consensus among the authors.
The three focus areas were chosen based on two criteria: societal relevance and data availability. Healthcare, mobility, and recruitment are at the forefront of the development and use of machine learning tools, whose applications intersect meaningfully with age. Other rapidly growing areas, such as credit scoring and generative models, although relevant from a societal perspective, produce less written corporate content and were therefore excluded from the analysis.
Data collection and interpretation of documentary sources were conducted between January and July 2024. We analysed web content related to AI-based products and services relevant in the European context, which included discourse potentially relating to older age, either explicitly or implicitly. We reviewed a corpus of 271 documents (174 for healthcare, 60 for smart mobility, and 37 for recruitment) and analysed 179 (116 in healthcare, 43 in smart mobility, and 20 in recruitment). These figures are indicative rather than exhaustive. Given the breadth of media and the diversity of communication styles across industries, we allowed for individual judgment in the choice of texts to analyse, and each author stopped when saturation was deemed to have been reached, that is, when no substantially new tropes or framing strategies were emerging. By “document”, we mean any kind of text used in corporate communication. We analysed documents in the following categories: press releases, ethical guidelines, web copy, mission statements, strategy documents, white papers, reports, case studies, product descriptions, and marketing material. The documents varied in length and impact; of course, the length of a document does not correspond to its semiotic weight, as a short but very well-known slogan can have more impact on discourse than a lengthy report. We did not conduct a systematic review because our aim was not to exhaustively map the field, but to identify recurring discursive patterns in influential industry communications. The selection thus prioritised semiotic richness and discursive salience over comprehensiveness.
For the healthcare domain, we began with an initial corpus derived from five major tech companies (Amazon, Nvidia, Google, Meta, and Microsoft), whose health ventures set the dominant industry narratives. Documents were sourced from public-facing materials, including press releases, blog posts, product pages, and strategy documents. We did not restrict inclusion based on explicit mentions of age at this stage, as we aimed to understand broader framings of ageing and care. Following this, we expanded our corpus by including startups working in AI-powered healthcare, with a focus on the explicit framing of older age. Selection was conducted through purposive sampling, identifying startups via databases such as Crunchbase and industry media, including Sifted, followed by a manual screening of corporate websites. The presence of discursive constructions of ageing in marketing, product descriptions, or mission statements determined inclusion. We reviewed 174 healthcare-related documents and conducted a more in-depth analysis of 116 of them. These included various text types, such as press releases, white papers, blog posts, and case studies.
For mobility, we identified six key innovative mobility technologies (driverless cars, navigation systems and devices, smart parking, public transport prioritisation, car sharing, and automated lane keeping systems). The first set of companies developing these technologies was identified through purposive sampling, and the sample was then expanded through snowball sampling, following mentions in corporate literature. A total of 60 documents were identified, including web copy, press releases, reports, white papers, product descriptions, and blog posts; 43 of these documents were reviewed in depth. To situate these technologies within a broader landscape, we examined context documents, including an EU policy briefing on smart cities and urban mobility, as well as specific documents related to relevant local projects.
For recruitment, we analysed 37 popular companies providing recruitment platforms that significantly shape human resource management. The selection criteria were as follows: first, we selected leading companies (such as IBM, Cornerstone, Oracle, or LinkedIn); then, we included AI recruiting solutions promoted by prominent software platforms, sourced from Forbes, SelectSoftware Reviews, and the G2 Software Marketplace. Text on the landing page of the platforms’ websites and on the pages dedicated to the selection process was analysed. We primarily considered pages that included the terms “age”, “hiring”, and “bias” in the description of features or functionalities. However, we also included pages implicitly related to the older population and to recruitment diversity, equity, or inclusion values, including other wording or images related to these topics. Approaches to mitigating ageism and ethical concerns, including government regulation, organisational standards, and technical due diligence (Hunkenschroer & Luetge, 2022), were considered during the analysis.
Findings: Ageist Tropes in AI Industry Discourse
This section provides an overview of the representations of age in our data, leading into the analysis of techno-solutionist narratives. The discourse surrounding ageing, particularly within healthcare, mobility, and recruitment, often revolves around familiar ageist tropes that are sustained and then purportedly solved by advances in AI. By ageist tropes, we mean, in alignment with the literature in critical gerontology studies, recurring narrative elements that portray older people in a negative light, often informed by stereotypes: older people are frequently framed, for instance, as undesirable, lonely, or burdensome. Such tropes circulate in social discourse, policy, media, and, as we argue in this paper, corporate communication. Addressing our main research question—“How are older people portrayed within corporate discourse in the AI industry?”—we find that they are routinely represented through established ageist tropes. Turning to the subsequent question—“How, and to what extent, do tech discourses in healthcare, mobility, and recruitment reinforce age stereotypes?”—our findings show that the widespread use of these tropes reinforces stereotypical views of older people, and that the tropes are not only pervasive but take on distinct forms across sectors. Before examining the findings from each domain, we outline the general patterns of these representations and their broader implications across discursive fields.
One of the most persistent tropes identified in our analysis is the depiction of older individuals as inherently vulnerable and frail. This portrayal is deeply rooted in societal attitudes and is often reflected in marketing technology products and services. In healthcare, this vulnerability is emphasised with a focus on physical decline and the increased risk of medical conditions, suggesting a near-inevitable deterioration of health with age. Technologies such as AI-driven monitoring systems and predictive health analytics are marketed as essential tools for managing these risks, reinforcing the notion that ageing is a problem that requires a technological solution. Research such as that by Ayeni et al. (2022) critically examines how such narratives not only oversimplify the ageing process but also fail to recognise the diversity and capabilities of older adults.
The trope of dependence and burdensomeness is closely linked to the trope of vulnerability, portraying older adults as reliant on others for daily activities and decision-making (Jahn et al., 2013). In technology development, older people are often perceived as “laggards” (Loos et al., 2021), not contributing to the AI development process, despite being “actors who use, modify, and sometimes produce technologies” (Peine & Neven, 2019). The association between burdensomeness and vulnerability has also been noted across the wider literature (Amundsen, 2022). In the mobility sector, this trope manifests in the promotion of AI-enabled devices and vehicles that promise to compensate for perceived deficiencies in older individuals’ ability to navigate independently. The narrative suggests that older adults would be unable to participate fully in society without technological intervention. Such framing not only marginalises their agency but also overlooks the potential for environments and products to be designed inclusively from the outset.
The trope of loneliness is one of the most culturally pervasive, cutting across both historical and contemporary discourses about ageing. Older adults are often perceived as socially isolated, frequently as a direct consequence of the previously mentioned vulnerabilities and dependencies. In the context of recruitment and workplace integration, this can lead to biases that discourage the hiring or retaining of older employees, presuming them to be out of touch or less capable of teamwork and adaptation. AI-driven social platforms and communication tools are frequently offered as solutions to this “problem”, suggesting that technology can replace or enhance human connections for older adults (Jentoft, 2023).
Each trope both reflects broader societal biases and operates as a rhetorical device that positions AI as a necessary remedy for problems those narratives help construct. Such tropes cannot be mapped cleanly onto the “third” or “fourth” age (Gilleard & Higgs, 2010). Although they sometimes lean towards fourth-age connotations, in practice they are applied as standalone markers that ignore the broader context of people's lives and, at times, are even attached to third-age notions—such as being a pedestrian or an older worker.
Smart Mobility
Within mobility-related discourse, older people are framed as burdensome in terms of their purported inability to use technology, limited mobility, perceived danger on the road, and increased healthcare needs, particularly when these factors intersect with mobility issues.
Being purportedly unable to use technology pushes certain companies to produce products that allow older people to delegate transport to others. Uber allows younger relatives and carers to arrange travel for older people; the product is not targeted at older adults directly, but at their support network, reinforcing the perception of older people as dependent and burdensome to others: “Don’t worry about not being able to commute with your friend or family who requires assistance, you can make use of the GPS tracker on the Uber app to ensure that the rider has been picked up, is on route, and dropped off.” (Uber, 2018, section “Tips on Ordering an Assist ride on Uber”, para. 1)
Limited mobility is often framed as a city or community-wide problem to be addressed through technology. In this framing, older adults appear as a group whose mobility is constrained by infrastructure gaps, and the response, as articulated by Nokia, emphasises technology-centred interventions to fix those gaps (Nokia, n.d.).
Healthcare and mobility intersect in the need for older people to access health facilities, be reached by formal or informal home care providers, and be able to move around. The technological fixes offered for these issues are driverless cars and car-hailing apps, with remote control facilities designed to compensate for impairments that make driving more challenging. “For those who cannot or choose not to drive because of age, infirmity or any other reason, we are working to develop automated driving technology that will allow a vehicle to drive on its own without human oversight or fallback responsibility. We call this eventual capability Toyota Chauffeur.” Toyota (2020, p. 11)
Older people are often framed as dangerous road users in two different ways, both related to health, but with different outcomes based on whether they are framed as vulnerable pedestrians or incapable drivers. When framed as vulnerable pedestrians, older people are described as obstacles, which may cause accidents simply by being on the road. While never explicitly acknowledged, the framing of older people as obstacles becomes apparent through their juxtaposition with other categories of people who are typically framed as potential victims of road accidents: disabled people and children. When corporate materials portray older adults as less capable of driving safely—e.g., more nervous, prone to slower responses, or facing age-related impairments—they often position technology as the solution. In the case of autonomous mobility, the promise is the substitution of the driving task for those who “can’t drive”, explicitly including older adults, who are grouped alongside “blind” or “disabled individuals”: “Autonomously driven vehicles could also help people who can’t drive—whether elderly, blind, or disabled—to get around and do the things they love” (Waymo, n.d., section “Why is Waymo working on fully autonomous vehicles?”, para. 2). “For elderly drivers who are not confident in their responsiveness, the system can offer various support including operational assistance. Our next-generation driver-assistive technologies will enable all drivers to drive with peace of mind” (Honda Motor Co., Ltd., 2021, section “Confidence, in Addition to Automation, is Needed to Liberate People from the Risk of Collisions”, para. 8)
A commonly cited figure is that road traffic injuries are the leading cause of death for people aged 5–29 (World Health Organisation, 2023). Corporate materials frequently highlight this to motivate urgency. For instance, an article on the Yunex Traffic website states: “A shocking number of road accidents still occur worldwide. Young and vulnerable road users are often particularly affected. Hard numbers show the sad reality: 5–29 year olds face road traffic injuries as the leading cause of death.” (Yunex Traffic, 2024, section “The sad language of accident statistics”, para. 1)
The age range, 5–29, is presented in a pull-out quote; the two numbers alone occupy as much space on the page as the rest of the text. The death of younger people is considered a strong enough motivator to be picked out of any other negative consequence of poor road use. When older people are cited as potential victims, they are part of a string of markers, which usually includes disabled people. Whether older people are perceived as obstacles or potentially dangerous to others, the framing remains the same: they constitute an unwelcome presence on the road, which needs to be managed.
Healthcare
Narratives in AI technologies for healthcare frame older age, medically, as a malaise, and socially, as a burden.
The framing of older age as malaise is pervasive in the industry, starting with big, influential players such as Google. Google runs a health research initiative whose findings inform new product development in Google Health and improvements to existing products. In a blog post on a new deep learning model developed to predict an individual's age from retinal images, the researchers define ageing as follows: “Aging is a process that is characterized by physiological and molecular changes that increase an individual's risk of developing diseases and eventually dying” (Ahadi & Carroll, 2023, para. 1). This framing exemplifies a biomedical reduction of ageing, which flattens the experience of old age into a condition defined primarily by risk and degeneration. It reflects a broader industry trend that casts older age as a pathological deviation rather than a phase of life.
As nascent startups adopt Google's models and APIs, such as Google Cloud's Healthcare API, they also inherit similar medical, disease-oriented definitions of ageing.
The medicalisation of older age paves the way for technology-enabled surveillance within healthcare. Older people are then deemed in need of being surveilled, monitored, and tracked. As Chu et al. (2023) argue, AI systems are frequently shaped by embedded age-related bias, and the assumption that older people require constant monitoring exemplifies this sociotechnical prejudice. Their review highlights how ageism is not only reflected in algorithms but also in the institutional practices and narratives surrounding their use.
Fall detection is a good example of medicalised surveillance. Older people are often portrayed as frail and prone to wandering; therefore, they require monitoring through camera systems, motion-sensitive lamps, and AI-powered sensors installed on the floor to ensure their safety. Floor Motion, an AI-powered fall detection firm, promises to monitor the gait of older individuals and notify families of any changes in their walking patterns or movements within their living area. Their website states that their “system learns what is normal for the resident and alerts the family when something is not normal” (Floormotion, 2024, para. 3). Another startup, SensFloor, takes detailed surveillance a step further, priding itself on “transforming your entire floor into a giant touchpad”.
The other prominent narrative within the AI healthcare industry portrays older people as a social burden. This can be seen in applications and services from big tech firms, such as Amazon's Alexa Together, a subscription service supported by Alexa-enabled devices, including Amazon's Echo, and start-ups like ElliQ, which produces companion robots marketed as providing companionship and care for older adults.
Technological solutions such as Amazon's Alexa Together are marketed around the notion of providing peace of mind, implying that older adults, as a group, are defined by dependence, loneliness, and vulnerability and need constant oversight and assistance. This tool, along with others like it, is marketed as providing support through voice commands and alerts. The underlying message is clear: older individuals are seen as incapable of managing independently, thus requiring constant technological oversight and intervention to maintain a basic quality of life. As Rosales et al. (2023) note, this digital ageism becomes structurally embedded in systems that present themselves as neutral or helpful but perpetuate a view of older age as inherently deficient.
As an AI-powered caregiving service, Amazon's Alexa Together promises “peace of mind for you. Independence for them” (Amazon, n.d., “Introducing Amazon Alexa Together” heading). Yet while promising more independence for older adults, the service simultaneously promotes the discourse that older people cannot truly be independent even with the device Amazon offers: it includes a feature called “activity feed” through which the caregiver or family member can “know how they’re doing at a glance”, since with “Alexa Together, you can create activity-based alerts that let you confirm your loved one's well-being with a glance at your phone. It helps keep you in the loop” (Amazon, n.d., section “Activity Feed”, para. 1–2).
The burdensomeness of older people extends to their social needs. ElliQ, “an AI-powered companion designed to support and accompany older adults on the journey to age independently, while reducing loneliness and isolation”, promises “Alleviating loneliness & empowering independence” (ElliQ, n.d., FAQ/hero tagline). Although offering health monitoring and physical activity videos, ElliQ distinguishes itself from other AI healthcare startups by being a conversational AI companion that can initiate and maintain detailed conversations with older adults. The subtext here is that, thanks to technology, families do not have to worry about keeping older people company.
Recruitment
So far, we have examined discourse within fields where ageism is rarely acknowledged, confronted, or addressed. Recruitment constitutes an interesting counterpoint because age bias is a known issue, and AI is often, as presented in many of the documents we reviewed, touted as a solution to it. In smart mobility and healthcare, AI is deployed to solve problems that can be age-specific or related to ageing. Issues in recruitment, however, are explicitly concerned with age discrimination.
As attested by statements such as “Eliminating bias against older workers is not only the right thing to do, it's good for the economy” (BambooHR, n.d., section “Ageism”, para. 1), age discrimination in recruitment is framed as an ethical, organisational, and cultural issue and as a financial advantage. The DEI (Diversity, Equity, and Inclusion) framework, well established in AI companies, is framed as motivated by financial incentives: “Inclusive teams are over 35% more productive” (Manatal, 2023, “6 Key Diversity Recruiting Metrics to Improve”, para. 1). This focus on productivity in the AI recruiting industry therefore seems distant from a human rights perspective (Hunkenschroer & Kriebitz, 2023).
When dealing with the narrative in AI applied to recruiting, minorities in general, and older people in particular, are framed as vulnerable. The vulnerability trope is present in several phases of the recruitment process, including sourcing, screening, and interviewing.
Vulnerability emerges in the discourse around sourcing, specifically in finding suitable candidates for a job post, when it is assumed that there are equal job opportunities in the labour market. LinkedIn offers HR managers a chance to “find qualified candidates quickly on LinkedIn's global network of over 1B professionals” (LinkedIn, n.d., “Recruiter features”, para. 1). However, older individuals (over 55 years old) represent only 3.8 percent of LinkedIn users (Dixon, 2024), critically limiting the visibility of older candidates on the platform. Furthermore, conceiving of job seekers as incomplete candidates who must be fleshed out somehow, some companies promote a profile enrichment feature to gather more information about candidates: “Within a few clicks, you’ll get to know candidates beyond their CVs” (Manatal, 2023, “6 Key Diversity Recruiting Metrics to Improve”, para. 4). This feature enables human resources managers to automatically collate personal data published by users on external websites or social media platforms, thereby facilitating a more comprehensive understanding of the candidate than a resume analysis provides. In this framing, how candidates present themselves becomes irrelevant, because AI completes a candidate's profile according to recruiters' needs. This functionality may compromise candidates' privacy and expose job seekers, for example, by utilising facial recognition and social networking platforms. Finally, the trope of older adults as technologically vulnerable is prevalent in AI recruiting discourse. Recruitment companies’ blogs are replete with tips on crafting a strong resume and achieving success. This self-help approach shifts the burden onto the candidate, who is expected to adapt to the new technological landscape.
Vulnerability in screening concerns how filters operate across several steps of the process, where potential candidates may be affected by age bias or stereotypes. Companies claim to offer innovative, user-friendly screening solutions that are free of age bias. These include, for example, the anonymisation of resumes to automatically remove personal or demographic information, such as real names, photographs, or genders (Greenhouse, 2019). Vulnerability in interviewing refers to the AI solutions that companies promote to conduct fast and efficient interviews with candidates. Examples include AI video interviews that analyse psychological aspects of candidates (Interviewer.ai, 2021); the anonymisation of evaluation processes to prevent team members from seeing “each other's evaluations in selected stages” of recruitment and avoid influenced opinions (Recruitee, n.d., “Fair evaluations”, para. 1); conversational interviews with chatbots (Teamtailor, n.d.); and the use of third-party applications like ChatGPT (BambooHR, n.d.) to write, analyse, and engage with candidates.
Utilising the aforementioned functionalities in sourcing, screening, and interviewing without critical adoption and human oversight can promote unconscious bias against older adults. First, since these features are sometimes optional or excluded from specific plans, their application is uneven, dependent on the decisions of system administrators and the company's financial constraints. Second, they are presented as new, exclusive, and innovative for the industry, yet have still to undergo thorough testing; consequently, errors may occur and new age biases could emerge. Third, given that individuals tend to delegate decision-making to recommendation algorithms when a choice is difficult (Green, 2022), a multiplier effect can occur. Finally, companies should provide transparent information on how their recruitment tools work, yet they often fail to do so. Without system transparency, users cannot assess whether AI-guided recruiting tasks are appropriate and consistent with the promoted inclusive discourse.
Discussion
In this paper, we have argued so far that corporate communication in the field of AI frequently embeds and upholds narratives about old age that are also found elsewhere: older people are often framed as vulnerable, burdensome, and frail. At the same time, some ways of constructing old age are industry-specific: they intersect with the particular product being developed, both in the general sense of digital technologies and through specific applications that are particularly relevant to older age, or whose design varies depending on age.
Narratives surrounding age and technology are well-established: across the board, younger people are depicted as tech-savvy, adaptable, and able to learn new technologies; older people are depicted as less savvy, or even as needing support to use new technologies. In this paper, we have focused on how corporate discourse frames technologies as fixes for problems that affect older people, sometimes intersecting with the tech-ineptitude narrative described above.
Issues affecting older people are, in other words, framed as amenable to technological fixes. The solutionist attitude of reducing social problems to thin, focused interventions (see the section on techno-solutionism) becomes less questionable in the context of framing older people as problematic, vulnerable, and disposable. In the following, we will describe how this operation is carried out in the discourse.
The general narrative operates in three moves: (1) framing the problem, (2) constructing the user, and (3) rhetorical payoff in terms of selling points. At each stage, different narratives and rhetorical devices are deployed, making it acceptable and even desirable to adopt a minimal-effort approach to addressing age-related social issues, i.e., a techno-solutionist perspective.
How Ageist Narratives Uphold a Techno-Solutionist Attitude
Framing the Problem: Older People as a Scary Horde
Some version of the following statement is a common sight at the start of documents about technology and older age: “Driven by declining fertility, increased life expectancy, reduced birth rate, and migration, the global population aged 60 years and older is growing quickly and is estimated to reach over 1.6 billion by 2050. Given the projected growth of this population, which will occur in virtually every country, governments, health systems, and the private sector should prepare to address the older population's needs.” World Economic Forum (2021, p. 4)
This passage strongly emphasises quantity: there is a large number of older adults, which, for reasons analysed in the next section, is considered a problem in and of itself. Before delving into the negative framing itself, we can gauge the effect of statements such as the one above by noting that alarmist claims about demographic change, when referring to children and young people, tend to run in the opposite direction: expressing concern about a dwindling younger population and lamenting low birth rates.
Depicting a group as alarming carries an affective charge, inducing anxiety and concern in the reader. It is no coincidence that the trope of the scary horde is extensively employed in cultural artefacts such as zombie films and video games to the same effect. Evoking a state of anxiety when presenting a problem to be addressed paves the way for overblown or urgent solutions. The scary horde, again as clearly shown in the limiting case of the zombie, is inherently othering (Lizardi, 2009), dehumanising, and can, therefore, be leveraged to justify social interventions, for instance, in immigration policy (Huot et al., 2016).
The Construction of Older Age
To frame a large group of people as a scary horde, as opposed to, say, a festive crowd, specific qualities ought to be attributed to its members. Large numbers of zombies are concerning because of their known tendency to eat human brains. In the case of older people, their number creates concern due to the role they are expected to play in society. Within the technological discourse we analysed, two particular framings are relevant: older people as vulnerable or diseased, and older people as a problem to be fixed.
Vulnerability, from the point of view of society as a whole, puts older people in a position of needing care: if they are too frail to take care of themselves, then younger people, as a consequence, ought to provide support. Hence, older people are constructed as a burden. At the same time, as we saw in the recruitment sector, older people are not deemed as able to contribute to society as everyone else; hence, tending to their needs carries, within this framing, no collective advantage. At best, older people are framed as superfluous; at worst, burdensome, or even dangerous, as in the case of mobility.
A counterpoint might help bring the above into further relief: if, say, older people were framed as wise counsellors, then the duty of care placed on the rest of society would not be seen as a burden, or something that needs to be alleviated. Contributing to society justifies being supported by society.
To illustrate the relevance of this concept to techno-solutionism, it is helpful to consider the distinction between illness and disease. Disease refers to a biomedical condition, while illness describes how a person experiences that disease within their specific circumstances and environment (Kleinman, 1988). In this respect, the social and cultural experience of a disease can differ radically from one place to another, even for seemingly uniform diseases such as cancer and Alzheimer's (Caduff & Surawy Stepney, 2020, December 17; Cohen, 1998). The ageist discourse equating ageing with disease and eventual death thus reduces older humans to mere biological organisms. As a further indication of the relevance of this discourse to the present question, it is helpful to note that the idea of ageing as illness is widespread in Silicon Valley and the broader tech community (Madanamoothoo & Schoch, 2024).
Even beyond the boundaries of care, ageist discourse and techno-solutionism share a similar reductionist attitude. The way AI solutions are presented in recruitment focuses on the selection of CVs, ignoring systemic ageism in the industry (Batinovic et al., 2023; Stypińska, 2018), where discrimination occurs mostly at later stages of the hiring process, as shown by lower callback rates for older candidates (Batinovic et al., 2023). In mobility, technological fixes similarly stand in for social interventions that would address gaps in healthcare and community services. The issue of loneliness, present in both healthcare and mobility, is treated in both cases as amenable to a quick fix by facilitating access for carers, with little consideration of the preferences and desires of older people themselves.
In mobility, when framed as obstacles, older people are implicitly reduced to their mere presence. Combined with the trope of burdensomeness, older people are treated as disposable bodies. The notion of the disposable body applies to people who are not deemed productive, whose bodies are therefore superfluous to the collectivity and, by not contributing, a burden to others. The concept was developed in the context of fatness studies and applies, in an intersectional manner, to other categories, such as migrants and poor people, who are deemed a burden because of their perceived low contribution to society and consumption of shared resources (Haney et al., 2021). Framing a particular group as disposable is, of course, dehumanising; it forecloses empathy and, most relevant to present purposes, allows the needs and wants of the group's members to be bracketed, leaving only the perception of the group as a problem.
It has to be noted, here, that old age, in terms of chronology, is defined in widely different ways depending on the field: in recruitment, definitions of older age start earlier, sometimes as low as 40, while in healthcare and mobility, older age tends to be defined as 65 and above. Such variation significantly shapes how older people are described and perceived across contexts.
However, those differences do not have a bearing on our analysis here, as the focus is not on specific framings, but on what those framings have in common, and how they are used in industry discourse to justify and promote specific ways of addressing age-related issues. In our view, the fact that a similar justification for the development of new technologies is both rooted in ageist discourse and present across sectors strengthens, rather than weakens, our claim.
Older people are, then, framed as a category that needs to be somehow handled or fixed. A distorted picture of their humanity is presented, reducing them to mere problems. Or, in other words, as a frequent target for techno-solutionism.
Justifying Technological Fixes
Simply suggesting that technology can be used to address a problem is not, in and of itself, an indicator of a solutionist attitude; claiming that technology alone, without other interventions, will solve it is. In a solutionist mindset, the crucial move is to argue that technology is a good enough tool to solve the problem at hand. The argument has two parts: the first is to establish that a problem exists; the second is to claim that technology is sufficient to address it.
Narratives surrounding old age do a great deal of work in constructing older people as a problem. As described above, older people are portrayed as dependent or vulnerable, dangerous, and discriminated against. The issues to be addressed are dependence, safety, and fairness.
In cases of dependence, solutions are proposed to facilitate the connection between older people and their caregivers, ranging from communication platforms to coordinated care tools. In healthcare and mobility, devices are developed to facilitate communication between older people and formal or informal caregivers. Communication can be reciprocal, where older people and caregivers exchange information, or it can be unilateral, where the older person asks for help, or the caregiver monitors the older person.
Considerations regarding safety are twofold: on the one hand, older people are often framed as potential victims (for example, in cases of falls in healthcare, or as pedestrian victims of accidents in mobility); on the other hand, they are sometimes perceived as dangers (when viewed as poor drivers). Technological fixes in the former area include surveillance technologies in healthcare and generic improvements aimed at reducing the rate of accidents, regardless of the identity of the potential victims. In the latter area, older people are specifically singled out as a danger, and interventions are explicitly aimed at reducing the risk they pose to others (e.g., driverless cars, sensors specifically developed to address how older people are perceived to drive).
Fairness, which is strongly present in recruitment, is particularly interesting because it acknowledges the existence of ageism and suggests technological solutions by developing systems that address age bias.
Across the board, older people are framed as vulnerable. Vulnerability takes different forms depending on the context: in the case of dependence, it means that older people require more assistance than others; in the case of safety, it informs a framing where older people can be vulnerable to accidents; in the case of recruitment, age becomes a reason for potential discrimination.
The issues above could be addressed through systemic or operational change. Doing so, however, would not present a business opportunity for the AI industry. Presenting older people as a burden and a problem to be fixed paves the way for the social acceptance of technological fixes in place of more complex, resource-intensive strategies. In healthcare, the complex needs of older adults are often reduced to mere dependencies, and interventions aim primarily to satisfy those dependencies, adopting a bare-minimum approach to sociability. Contrast this with how we, as a society, approach the needs of children: using exclusively technological means to educate and entertain children is widely frowned upon. In recruitment, rather than addressing systemic issues in industry practices, AI-based tools are presented as a fix, through claims such as “AI can considerably eliminate unconscious bias” (Recruitee, n.d.).
Technology is presented as the best tool because it is simple, efficient, and cost-effective. The way older people are constructed, as described above, paves the way for technological fixes. Once a group is framed as disposable, it comes naturally to seek the least-effort intervention for issues affecting its members. When presenting solutions to the problems above, efficiency and cost-effectiveness are brought forward for rhetorical support. At a system level, smart technology is framed as a cost-effective alternative to the large-scale interventions necessary to address complex issues such as isolation, healthcare needs, scarcity of public services, and bias in recruitment.
Recruitment platforms, for instance, are presented as transitioning from a time-consuming work culture to an automated hiring culture. This transition is facilitated by AI, which is touted as an all-encompassing solution that promotes two key objectives: first, a new business opportunity, namely greater productivity and efficiency; and second, an updated organisational work culture that addresses ethical considerations within the workforce. The techno-solutionist perspective in recruitment coexists with the Diversity, Equity, and Inclusion (DEI) business framework. In this context, much of the corporate material we reviewed addresses age discrimination primarily through a technological approach. Regarding age bias in recruitment, discourse rooted in a smart ideology, which is clearly business-oriented and geared towards simplifying and monitoring the workforce, appears less prominent than discourse based on ethics, including government regulations, organisational standards, and technical diligence (Hunkenschroer & Luetge, 2022).
At an individual level, smart technologies are marketed as guaranteeing peace of mind: for older people's relatives, who can provide support with minimal effort; for urban planners and policymakers, who can de-prioritise the needs of older people; and for recruitment agencies and HR departments, who can continue their day-to-day work without radical changes to their practices. An illustrative example of this strategy is the marketing for Together by Renee, an all-in-one AI health assistant. A marketing communication written by the company's PR manager in 2023 reads: “The potential impact [of the product] is enormous, especially for the nearly 100 million Americans over 50 living with more than two chronic diseases and the 53 million caregivers who often feel overwhelmed.” Business Wire (2023, para. 3)
While we do not have space to develop this here, the techno-solutionist attitude might even hide the assumption that technological fixes shield us from profound social and cultural change. If we collectively agree that the specific problems older people face are being addressed with technologies, we can avoid acknowledging ageism as a pervasive cultural issue.
Limitations and Need for Further Study
We recognise that the results presented here are partial, as they apply to a specific subset of corporate discourse on AI. We decided to focus on three sectors—mobility, healthcare, and recruitment—leaving out other, growing sectors such as generative AI, for the reasons specified above. It is also a time-limited study, examining content produced within a given timeframe. As the AI industry develops, further work will be needed to explore emerging discourse. The capacity of the researchers involved in this study also limited our sample size: larger, more systematic studies might uncover patterns we could not detect. Further study is therefore needed in other areas of AI development, ideally adopting a more systematic approach.
Conclusion
Our analysis of industry discourse around old age has revealed a connection between a widespread narrative in the tech industry, techno-solutionism, and the ageist framing of older people as vulnerable and a burden to society. The two narratives reinforce each other: techno-solutionism benefits from complex problems being deemed amenable to a technological fix. For technological fixes to be socially acceptable, the problem in question must be reducible to a bare-minimum solution, excluding extensive interventions in wider systems of oppression. Indeed, techno-solutionism can serve as a shield against large-scale reform by providing plausible deniability: those in leadership roles can point to the implementation of a solution, thereby preventing backlash.
In this sense, the case of ageism is not unique: other marginalised groups whose problems are deemed fixable with bare-minimum interventions, such as fat, migrant, and incarcerated people, can also be used as a pretext to develop insufficient, potentially harmful, yet lucrative technologies. Framing some people as disposable is a convenient strategy in certain pockets of the tech sector.
We are not claiming that all innovation in design is grounded in a techno-solutionist attitude, or complicit in the oppression of marginalised groups. In fact, design scholars and practitioners are actively working towards anti-solutionist techniques and methods (Blythe et al., 2016; Richterich, 2024), including age-related work (Blythe et al., 2015). These approaches constitute an interesting prompt for further investigation. Here, we are simply detecting, acknowledging, and warning against an existing attitude, with the hope of providing conscientious practitioners with additional tools for reflexive work in their practice.
We are also wary of attributing intentionality; we view discourse as a social phenomenon that is largely independent of individual intention, hiding instead in unquestioned assumptions. Unearthing and explicitly addressing those assumptions leads to fairer and more effective social interventions.
Acknowledgments
The authors thank Justyna Stypińska, Maria Sourbati, Rüya Koçer, and Andrea Rosales for their feedback on different drafts of the paper.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by Volkswagen Foundation under grant number 9C565 + 9C565-1. The open access publication was funded by the WZB Berlin Social Science Center. The authors are grateful for the financial support that enabled the preparation of this publication. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Volkswagen Foundation.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
