Abstract
This article explores the historical ties between the digital welfare state and eugenics, highlighting how the use of data infrastructures for classification and governance in the digital era has roots in eugenic data practices and ideas. Through an analysis of three domains of automated decision-making – child welfare, immigration and disability benefits – the article demonstrates how these automated systems perpetuate hierarchical divisions originally shaped by ableist eugenic race science. It underscores the importance of critically engaging with this historical context of data utilisation, emphasising its entanglement with eugenic perspectives on racial, physical and mental superiority, individual and social worth, and the categorisation of data subjects as deserving or undeserving. By engaging with this history, the article provides a deeper understanding of the contemporary digital welfare state, particularly in terms of its discriminatory divisions based on race and disability, which are deeply intertwined.
Introduction
The digital welfare state is characterised by a prevailing narrative of rapid and significant change. The widespread integration of digital and data-driven technologies in social services and welfare administration is considered part of a ‘digital transformation’. This transformation involves modes of governance that reproduce established patterns of domination while introducing new configurations of socio-technical, institutional and discursive power (Redden, 2022). What makes the ‘digital welfare state’ particularly novel is its suggested ability to leverage digital technologies to automate processes, predict outcomes and rationalise state decision-making about the redistribution of state resources in ways that were not possible before, through the rapid analysis of large, complex datasets (often described as ‘big data’) (Iversen & Rehm, 2022). However, significant elements of the digital welfare state retain, and potentially revive, roots in government practices of the past. For example, there are notable historical continuities in how data infrastructures have been used to classify individuals and govern populations, often serving the interests of specific stakeholders and power structures.
Eugenics is one such historical thread that, as evidenced by the cases analysed below, connects novel technologies of digital governance with the very origins of the welfare state. By ‘novel technologies’ we refer to both conventional digital technologies and more recent automated decision-making (ADM) systems and forms of artificial intelligence (AI) that learn to make predictions based on past data with little human intervention. While in some ways novel, as illustrated below, these technologies are entrenched in well-established statistical methods of eugenic population management (MacKenzie, 1981). In fact, the history of data is closely intertwined with both the history of eugenics – its racist and ableist methods of population ‘management’ – and that of the capitalist welfare state (Eubanks, 2018).
Influential advocates of population management in the 19th and 20th centuries, including British demographer Thomas Malthus and the so-called ‘father of the welfare state’, William Beveridge, were drawn to the prospect of social engineering, and placed great faith in the statistical approaches of eugenic race science to address societal challenges while also serving the interests of the nation-state (Ray, 1983). In his influential 1942 report, Beveridge extensively incorporated eugenic ideas, proposing that ‘good stock should be allowed to breed while bad stock would be ameliorated through state intervention’ (Shilliam, 2018). Over time, the moral categorisation of the poor in religious terms was replaced by ‘objective’ statistical methodologies (Shilliam, 2018). For example, Charles Booth’s extensive survey work in late 19th-century London categorised residents based on income and living conditions, using social and biological data to redefine understandings of poverty. Booth’s work, while ground-breaking, perpetuated distinctions between socially ‘useful’ parts of society, the ‘dangerous’ segments of the working class and societal ‘undesirables’, discouraging the latter from reproduction (Pierson & Leimgruber, 2021). These categorisations marked the emergence of a new form of state biopolitics, where statistics and scientific knowledge consolidated elite power with the expansion of industrial capitalism and the British Empire (Hacking, 1992).
Disabled people, the poor and those from the lower classes, along with Black and Indigenous people, and people of colour, were all classified as groups in need of eugenic population controls. Eugenics reinforced inequalities around class, disability, race, ethnicity and Indigeneity (Levine, 2017). Extending into the farthest corners of colonial empires, eugenic reproductive controls, including medical sterilisation, were proposed, particularly for women deemed disabled, mad or morally monstrous, and in some countries remain in place under the guise of ‘protection’ (see Thompson, 2017). The asylum, enforced segregation, child removal and, where possible, population elimination, were deemed a rational response to sustain the future of the colonies, the nation and empire (see Soldatic, 2015).
Adopting a historical eugenics perspective prompts us to question new technological developments in AI, which, like the eugenic techniques of the pre-digital era, use statistical methods to rationalise, individualise and biologise problems of social origin (McQuillan, 2022). We are not the first to recognise these historical continuities. Disability scholars in adjacent fields highlight the troubling relationship between advancements in medical science, specifically those related to the human genome project, and the resurgence and normalisation of eugenics embedded within social policies (see Shifrer & Frederick, 2019). Similarly, critical AI scholars, including Tetyana Krupiy (2020) and Dan McQuillan (2022), have raised concerns about the eugenic implications of AI's categorisation of certain groups as inferior, unworthy or undeserving of essential social support, especially in those welfare states where neoliberalism has been most influential (Agüera y Arcas et al., 2017; Eubanks, 2018). Yet, within the extensive literature exploring the uses and abuses of AI, ADM and big data, there has been little focus on disability's positioning within the matrix of digital statecraft.
In this article, we explore the significance of eugenics as a context for the emergence of data-driven, algorithmic modes of government decision-making across three social policy areas: child welfare, immigration, and disability out-of-work benefits. These areas, we argue, have been central to processes of neoliberal state transformation and experimentation in digital statecraft. We are particularly concerned with the ways digital welfare states mobilise eugenic frameworks of deservingness, physical/mental inferiority, and economic worth to differentiate and designate dis/abled populations as targets for state intervention. Our approach combines retrospective analysis with a contemporary focus. It involves examining the links between present-day digital and data-driven technologies, on the one hand, and the historical eugenic strategies and interventions deployed by social reformers of the pre-digital era, on the other. The case studies demonstrate that digital tools such as ADM and AI simultaneously operate to ‘discriminate in allocating support for life’ while marking out populations for ‘slow death’ (McQuillan, 2022, p. 85; Berlant, 2007).
The article contributes to a growing body of literature, including within this special issue, that historicises, contextualises, and critiques the harms of digital technologies in the context of social welfare, particularly with the marked cruelties of neoliberal policy practices of rationalisation (Dencik, Hintz et al., 2018; Dencik, Redden et al., 2019; Eubanks, 2018; Keddell, 2019; Redden, 2022). Its key claim is that the digital welfare state, though not explicitly eugenic, perpetuates eugenic approaches to the moral and economic valuation of life. Drawing on concepts from disability studies more specifically, the article emphasises the inherent linkages between modern data practices and the historical construction of ‘disability’ as a category related to economic valuations of individual and social worth (see Grover & Piggott, 2015; Soldatic & St Guillaume, 2022; van Toorn, 2024). Through our analysis, we provide valuable insights that contribute to a more nuanced understanding within critical data/algorithm studies of the complex interplay between historical ideologies and contemporary state data practices concerning groups that experience profound marginalisation.
Before we continue, it is necessary to acknowledge the relative simplicity of the automated systems featured in our three case studies. Some of these systems, including simple rule-based decision trees, statistical benchmarks, and points-based scoring algorithms, do not meet the typical criteria for artificial intelligence, as they lack the autonomous learning capabilities associated with AI. However, they still play a significant role in decision-making processes, relying on predefined rules and algorithms rather than adaptive learning. The implementation of simple scoring systems for welfare eligibility and risk assessment shows how even basic data analysis procedures hold substantial power to shape people's lives. Despite their simplicity, these systems wield a profound influence on the distribution of society's resources and opportunities. Through the ranking and sorting of individual bodies, they determine access to welfare benefits, educational opportunities, employment prospects, and other crucial components of a flourishing life.
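The mechanics of such points-based systems can be sketched in a few lines of code. The following is a purely illustrative example: the attributes, weights and threshold are hypothetical, invented for this sketch, and are not drawn from any actual welfare system.

```python
# Illustrative sketch of a rule-based, points-style scoring system.
# All attribute names, weights and the cut-off below are hypothetical.

RULES = [
    # (attribute flagged in a case record, points added if present)
    ("unemployed", 2),
    ("prior_benefit_claims", 1),
    ("missed_appointments", 3),
]

HIGH_RISK_THRESHOLD = 4


def score(case: dict) -> int:
    """Sum the fixed weights for whichever attributes the case record flags."""
    return sum(weight for attr, weight in RULES if case.get(attr))


def classify(case: dict) -> str:
    """A fixed cut-off converts the numeric score into an administrative label."""
    return "flag_for_review" if score(case) >= HIGH_RISK_THRESHOLD else "routine"
```

The sketch illustrates the point made above: no learning occurs anywhere, yet the predefined weights and the cut-off alone determine who is sorted into which category.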
In the section that follows, we explore the historical origins of these early data systems, tracing their genealogy from their inception alongside the development of the field of statistics to their diverse applications within the realm of welfare state governance. We then proceed to examine our three case studies, offering insights into their unique features and implications, and the ways they exemplify broader trends in digital governance. The concluding discussion brings the threads together, emphasising the need for continued critical engagement with the digital welfare state's history of data utilisation and its imbrication with modern power relations rooted in eugenic views of racial and physical superiority.
Eugenics and statistics: ‘bred in the bone’
The application of mathematical techniques to social problems is not new to the world of welfare. As Wiggins and Jones (2023) note, ‘moral panics create new sciences’, and in the late 19th century Britain's elite were preoccupied with eugenic concerns that poverty, alcoholism, crime, and a decrease in the birth rate among the upper classes were threatening the stability of the British Empire (2023, p. 35). The resulting moral panic led to the creation of a statistical approach that offered not just the ability to identify societal problems but also the potential to solve them through an applied science of human improvement. Thus emerged the field of modern biometrics. At the centre of this new field were statisticians including Karl Pearson and Francis Galton – Charles Darwin's cousin – who introduced the idea of correlation and applied statistics to the study of heredity (MacKenzie, 1981).
Theirs was a ‘grand vision of statistical biology’, according to which biometrics held the potential to be applied in almost every field of study and ultimately serve as ‘the foundation for effective eugenics’ (Porter, 1986, p. 311). Biostatisticians measured heads, noses, heights, and gaits among other traits, in an attempt to demonstrate innate racial differences and advance theories of racial degeneracy. Their approach was reductionist. They viewed social and behavioural traits such as poverty or prostitution as genetically inherited – or ‘bred in the bone’ (Pearson, 1903, p. 207) – rather than socially conditioned. The core idea was that biological characteristics provide an impartial indication of an individual's fitness and value to society. Therefore, there was a requirement for scientific techniques that could quantify those attributes, and the field of biometrics emerged and flourished as a means to achieve these eugenic ends (Desrosieres, 1998).
In the early 20th century, eugenic approaches to social reform were widely embraced and dominant, not just in Britain but also in the United States and in settler colonial societies, including Australia and Canada (Mitchell & Snyder, 2003). Immigration and labour regimes in these colonial countries utilised a crude eugenic system of social stratification, sorting the undeserving classes from the genetically ‘superior’ migrants of value to the nation-state (Soldatic, 2015). Individuals considered ‘deficient’ in intellect, such as the ‘mentally ill’, ‘epileptics’ and the ‘feebleminded’, along with those judged to be deficient in character, including ‘criminals and the incorrigibly idle’, were among those excluded from the deserving category (Leonard, 2005, p. 208). Idleness was equated with ‘moral deficiency’, and people ‘incapable of producing their maintenance’ were likened to ‘parasites’ on the social body: the ‘industrial residuum’ (Webb & Webb, 1920 [1897], p. 785, quoted in Leonard, 2005, p. 208). Using intelligence tests as a ‘scientific’ method of determining feeble-mindedness enabled eugenicists to claim that their selection of targeted groups was not arbitrary: ‘persons from multiple marginalized groups, including those living in poverty, racial minorities, undesirable immigrant populations, and others could easily be pulled into the feeble-minded category’ (O’Brien & Bundy, 2009, p. 159). In other words, eugenic categories comprised individuals whose characteristics would today be considered cultural, socio-economic or psychological in nature (Ladd-Taylor, 2017).
Proponents of such eugenic ideas came from both the conservative classes and the left. Prominent Liberals in Britain believed that restricting procreation among ‘problem’ populations was necessary to the maintenance of nation and empire, including political figures such as Churchill who was an early drafter of the UK Mental Deficiency Act 1913 (Paul, 1984; Spektorowski & Mizrachi, 2004). Continental Europe witnessed diverse forms of eugenic population management. Similar to Nazi ideology, Swedish communitarian and ‘productivist’ socialism saw a role for the welfare state in shielding society from ‘unproductive anti-socials’, resulting in Sweden's transformation into a ‘eugenic welfare state of the fittest’ (Lucassen, 2010, p. 277). This form of eugenics differed to some extent from the German approach, as in Sweden the grounds for exclusion were established on social criteria – citizens’ moral character, physical and mental ‘defects’, and respectable lifestyles, for instance – rather than racial ones (Lucassen, 2010). Lucassen notes that Swedish social democrats adopted a veneer of pragmatism, consistently favouring ‘purely rational technical analyses, even if those implied radical solutions’ (2010, p. 274). This appeal to reason was also reflected in the aspirations of colonial administrators in Australia and Canada, as a way to scientifically justify their exclusion and, at times, the expulsion of non-white migrants through the drawing together of arguments associating biological fitness with criminality (see Hill Collins, 2019). Indeed, a common element across these variants was a shared enthusiasm for ‘applied science’ as the foundation of a eugenic project of rational social engineering. The idea that eugenics was rooted in ‘reason’ or ‘rationality’ became more pronounced in the modern era, as eugenics came to be seen as a thoroughly scientific endeavour.
Economic as well as political priorities were influential in steering eugenic social policies. Alongside the interventionist variants already discussed, there was also a strain of eugenic thought emphasising the primacy of competition in social relations. In this view, genetic selection was consistent with individual competition as ‘the pitiless law of life and the mechanism of progress via elimination of the weakest’ (Laval & Dardot, 2014, p. 34). Francis Galton, for example, was against any form of state intervention that would interfere with the so-called ‘natural order’ of society (O’Brien & Bundy, 2009). This perspective aligned well with the economic objectives of advancing Western capitalism, for it implied two things. First, for economies to thrive, competition must be allowed to flourish unimpeded, that is, without the helping hand of the state. Second, eugenics also served the interests of capitalism by aiming to produce a workforce that was suitable for industrial labour (MacKenzie, 1981), that is, bodies and minds that were not just physically and mentally ‘fit’, but also the most capable of productive work (Hill Collins, 2019). Eugenics thus became linked with the pursuit of national efficiency, and biometrics was repurposed to quantify people's usefulness, worth and deservingness within a capitalist framework of value (Norton, 1978). While socialist eugenicists were committed to the idea of a planned society, their critique of capitalism, according to MacKenzie, was ‘never generalised to those deeper features – the hierarchical division of labour’ (1981, p. 36). Social democrats, as noted above, embraced eugenics and its biometric tools as a means to engineer a population of productive, able-bodied (white) worker citizens, capable of maximising economic output and facilitating the accumulation of wealth under capitalism (Spektorowski & Mizrachi, 2004).
The spectre of eugenics in the postmodern neoliberal welfare state
O’Brien and colleagues contend that historical eugenics is still relevant for any social policies and programs that have the effect of limiting ‘procreative capacity’ (2009, p. 153). Disability scholars have also shown that eugenic anxieties regarding women's – especially poor Black disabled women's – reproductive capacities, are still present today. This is evident in patterns of coerced sterilisation of women with disabilities worldwide and the disproportionately high rates of child removal among BIPOC (Black, Indigenous and people of colour) populations, notably in settler colonial states like Australia, Canada, and New Zealand. Nonetheless, the legacy of eugenics extends beyond the realm of reproductive injustice (Saura, 2020). The term ‘soft eugenics’ refers to the systemic and subtle ways in which eugenic logics are reproduced without explicit measures like forced sterilisation (Meloni, 2016). It encompasses policies and practices that, whether by design or default, create social conditions inhospitable to human flourishing for the so-called ‘unfit or undeserving’. Michelle Murphy, in The Economisation of Life (2017), echoes this idea, suggesting that eugenic population control went beyond regulating birth rates. It was a project fundamentally concerned with reshaping nation-state governance, giving priority to national economies, and managing life, including the administration of welfare, with regard to life's capacity to contribute to the gross domestic product of the nation. For Murphy, economic valuations of life are infrastructural – meaning they are incorporated into the very bureaucracies, technologies, buildings, legal standards, material supports and governing practices that organise all aspects of life.
In his book, Resisting AI, Dan McQuillan (2022) argues that AI has a political resonance with soft eugenic approaches to the valuation of life in/by modern welfare states. For McQuillan, AI exhibits eugenic traits both in its underlying logic, as well as in its technical operations. In terms of its political-economic rationale, AI has emerged at a time of crisis for the welfare state, wherein the pressure to improve efficiency and reduce costs is high, making the use of AI and automation appealing to policymakers and administrators. ‘AI's promise of large-scale efficiencies’, McQuillan notes, ‘chimes with the way historical eugenicists “portrayed themselves as efficiency experts, helping to save society millions of dollars by sterilizing defectives so that the state would not have to care for their offspring”’ (Allen, 2001, quoted in McQuillan, 2022, pp. 91–2). AI, in other words, operates within the same moral framework as states that, in the interests of fiscal austerity, try to demarcate and assist only the ‘deserving’ while abandoning or excluding those considered ‘undeserving’ or without rights. But it does so in the name of rationalism. Governments’ justification for incorporating more technology, particularly technologies relying on algorithms and AI, leans heavily on arguments for standardisation and rationalisation, particularly the rationalisation of state bureaucracies under neoliberal reconfigurations such as austerity (Peeters & Schuilenburg, 2018). It is this stamp of rationality that allows the ideological aspects of these technologies to be depicted as ‘objective facts’. This is despite the fact that AI, inherently embedded in the conditions of its creation, operates within the confines of the data it is given, and therefore lacks capacity to transcend the broader societal prejudices on which its computation relies.
At a practical level, AI provides the technical means to operationalise eugenic frameworks of deservingness at scale. AI's core function is to analyse complex data and reduce it to a set of simple decision-making parameters, which can then be used to automate tasks, including administrative processes. This makes it highly compatible with eugenic logics of statistical categorisation, which, as discussed above, affect the lives of specific populations in particularly violent ways (Eubanks, 2018; McQuillan, 2022). Statistical categorisation is the precondition for state practices of what Dean Spade calls ‘administrative violence’ – that is, the way in which legal and administrative classification systems ‘distribute security and vulnerability at the population level and sort the population into those whose lives are cultivated and those who are abandoned, imprisoned, or extinguished’ (2015, p. 73). At stake here are the very conditions that make the continuation of life possible for all people, but especially those who are marginalised and stigmatised. These conditions, encompassing access to basic healthcare, mobility, personal security, and a decent standard of living, are key determinants in the distribution of resources and life opportunities, largely shaped by access to state welfare. They critically shape the cultivation of health and wellbeing, without which a flourishing life is not possible. AI's role in sustaining such violence is to enhance the categorisation and social ranking of individuals based on moral frameworks of value that ‘subdivide resources down to the level of the body, identifying some as worthy and others as threats or drains’ (McQuillan, 2022, p. 85). The potential consequences are thus not merely academic or theoretical but can manifest as life-threatening outcomes for those subjected to these technologies.
A disturbing mix: predictive analytics and eugenic theories of parental fitness
The spectre of administrative violence is perhaps nowhere more apparent than in the area of child protection. Since the 1980s, social work has undergone a process of rationalisation and automation driven in part by the use of structured decision support tools (Redden, 2020). These tools assist social workers in determining the need for further investigation and intervention in cases involving vulnerable children. Early tools included structured risk assessments through which data were collected to identify key factors associated with recurring child protection involvement (Gillingham, 2019). Children who displayed these particular characteristics were considered higher risk and were flagged for greater attention from child services. Over the past decade, advancements in computing speed, and the capacity to extract and link data from various service systems, have allowed for the creation of predictive risk models incorporating very large datasets and hundreds of variables (Jenkins, 2021). These predictive tools have been widely tested and, in some cases, adopted in countries such as the USA, the UK, Aotearoa New Zealand, Australia, the Netherlands, Norway, and Denmark (Jørgensen et al., 2022). The automation of decision-making processes using these tools has been driven by financial pressures on social work agencies to meet rising demand amid growing social inequality, austerity programs, and welfare cutbacks (Keddell, 2019).
Many predictive tools are developed by private companies under proprietary arrangements, meaning that their operational mechanisms are not transparent. There are some exceptions, however, such as the Allegheny Family Screening Tool (AFST) used by child protection services in Allegheny County, Pennsylvania, which was created by academics who published research on the tool’s development, providing information on the selection and weighting of data points used (Vaithianathan et al., 2013, 2019). Criticism of these tools, and the AFST in particular, has come from both media reports and academic analyses, with concerns raised about potential biases in the data used to train the risk models, and the human rights implications of discriminatory targeting of racial and cultural minorities (American Civil Liberties Union, 2023; Ho & Burke, 2023; Keddell, 2019; Redden, 2020; Samant et al., 2021). Some research has questioned the accuracy of these models, highlighting that many variables utilised in these tools might have little or no actual correlation with child maltreatment, while only a few variables might be highly correlated (Gillingham, 2019). The AFST, for example, ‘learns’ from data about families that are reliant on social programs and are therefore more visible in administrative records. The disproportionate representation of minoritised groups in the training data produces a biased algorithm that classifies certain families as high-risk. This leads to disproportionate levels of state scrutiny and surveillance, and ultimately results in the violation of rights and the perpetuation of racial discrimination within the child welfare system (Eubanks, 2018; Krakouer et al., 2021).
Predictive tools are not only influenced by racial bias. They also demarcate categories of people deemed unfit to parent on the grounds of disability, echoing the eugenic practice of punishing those who fail to conform to statistical constructions of able-bodied whiteness. The AFST provides a compelling example. It uses disability-related data points, such as the number of ‘behavioral health events’ and the percentage of time on medical assistance, as proxies for disability status, which increase a person’s risk score (Allegheny County Department of Human Services, 2019, p. 37). Analysis by the American Civil Liberties Union (2023) found that disability status, as inferred through data related to the receipt of disability benefits and health services, increased a person's risk score by up to 3 (out of 20) points. This perpetuates the harmful notion that the mere presence of a disability indicates a higher likelihood of child abuse or neglect. In a recent case, a couple had their one-year-old child removed from their care based on a high-risk classification by the AFST algorithm, despite no evidence of actual maltreatment (Ho & Burke, 2023). It was later revealed that the mother had attention-deficit hyperactivity disorder that affected her memory, while the father had a comprehension disorder and nerve damage resulting from a stroke.
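To make the mechanism concrete, the following hypothetical sketch shows how proxy variables of this kind can inflate a risk score. The function name, weights and thresholds are invented for illustration and do not reproduce the AFST's actual model; only the general pattern – disability-related service use adding points on a 20-point scale – follows the analysis cited above.

```python
# Hypothetical sketch: how proxy variables for disability can raise a risk
# score. Weights, thresholds and the 20-point cap are illustrative only.

MAX_SCORE = 20


def risk_score(base: int,
               behavioural_health_events: int,
               pct_time_on_medical_assistance: float) -> int:
    """Return a capped score; disability-related proxies add points."""
    points = base
    if behavioural_health_events > 0:
        points += 2   # any recorded behavioural health event adds points
    if pct_time_on_medical_assistance > 0.5:
        points += 1   # prolonged reliance on medical assistance adds a point
    return min(points, MAX_SCORE)
```

On this sketch, two families identical in every other respect diverge by up to three points solely because one appears in disability-related administrative records – the dynamic the ACLU analysis identified.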
This case serves as a potent reminder of the legacy of eugenic logics in modern digital administrative tools. The use of these tools in social work highlights a concerning pattern of relying on biometric markers of physical and mental ‘fitness’ to gauge parental capacity. While eugenics is widely condemned today, social welfare and health authorities still adhere to systems that effectively hinder women with disabilities from becoming mothers or, at the very least, from raising their children. In the United States, for example, child removal rates are between 40% and 80% higher among parents with disability (Leary, 2018). Countries with a history of settler colonialism, including the US, Canada, and Australia, continue to grapple with the legacy of forcibly removing Indigenous children from their families as part of genocidal, eugenic, and assimilationist agendas (Krakouer et al., 2021). On the face of it, risk prediction tools seem like an appealing solution for child protection systems, as they promise to enhance decision-making efficiency in increasingly resource-constrained environments (Jørgensen et al., 2022). However, the reality falls short of this ideal. In practice, these methods have the potential to magnify the eugenic impulse that sorts individuals according to hierarchies of value, providing a scientific basis for separating children from families marked by differences of culture, race, ethnicity, and disability. What is particularly troubling is that the moralising force of algorithmic decision-making is disguised as impartial and objective analysis of data, making it all the more difficult to discern and contest.
Biometrics, eugenics, and the shaping of Australia's immigration system
Historically, eugenics played a central role in the strategies of nation-states aiming to exert control over populations deemed of national interest. These strategies encompassed practices of genetic engineering, but were also reflected in stringent immigration controls, which aimed to ensure that only ‘desirable’ immigrants were granted entry to the country. Simultaneously, these controls utilised eugenic racialised and ableist moral frameworks to justify the expulsion of others (see Levy & Peart, 2004). Biometrics in the context of border regimes, as Magnet (2011, p. 21) argues, is the ‘science of using biological information for the purposes of identification and verification’. Biometric indicators can pertain to an individual’s physical traits, including fingerprints, DNA, and facial, iris, or retina recognition, as well as their behavioural patterns, such as gait recognition and so-called emotion detection. Recent research has revealed how biometric technologies employed in airports and the US welfare system are not neutral with regard to race, gender, class, or disability, but rather serve as tools that enable the manipulation of marginalised bodies through coercive power, resulting in their exclusion from legal protections (Kruger et al., 2008; Madianou, 2019; Magnet, 2011). Biometric technologies have been shown to contribute to and perpetuate the state of being outside of the law, or ‘bare life’, as defined by Agamben, by enabling the suspension of legal rights and access to basic benefits (Kruger et al., 2008).
Border management systems have historically relied on decision-making strategies that incorporated elements of eugenic biometrics, data analysis, and surveillance. Early border technologies to screen out newly arrived immigrants drew upon eugenic assumptions of physical and intellectual fitness, quantified through intelligence quotient (IQ) tests. These technologies assessed new arrivals and their value to the national economy based on ableist notions of capitalist productivity and economic capacity (see Levy & Peart, 2004). The use of AI in contemporary border management systems may appear qualitatively different from the crude eugenic methods of earlier years. However, the fundamental principles of these systems, which involve identifying and excluding individuals considered ‘exceptional’, ‘abnormal’, or potentially ‘criminal’ through biometric identification, remain at the core of their design and purpose.
Despite some reservations, governments are heavily investing in a comprehensive array of technologies to surveil, manage, and assess individuals with bodies and minds considered exceptional, risky and/or threatening. The Australian government, for instance, has allocated significant resources towards developing AI and ADM strategies for the containment of immigrant populations, particularly during the early stages of the immigration process, with the aim of preventing their arrival (see Department of Home Affairs, 2017). As part of the Australian government's Digital Business Plan, the Department of Home Affairs has spent several years exploring the privatisation of immigration platforms to facilitate automated visa processing (https://www.services-exports.gov.au/node/136). This approach includes capturing biometric data, especially related to disability and health, to establish profiles of statistically anomalous individuals prior to their arrival. The government envisaged a platform including the use of ‘algorithmic assessments to perform subjective decision-making functions to assess genuineness, fraud, character concerns, and health assessment – with security assessment remaining with [the Department of Home Affairs]’ (Law Council of Australia, 2022, p. 14). The outsourcing of this process to a private firm was ultimately rejected by a Senate Committee due to data security and equity concerns regarding vulnerable populations. Nevertheless, under the current Labor government, investments in AI and ADM technologies and platforms for visa processing are still being explored, with the government seeking assistance from the private sector. In April 2023, the Department of Home Affairs released a report titled A Migration System for a More Secure and Prosperous Australia, which outlines the redesign of core components of Australia's migration system. 
Notably, biometric platforms integrated into various visa technologies will be adapted to enable the early identification of healthy, non-disabled migrants who can be ‘prioritised … to enhance our economic prosperity and security’ (Department of Home Affairs, 2023, p. 2).
For disabled people in particular, the biometrification of border regimes (re)establishes a hierarchy of immigrant bodies, perpetuating the eugenic belief that a person’s worth to the nation is determined by their biological fitness and work capacity. In the next section we revisit this theme, exploring how work capacity plays a crucial role in determining who is considered deserving of disability welfare benefits.
The scored body: quantifying disability for welfare eligibility
A growing body of research indicates an increasing adoption of statistical methods and computational tools to automate the process of determining who lacks the capacity to work and is therefore deserving of state out-of-work welfare payments (Soldatic, 2019; van Toorn, 2024) and other forms of disability support (van Toorn & Leach Scully, 2023). In contrast to the two ADM systems discussed previously, sorting between the deserving and undeserving has been critical to the formation of the welfare state, dating back to the Poor Laws of 1834 (see Stone, 1986). In the context of the capitalist welfare system, eligibility criteria have shifted from focusing on severe impairment, incapacity, and chronic health conditions to prioritising an individual's ability to work as the key determinant. The use of statistical classification for processes of social sorting has been pivotal to the rationing of state resources among those deemed ‘able to work’, those who are ‘merely unwilling’ and those who lack the capacity.
With the emergence and now normalisation of welfare-to-work policies across OECD (Organisation for Economic Co-operation and Development) countries since the late 1990s, an increasing number of states have developed extensive testing regimes to manage the threshold of eligibility as it has become more stringent over time (see Soldatic, 2019). As part of this shift, and particularly after the 2007–8 global financial crisis, global economic institutions, including the OECD and World Bank, began to push for the inclusion of disabled people in the workforce to bring down the costs of disability unemployment benefits. Utilising the disability movement's language of work capability and the right to work as enshrined in the 2006 Convention on the Rights of Persons with Disabilities, governments began to advocate for ‘improved’ functional assessments of disability to enable greater targeting of welfare-to-work interventions and to remove disabled people from benefits, pushing them into the world of work with the aim of bringing down the cost of welfare. This resulted in assessment structures that often violently intervened in an individual's life (Norberg, 2022) while simultaneously removing all discretion from the medical professionals who had provided long-term support and medical care, and who thus had a deep understanding of an individual's capacity to work, the types of potential work they could do, the length of time they could participate each day, and so forth. As many disability scholars have noted, the rationalisation of disability out-of-work benefits has shifted many people with disability off welfare disability pensions onto lower general unemployment benefits (Soldatic, 2019). In Australia, as a consequence, efforts have been made to improve the precision of assessment interventions to move disabled people off welfare and into employment, which often consists of unstable and contingent work (Soldatic, 2019). This was achieved in part through the implementation of algorithmic, points-based assessment tools.
These tools are designed to gather data with which to analyse the capacity of individuals with disabilities to work. They have evolved over time, incorporating stricter criteria in their assessments. Individuals seeking access to disability unemployment benefits in Australia now have to present ‘raw data’ from extensive medical reports, often spanning extended time periods. For example, this could include full medical reports from all of their treating doctors, specialists and health professionals, such as registered psychologists or local nurse practitioners. The scoring tool processes individuals’ medical information to determine whether they meet the eligibility criteria, assigning points from the medical data based on a scale to quantify their work capacity. To be eligible, an individual must score a minimum of 20 points within a single area or chronic condition, with no consideration for co-morbidities or evolving biological effects of complex health conditions or impairments (Department of Social Services, 2023). Conditions have to be fully diagnosed, reasonably treated, and stabilised, and fluctuations in a condition are not taken into account since they cannot be precisely positioned on the scale. Incorporating these rules into the algorithmic scoring process essentially raises the threshold for qualification, making it more difficult for individuals to secure access to benefits.
These technologies, developed before the era of big data, seem basic in comparison to the biometric systems used in immigration or the data-driven risk modelling employed in child protection cases. However, it is important not to underestimate the significance of these tools, despite their apparent simplicity. Indeed, it was precisely their simplicity, particularly in the statistical benchmarks used, that allowed them to be applied across a diverse range of disabling conditions and socio-technical environments, from online platforms to face-to-face interactions with service providers delivering welfare-to-work programs. This ADM system aimed to quantitatively assess the severity, permanence, and stability of an individual’s specific impairment, leveraging diverse medical data points in a process of administrative ‘biocertification’ (Adler-Bolton & Vierkant, 2022).
Despite the endorsement of international treaties by nation-states to uphold disability rights and social protections, statistical scoring and ranking methods are specifically designed to restrict the scope of the disability category. These methods perpetuate racialised and ableist inequalities bound up in eugenic logics of physical and mental fitness, and value within the national polity. These rationalities create a self-perpetuating cycle of (in)validation, as they gain additional reinforcement from the supposedly unbiased measures of statistical analysis that scale, rank and score the most marginalised populations.
Concluding discussion: scoring and ranking lives of value
This article contributes to and expands the existing body of literature on the significant rise of data analytics and ADM systems in welfare, focusing on how automated governance, while not explicitly eugenic, perpetuates similar discriminatory practices, unequal power dynamics, and exclusionary outcomes. It does so by exploring how the normalisation of data practices within these systems creates conditions inhospitable to disabled people and their lifeworlds. Through the analysis of three case studies, the article demonstrated notable parallels between past eugenic ideologies and socio-technical instruments, on the one hand, and contemporary ADM and data practices, on the other. Automated systems, we argued, perpetuate the patterns of (de)valuation and biometrification associated with the eugenics era, albeit concealed under the guise of technical neutrality. Among those who are most profoundly affected by this form of administrative violence are disabled people, BIPOC communities and people living in poverty. Hence, a significant contribution of this article lies in its foregrounding of the racist and dis/ablist stratifications (re)produced by modern digital state infrastructures.
The echoes of eugenics reverberate within the three domains of digital welfare examined in this article: child removal, immigration, and disability benefits. Drawing from our analysis, we propose that welfare states, particularly (neo)liberal settler colonial welfare states such as Australia, the United States of America, and Canada, have redirected their attention from the controversial practice of biologically engineering future generations to alternative yet equally concerning forms of social and biopolitical intervention. These interventions operate within an epistemic framework that transforms data into statistical representations of various aspects of the world, including individuals, environments, and human bodies. Rather than considering individual perspectives or circumstances, inferences are drawn based on a predefined set of biometric data points. These inferences are then taken as a measure of an individual's value to society, their degree of deservingness, and the particular roles they are deemed capable of fulfilling within a capitalist wage-labour system. Moreover, these inferences serve as the foundation for various forms of state intervention, shaping policies and practices that impact individuals’ access to resources, support, and life opportunities. Data-based practices of group differentiation, we argue, enable a form of social engineering which, for some people, results in a process of ‘slow death’, that is, ‘the physical wearing out of a population and the deterioration of people in that population’ (Goodley et al., 2014, p. 981). To the extent that ADM enables the gradual decline and abandonment of groups considered undesirable or unworthy of social protection, it can be characterised as a project of eugenic world building.
Acknowledgement
We are greatly appreciative of the insightful comments provided by two anonymous reviewers, though we bear responsibility for any errors.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the ARC Centre of Excellence for Automated Decision-Making and Society (grant number CE200100005).
Author biographies
Georgia van Toorn, Lecturer, School of Social Sciences, Faculty of Arts, Design and Architecture, University of New South Wales. She is a political sociologist specialising in international social policy, politics, disablement, and social justice. Her research explores global transformations in welfare governance, with a particular focus on processes of marketisation, the commodification of social care, and the growing impact of data analytics and algorithmic decision-making in the public sector.
Karen Soldatić, Canada Excellence Research Chair in Health Equity and Community Wellbeing; Professor, School of Disability Studies. She is a leading international scholar of disability, marginality and global inequality. Her research uses a broad lens to explore how social, cultural and political factors influence health and community well-being.
