Abstract
While data is increasingly proffered as the resource that unlocks the promises of the digitalized world, for underprivileged individuals and communities, instead of fulfilled promises, datafication means additional marginalization. Examining these forms of marginalization, this article considers how technological advancements come with ability expectations, and highlights the exclusion of, and discrimination against, disadvantaged segments of the population that result from the failure to meet digital ability expectations and reach prescribed data norms. Drawing from critical disability scholarship, we introduce the notions of data ableism and data disablism, which encapsulate privileged ability expectations pertaining to data production and the resulting forms of exclusion that are prevalent in automated societies. Underlining the intersectional nature of data ableism, we discern its two main mechanisms, namely data (in)visibility and data (un)desirability, and document the role of free market ideology in producing and upholding data ableism.
Introduction
The digital transformation of societies has resulted in the generation of unprecedented amounts of data: a societal development filled with great promises and expectations, to the degree that the digital age is now in many ways the age of Big Data. In a volatile global economic environment, data has been instrumental in counteracting stagnant production and declining rates of profit and has thus provided a boost to capitalism (Srnicek 2017). Big Data enthusiasts celebrate the opportunities that data offers for the transformation of economic and social order (Lohr 2015; Mayer-Schönberger and Cukier 2013). Critics, on the other hand, caution that Big Data enables intervention and control of individuals and groups, bestowing power on those who have access to data and the means to process it (Andrejevic and Gates 2014). From this critical vantage point, datafication is based on exploitative relations between the users of digital devices, the producers of data, and the organizations that turn data into revenue streams.
It is suggested that these asymmetries engender fundamental shifts in capitalist relations: with the advent of data-based capitalism, the primary site of contestation moves beyond the ownership of the means of production to encompass the ownership or control of information (Wark 2019). Fundamentally, new data power structures widen the disparities between the “have nots” who generate data and the “haves” who appropriate and use that data (Hintz et al. 2017). The increase of data-based decision-making in various private and public organizations and social settings—including healthcare, policing, markets, labor relations, education and social services—gives rise to issues concerning people’s privacy, discrimination, and participation. Andrejevic (2014) suggests that the result is a new digital divide. Those left on the wrong side of this Big Data divide are “separated from their data and excluded from the process of putting it to use” (Andrejevic 2014, 1685). For individuals and groups, this further exacerbates problems of inclusion and exclusion: people experience data-based discrimination because of social, cultural, economic, physical or other factors that limit their ability to engage with digital technologies (Holtzhausen 2016).
Taking these darker sides of digitalization and datafication as our point of departure, we direct attention to the emergence of market-based data norms and standards that privilege and reward some while marginalizing others. We argue that the exclusion of disadvantaged segments of the population—those who fail to reach prescribed data standards and norms—is an inherent aspect of data capitalism. Engaging with critical disability scholarship, we introduce the notion of data ableism, which encapsulates privileged digital ability expectations pertaining to data production. Its flipside, data disablism, refers to the resulting forms of exclusion that are prevalent in automated societies. Data ableism, then, draws parallels to ability expectations, or what are considered normal and necessary abilities to function in societies. Data disablism relates to the disabling marginalization resulting from the failure to meet such ability standards. These two concepts serve to highlight that data absences and perceived data-related weaknesses hinder people from reaching their potential as data subjects and, in the context of increasing datafication, as members of society. Further, the concept of disability has historically legitimized the unequal treatment of people by labeling them as disabled (Baynton 2001). Data-based automated systems have a similar function: they exclude individuals and communities, creating obscure mechanisms of inequality that remain hidden within opaque decision-making systems and processes.
Admittedly, introducing data ableism and disablism continues a certain trend, in which socially and politically charged concepts are employed in the analysis of data capitalism. Scholars have previously argued that similarly to the way capitalism has relied on aggressive colonization, data capitalism is premised on “data colonialism” by means of data dispossession and extraction of value from people’s lives (Couldry and Mejias 2019; Thatcher et al. 2016). Others have considered that data capitalism engenders new forms of “digital slavery” in the sense that data practices result in a deliberate encroachment on individual agency (Chisnall 2020), and that algorithmic exclusion and discrimination constitute a form of “data violence” that occurs as a result of choices leading to harmful and sometimes even fatal consequences (Hoffmann 2018). As Chisnall (2020) points out, such powerful terms should not be applied casually. In the case of concepts related to disability, critical disability scholarship underlines that disability metaphors, while not inherently wrong, may be imbued with oppressive assumptions about disability (Ben-Moshe 2006), whereas insensitive analogies to disability risk obscuring the structural oppression that marginalizes people with disabilities (May and Ferri 2005). Acknowledging these concerns, we invoke the notions of ableism and disablism not as metaphors, but to point precisely to power structures that, through new forms of digital ability expectations and data norms, exclude and marginalize individuals and communities. By doing so, we respond to explicit calls to treat the cultural concept of ableism as a gift from the field of disability studies to the academic community. Wolbring (2012) suggests that the human rights progress achieved by the disability rights movement, and the knowledge about the socially constructed nature of disability gained by disability studies, could benefit other communities and academic fields.
Thus, he welcomes the extension of the concept beyond the sphere of disability, as it will enable scholars to explore and scrutinize various societal ability expectations and norms that disenfranchise segments of the population. Taking this cue, we propose the terms “data ableism” and “data disablism” to broaden the cultural understanding of ableism and disablism to encompass data-related ability expectations. With these concepts, our aim is to direct attention to emerging societal divides in automated societies and highlight the systemic data-driven discrimination that is faced by neglected and marginalized sectors of society.
Our argument will proceed as follows. We begin by drawing on critical disability scholarship, particularly on ableism and disablism, and propose extending these concepts to analyze how dominant data politics cement biases and deepen societal divisions. This is followed by a discussion on inequalities and discrimination processes fostered by datafication, and a framing of this discussion in terms of digital ability expectations under the notions of data ableism and data disablism. Here, we will examine two main mechanisms—data (in)visibility and data (un)desirability—discussing how these mechanisms intersect with well-known forms of societal discrimination. We then expand on how data ableism and disablism conform to prevailing free market ideologies. To conclude, we suggest that the conceptual tools of ableism and disablism not only shed light on previously under-examined forms of digital social exclusion, but also point toward more inclusive and egalitarian data politics and data futures.
Ableism and Disablism
The notions of ableism and disablism are central to the field of disability studies. Ableism refers to preferences for abilities that are considered normal and the rights and benefits that come along with them. Campbell (2001, 44) defines ableism as “a network of beliefs, processes and practices that produce a particular kind of self and body (the corporeal standard) that is projected as the perfect, species-typical and therefore essential and fully human.” She further highlights two main tenets of ableism: on the one hand, the ideal of normality deeply rooted in ableist discourse; and on the other, a divide between what is considered and accepted as normal and what is not. Disablism refers to the flipside of ableism: the negative treatment of persons who are perceived as lacking with regard to said ability preferences. It is defined as “a set of assumptions (conscious or unconscious) and practices that promote the differential or unequal treatment of people because of actual or presumed disabilities” (Campbell 2009, 4).
These definitions already illustrate that ableism and disablism are not just related to physical or mental capabilities. Instead, ableism is associated with perceptions that certain abilities are essential (Hutcheon and Wolbring 2012); we can therefore consider ableism as a set of biases against people who are considered not to possess these essential abilities. Generally, ableism promotes abilities that are associated with specific values, such as productivity and competitiveness, which are espoused by certain social groups and supported by power structures, over values that are not considered as important (Kumar et al. 2012; Wolbring 2008). In line with this observation, Goodley (2014, xi) underlines the socio-political underpinnings of disablism, which he understands as “the oppressive practices of contemporary society that threaten to exclude, eradicate and neutralize those individuals, bodies, minds and community practices that fail to fit the capitalist imperative.” Finally, Wendell (1989) argues that similarly to the way patriarchal societies advance and propagate male-dominated norms and values while women face discrimination and exclusion, ableist societies discriminate against people with disabilities and minimize their ability to participate in social life. Instead of mere lack of capabilities, then, ableism and disablism are closely linked to structural discrimination of individuals and groups, and to the reproduction of unequal social relations.
Understanding and uncovering the social dimensions of disability lies at the center of critical disability scholarship, which sees disability not as a biological given but as socially constructed: the disabled are treated as “the other” and are not given the opportunity to integrate into society (Wendell 1989). This conceptualization has been referred to as the social model of disability, focusing not on physical or cognitive impairments but instead on the societal barriers that hinder disabled people from fully participating in society (Oliver 2013). Impairments, then, are not the same thing as disabilities; disability is a construct, a function of the environment, and if the physical and social organization of the world were tailored for impairments, disability might effectively disappear (Burr 2015). Cherney (2011) further argues that the logic that associates ability with the physical body offers a rationale for the discrimination and marginalization of disabled people, while obfuscating the fact that the physical and social organization of life augments the pervasiveness of disablism in societies. Focusing on political economy and class relations, Russell’s (2019) work brings to light capitalism’s systematic oppression and marginalization of people with impairments by excluding them from the labor process. Thus, on the one hand there is the traditional medicalized understanding of disability, which focuses on detecting and fixing physical or mental impairments. On the other hand stands a socio-cultural perception of disablism that concentrates on the social and cultural construction of the disabled other as a marginalized and discriminated-against member of society (Wolbring 2008).
The socio-cultural approach to disability also highlights its intersectional dimensions. Many scholars have underlined that disability should not be studied and examined independently from other social and cultural categories, highlighting the need to examine the intersection of disability with other categories in order to paint a complete picture of the social processes that develop around disability (e.g., Annamma et al. 2018; Campbell 2009; Ferri and Connor 2014; Gillborn 2015; Goethals et al. 2015). Goethals et al. (2015), for example, examine how disability, in connection to categories such as gender, religion, economic and social status, and family background, builds up a complex nexus of social relations that privilege some and oppress others. Similarly, Ferri and Connor (2014) oppose a single-axis analysis of discrimination, examining how the intersection of social class, race, and disability results in additional disadvantages for students who already face many barriers in education. Indeed, disability scholars have turned to feminist (Garland-Thomson 2002; Wendell 1989), queer (Baril and Trevenen 2014), and critical race studies (Annamma et al. 2018) to explore and theorize the disempowerment of disabled people by social structures and to develop theories of resistance and empowerment.
Toward an Understanding of Data Ableism and Data Disablism
Digital technologies offer immense potential to improve the lives of people, including people with disabilities, but they can likewise exacerbate existing inequalities. For instance, disabled people often face significant access and usage barriers that limit their life opportunities, as digital technologies are predominantly designed to cater to the needs of able-bodied consumers (Jaeger 2012). Trewin (2018) suggests that there are indications of bias against disabled people in machine-learning-based systems, but that the issue has not yet been examined thoroughly. Nevertheless, as digital technologies become the preferred or even the only means of accessing government services and welfare programs, people with disabilities face insurmountable obstacles when their needs are not taken into consideration (Watling 2011). Even diversity initiatives that ostensibly aim to break down labor market barriers for people with disabilities can end up setting up novel barriers and engendering additional forms of discrimination (Holmqvist et al. 2013; Kumar et al. 2012).
Disability scholars suggest that besides exacerbating existing disadvantages, technology may also engender new forms of ableism in the future, creating yet more divides and deepening inequalities. For example, the culture of human enhancement, presupposing that everyone is lacking and therefore in need of fixing, engenders new ability expectations and therefore new social forms of disablism toward “non-enhanced” persons (Wolbring 2009). In such a “transhumanized” version of ableism, improving the human body and its functioning beyond species-typical boundaries might be perceived as essential (Wolbring 2008). Such enhancement technologies may bring about new societal divides as the “techno-poor disabled,” unable or unwilling to enhance their abilities, may face discrimination (Wolbring 2008).
Speculations about future technological developments offer a view into how the emergence and adoption of new technologies can give rise to new forms of ableism and disablism. However, we do not have to speculate: as indicated above, datafication offers great promises but its societal consequences are existing realities that can also constitute a dystopian present for many. Recent studies highlight the rapidly evolving nature of the digital divide and point out that scholarship on digital inequalities needs to encompass emergent, data-related inequalities (see Lutz 2019; McCarthy 2016). Proceeding from the social understanding of ableism as ability expectations, and of disablism as discrimination resulting from the failure to meet them, below we broaden the understanding of social ability preferences and the associated structures of oppression so that they encompass expectations related to data production and forms of data-based discrimination. However, two definitions are in order before we proceed.
First, we draw parallels between bodily and cognitive standards that act as benchmarks for expected normality, and the development of new ability expectations that relate to the production of data. We refer to data ableism as the data politics, processes and practices that, by privileging certain data-related abilities, favor specific forms of digital engagement and engender a particular kind of desired data subject.
Second, we draw parallels between the way people are marginalized because of perceived bodily or cognitive disabilities, and the way that data absences and perceived data-related deficiencies hinder people from reaching their potential as data subjects and members of the increasingly datafied society. Accordingly, we conceptualize data disablism as the data politics, processes and practices that exclude individuals who fail to fit data-related ability expectations, deepening existing social inequalities and creating new ones.
The normative expectations of data-related ability that underlie data ableism and its flipside, data disablism, are closely connected with assumptions about market dynamics that ostensibly regulate the digital economy and lead to the maximization of individual and societal benefit. Data protection regulation, for instance, has placed individual choice at the heart of data relations (e.g., Solove 2013): a policy choice that has been considered one enabler of data capitalism (Coll 2014). Accordingly, technology companies employ legal and rhetorical devices to frame data relations in terms of rational and mutually beneficial exchange (e.g., Fourcade and Kluttz 2020; Zuboff 2015). Here, the ability expectation, we argue, is that individuals have the capabilities and competence to act as rational market participants in data relations. As market participants, individuals are expected to produce data that the market values, to be capable of making informed decisions about disclosing data, and to be in a position to accept or decline market offerings based on cost-benefit calculations. Such ability expectations may be interpreted in terms of the market as the primary source of human advancement and the arbiter of success, underlining subjectivities such as self-management, self-responsibilization, choice, and optimization. Data relations are yet another domain to be rationally managed for the fulfilment of individual need and ambition. These ability expectations, we argue, constitute a standard against which the (dis)enabling outcomes of datafication may be assessed.
In what follows, we illustrate how data ableism and disablism manifest by analyzing two intersecting normative mechanisms: data (in)visibility and data (un)desirability. Data (in)visibility refers to the ability to produce data that render people visible to the system or conversely the ability to hide from it, while data (un)desirability relates to the ability to produce desired data that are deemed valuable and lead to beneficial outcomes. By dissecting the two mechanisms, we will explicate how datafication functions as a normalizing process that punishes those who deviate from the standard of data normalcy and discuss how this process intersects with data-related forms of societal inequality and stratification already established in the literature.
Digital Ability Expectations and Data (In)visibility
In the first instance, data ableism enters as expectations about visibility and invisibility by means of data. The complex, intertwined socio-technical systems that abstract individuals into data flows lead to new visibilities, engendering the “disappearance of disappearance” (Haggerty and Ericson 2000), as the omnipresence of data production makes it difficult to escape its reach. The ubiquitous data capture processes have resulted in what Dencik (2018) calls “surveillance realism”: a reluctant acceptance and normalization of intrusive surveillance processes as the only viable reality. While consent for the collection of personal data is regularly traded in exchange not only for convenience but also for access to vital services, data visibility becomes the expected norm. Personalization, customization and other forms of tailoring suggest that being visible is also the individually optimal choice. In a data economy driven by the logic of capital accumulation, deviation from the normalcy of data visibility may result in exclusion and marginalization: diminished market and labor opportunities, exclusion from peer groups and social networks, inability to access vital services, and other forms of being left in the margins of the datafied society. Possibilities to avoid data visibility can be limited to actions that are essentially marginalizing, such as purposefully refraining from the use of digital services, even vital social ones. The rational response to the visibility norm, as Draper and Turow (2019) point out, is a resigned attitude toward data extraction.
While the expectation of data visibility suggests that rational-choice-making, self-managing individuals are visible, the expectation of visibility is externally imposed and limits choice. Accordingly, the visibility norm is fraught with class distinctions. Before Facebook’s privacy scandals and the widespread backlash, Mark Zuckerberg used to downplay the need for privacy. At the same time, he was purchasing the properties surrounding his house to safeguard his own privacy, demonstrating that privacy does matter, but is attainable only by those who can afford it. While disappearance is difficult, it can be possible for the select few—for most of us, the expectation is to be visible.
At the other end of the visibility spectrum, already marginalized societal strata sometimes do not have to escape the gaze of surveillance, as it does not even reach them to the same degree. Speaking at the 2014 Public Health Symposium organized by the Quantified Self community, Margaret McKenna, then Head of Data and Analytics at the popular self-tracking brand Runkeeper, offered an illustrative example of the class divide of digital (in)visibility. Examining the user data that the company had collected for the previous year for the Boston area, she realized that there was data available for almost the whole region, including affluent and middle-class neighborhoods, but not for a very poor area that generated no data points at all. On the one hand, self-tracking is another means of making consumers visible and thus controllable (see Beckett 2012). On the other hand, data invisibility isolates groups who do not voluntarily, or are not able to, produce data. If classification by means of data is part of business as usual, data invisibility emerging via such mechanisms can restrict the ability to receive necessary financial, legal, social and health services and assistance (Gilman and Green 2018).
When personal data affects people’s life chances, attention needs to be paid to the socioeconomic dimensions that foster or hinder opportunities for engagement with digital technologies. People with higher socioeconomic status have the material means and time to engage with such technologies (Lupton 2016). As welfare and life opportunities become linked to ability expectations regarding data production, data ableism in the form of data visibility emerges as a constituent part of data capitalism. For the affluent few, such as Mark Zuckerberg, deviation from the visibility norm may be desirable and attainable; for others, deviation from the norm is either an irrational choice or an inability to meet ability expectations.
Digital Ability Expectations and Data (Un)desirability
The ability to produce data that renders one visible is but one division along which data-related ability expectations run, as visibility is a necessary but not sufficient condition to avoid discrimination. Another division concerns the ability to produce (or rather, to be the subject of) data that results in a beneficial outcome from an algorithmic system. When it comes to data-related ability expectations, the expected norm—the data ableist standard—is to be able to produce data that the system considers desirable. Given the “disappearance of disappearance” outlined above, individuals cannot avoid producing data about themselves that connects them with qualifiers such as mobility and consumption patterns, place of residence, skin color, (assumed) gender, dialect, familial relations, and social connections. Built into the ableist norm of data desirability is the expectation that this data leads to deserved outcomes. The flipside of the ableist expectation is disablement, when the unavoidably produced data affects life chances adversely.
While anyone can be misrecognized by algorithmic systems and can therefore experience their disabling effects, literature is rife with examples of the “intersecting forces of privilege and oppression” (D’Ignazio and Klein 2020) as data use discriminates against women, immigrants, people of color, and poor people. Critical scholars as well as computer and data scientists have scrutinized and questioned the way technologies and associated policies are employed to reinforce inequalities, outlining the discriminatory effects of data practices. One example of a context in which minorities face cumulative risks is data use and automation to predict criminal activity (Ferguson 2017). Examining predictive policing practices, Richardson et al. (2019) caution that past police misconduct generates systematically biased data that perpetuate discriminatory policing, creating vicious circles that are difficult to escape from. Moy (2019) suggests that police technology, such as predictive algorithms and facial recognition, may not only reproduce, mask or create new inequities but can also exacerbate policing harms and undermine oversight of discriminatory practices. O’Neil (2016) describes how algorithmic systems exclude and penalize the already underprivileged by identifying them as potential police targets, which restricts their access to financial and health services and reinforces barriers to education and employment.
These studies suggest that data pertaining to a person’s gender, ethnicity, skin tone, economic status and even name can lead to negative treatment by algorithmic systems. Anyone who is a source of such data can experience negative effects on their life chances. Avoiding being discriminated against becomes unattainable when systemic algorithmic bias privileges data that is constitutive of a person’s identity. The data ability expectations, then, can be met only by those whose personal and social identifiers are aligned with the parameters that the system values. Data disablism by data undesirability is, as such, a separate mechanism from well-established forms of societal discrimination, but the two intersect to produce oppressive results for the underprivileged.
Having examined data (in)visibility and data (un)desirability as main mechanisms of digital ability expectations that form and foster data ableism, in what follows, we further discuss the role of free market ideology in the rise of data-related discrimination and marginalization.
Discussion: Data Ableism and Free Market Ideology
At the root of the new societal divides, ushered in by datafication, are different forms of sorting, profiling and classification. Fostered by market institutions, they affect people’s life chances by setting exclusion boundaries and by providing differential opportunities for access to goods and services (Fourcade and Healy 2013). These new divides partially run across well-known fault lines such as socio-economic status. In “Automating Inequality,” Eubanks (2018) analyzes how data-driven automated decision-making intensifies the marginalization of the poor, who face increasing data scrutiny when they try to access health and public services, cross national borders or enter more highly policed areas. According to Eubanks, predictive models that target, scrutinize and punish low-income individuals construct “digital poorhouses” that profile, police and penalize the poor by classifying and criminalizing them, exclude them from public services, and attempt to predict and control their behavior. Algorithmic decision-making, then, creates an even more uncertain and dangerous environment for the poor (O’Neil 2016). However, Eubanks cautions that not just the poor, but everyone, is a potential resident of the digital poorhouse: the logic here is that technologies and policies targeting underprivileged segments of the population will eventually become adopted for the general population (see Charitsis 2019).
The conceptual tools of data ableism and disablism provide additional insight into the expansion of the digital poorhouse. As we have discussed, the data-related ability expectation of being visible to algorithmic systems maintains data production as a rational individual choice for most; being invisible means risking adversity and the disabling effects of market exclusion. On one hand, then, to produce data is to meet the ableist standard of data visibility. On the other hand, adversity could still result if the data thus produced is undesirable, so that individual qualities and characteristics are associated with failure to meet normative expectations. As Eubanks (2018) describes, when vital public services become contingent on data production, the already marginalized are marginalized further. Accordingly, data-driven systems evoke sentiments of vulnerability, discrimination, and exclusion in people who feel that they have to face additional hurdles in their struggles to meet basic needs (Petty et al. 2018). At the same time, when social provisions and public services become increasingly conditional on the production of desirable personal data, almost anyone can fail to access services and end up on the receiving end of disciplinary and control mechanisms. Ending up in the digital poorhouse simply requires that the produced data fails to meet prescribed, potentially arbitrary, ableist standards.
In general, ableism not only establishes differential status for social groups, but also functions as an impetus for alignment with the dominant norms (Kumar et al. 2012). In a similar way, prescribed digital ability expectations function as “normalizing apparatuses of disciplinarity that internally animate our common and daily practices” (Hardt and Negri 2000, 23). This development is an exemplary manifestation of “neoliberalism in action” (Lazzarato 2009) in the sense that society is algorithmically driven further toward a market-based logic that promotes competition and the management of the self, simultaneously creating deep inequalities. As the market becomes an increasingly important arbiter of social and economic organization, a growing number of people have to endure unequal treatment, injustice, and even violence (Fırat 2018). According to Dardot and Laval (2014), neoliberalism has attained the role of an “existential norm” in which competition becomes a moral imperative, market logic permeates social relations, and individuals are expected to reconfigure themselves as enterprises. Critical disability scholar Goodley (2014) uses the term “neoliberal-ableism” to emphasize that free market ideology has advanced the pursuit of the (hyper) normal, which celebrates self-responsibilization. He explicates that this not only shrinks the welfare state and places an additional burden upon people with disabilities, but also creates new forms of ability that individuals are expected to attain using their own socio-economic resources. Data ableism, we argue, works in a similar manner: driven by an insatiable need for capital accumulation that can only be satisfied through the adoption of free market ideals, digital capitalism establishes data-related ability expectations that can have severe or devastating consequences for those who fail to meet them.
Data ableism “inaugurates the norm” (Goodley 2013) in data production, constructs an inferior other, and legitimizes preferential treatment or discriminatory practices and policies. The proliferation of data-generating digital technologies that record, measure, and compare human activities not only intensifies surveillance and normalization but also allows for the “othering” of those who fail to reach prescribed expectations (Lupton 2015).
As unequal and exploitative relations are increasingly seen to lie at the heart of the data economy, issues pertaining to control and ownership of personal data attract increasing attention among academics, policy-makers, and activists. Many emerging initiatives attempt to offer market-based solutions that aim to provide financial remuneration or other rewards and benefits for user data, but end up reproducing and legitimizing, rather than contesting, existing models of data appropriation (Charitsis et al. 2018; Lehtiniemi 2017). The concepts of data ableism and disablism highlight that such solutions assume rational, responsible, and self-optimizing individuals, upholding and propagating the ableist standard of desired data subjects. For those who are able to meet these expectations, market-oriented solutions based on data ownership and control might appear personally beneficial; for others, such solutions exacerbate the risks of exposure and vulnerability, which, as we have discussed, potentially intersect with other societal disadvantages.
One way to bypass such hazards is to reconsider what it is that we are trying to control or own. Instead of treating personal data as an external commodity that can be exchanged between different parties, Floridi (2005) suggests that it should be seen as constitutive of one’s personhood. As he underlines, ‘My’ in ‘my information’ is not the same ‘my’ as in ‘my car’ but rather the same ‘my’ as in ‘my body’ or ‘my feelings’: it expresses a sense of constitutive belonging, not of external ownership, a sense in which my body, my feelings and my information are part of me but are not my (legal) possessions. (p. 195)
Following this ontological premise, data-based discrimination can be viewed as unequal treatment against a constitutive part of ourselves—our data bodies (see Petty et al. 2018)—and thus, parallels between discrimination based on bodily or mental expectations and discrimination against perceived inefficient or undesired data bodies become clearer. At the same time, this premise suggests that the way to address data disablism is not through market-based mechanisms. Examining the disparities in life chances arising from datafication gives reason to expect that new forms of ableism and disablism are endemic to market-based, data-driven algorithmic decision-making. Instead of promoting the market as the preferable form of social and economic organization, ways to address data disablism could more fruitfully be sought by challenging, and ultimately dismantling, the power structures that foster such discrimination in the first place. This indicates that different interventions to shape data politics (Bigo et al. 2019), such as research and practical experimentation with data governance (Lehtiniemi and Ruckenstein 2019) or critical consciousness building (Markham 2021), could be formulated to explicitly take into account the idea of data-related ability expectations and their flipside, forms of data disablism.
Conclusion
More than two decades ago, Castells (1997) observed that while the information age provided the opportunities necessary to minimize social polarization, exclusion, and inequalities, it had thus far aggravated them. In the ensuing years, despite the celebratory rhetoric of (big) data evangelists, the tide has not been reversed: without rejecting the opportunities that data can offer for advancing egalitarian social values, it is clear that datafication benefits some and not others. As we have discussed, mass data production, coupled with the rise of algorithmic decision-making, has enabled forms of data ableism and disablism that negatively affect the life chances of those segments of the population that cannot reach prescribed data expectations.
As digital technologies become increasingly embedded in everyday life, data will attain an even more central role in social and economic organization. It would be erroneous, though, to treat data-based algorithmic discrimination as an external consequence of datafication. The inequalities embedded in algorithmic decision-making are neither unintended nor unavoidable outcomes of technological progress. Viewing them in such a deterministic way would obscure the fact that technology is not neutral: it is developed and utilized within socioeconomic conditions that hinge on exploitative economic relations that oppress individuals and groups of people. Following the conviction that technology is laden with societal values and biases leads to the observation that the digital world cannot distance itself from existing social inequalities (Selwyn 2004).
While algorithmic bias and discrimination have been documented in both academic scholarship and media reports, antidiscrimination discourse has been dominated by approaches that, rather than shedding light on the structural processes that foster discrimination, foreground the actions of bad actors (Hoffmann 2019). Similarly, protection against the dangers of algorithmic discrimination follows the premises of hyper-individualism, where responsibility is bestowed on each individual person (Bigo et al. 2019). In order to reject such approaches and to resist technological determinism, it is important to assert that technology and data can be used in ways that curtail existing inequalities and injustices. Here, critical disability scholarship has highlighted the need to build relationships across lines of difference, forge alliances at the intersection of disability, race, class, gender, or sexuality against societal marginalization, and develop multidimensional and inclusive forms of resistance (see Annamma et al. 2018; Ferri and Connor 2014). Similarly, critical data studies scholars have called for attention to be paid to the underprivileged, who are more affected by the digital divide, and for opening up the space for theorizing change that defies data injustice and inevitability (see Milan and Treré 2019). The notions of data ableism and disablism, which foreground the intersectional and structural nature of data inequalities, represent an attempt in these directions. By shedding light on the way in which data capitalism excludes individuals and communities based on ability expectations, and by engaging in interdisciplinary academic dialog, this approach strives to provide a platform for intersectional analyses of data discrimination and marginalization and to foster academic and societal alliances that will enable forms of resistance to emerge.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Tuukka Lehtiniemi received funding from the Academy of Finland project “Re-humanising automated decision-making”.
