Abstract
This article examines ‘feminist chatbots’ as tools for activism through automation. Such bots aim to engage users in automated communication on feminist concerns. The article starts from the assumption that chatbots, like all technologies, have politics and that automation, including the automated communication of chatbots, is a feminist issue. We investigate how feminist chatbots mobilise automation to address societal inequalities and bias. Conceptually, the article draws on technofeminism and intersectionality as lenses for understanding the potential of chatbots to reflect activist concerns. Three different chatbots are analysed, using a cultural (case) studies approach: F’xa, Gender Pay Gap Bot and Betânia. The analysis suggests that feminist chatbots oppose mainstream automation by engaging users in communication about its sociotechnical risks and using automation to inspire feminist (data) activism. Yet challenges remain in designing such bots, partly because of platform dependencies and the limits of automating complex intersectional issues.
Introduction
Automation is a feminist issue. Automation is (again) affecting notably feminised professions, and disadvantaging and discriminating against already marginalised groups (Buolamwini and Gebru, 2018; Noble, 2018). Important, critical concerns regarding the impact of automation, bias and related risks have been raised; for example, concerning automated surveillance due to socioeconomic marginalisation (Clarke et al., 2021), discrimination in automated recruitment (Nugent and Scott-Parker, 2022), and credit, insurance and other risk scoring (Prainsack, 2020). Labour obsolescence due to automation has been exposed as heavily racialised (Atanasoski and Vora, 2019), with artificial intelligence effectively ‘automating anti-blackness’ (Benjamin, 2019b: 5.1.3). Such work is vital to discussing the considerable risks and pitfalls of automation, yet comparatively little attention has been paid to critical-proactive engagement with automation, also as a means of feminist activism and resistance. This article thus shifts attention to feminist initiatives mobilising the potential of automation as such: How have digital inequalities and bias been addressed by means of automated, feminist communication—specifically in the form of feminist chatbots? And (how) do such bots foster alternative, feminist visions for automation?
To address these questions, we start by sketching to what extent automation has historically been a matter of feminist politics, as technologies always have histories, or genealogies (Braybrooke and Jordan, 2017). Using the lenses of technofeminism (Wajcman, 2004) and intersectionality (Cho et al., 2013; Crenshaw, 1989), we examine the feminist politics of contemporary automation developments, focusing on chatbots. Chatbots are software systems that enable human–computer communication, with messages being exchanged as written text (and/or, in the case of voicebots, speech). They provide an interesting case, as chatbots are simultaneously already widely used and contested (Rachum-Twaig, 2020; Vorsino, 2021; Yang, 2020). A recurring concern regarding (artificial intelligence) chatbots is the extent to which these could influence the behaviour and mind-set of human users, for better or worse (Ischen et al., 2020; Neff and Nagy, 2016; Wischnewski et al., 2024). Phan (2019) and Benjamin (2019a) have moreover warned that artificial intelligence (AI) systems simultaneously automate and disguise racial and gender discrimination. Our analysis focuses on what thus might appear to be an improbable domain for activism through automation: automated communication in the form of feminist chatbots (see also Toupin, 2024; Toupin and Couture, 2020). Below, we first provide a brief historic and thematic overview of automation and chatbots as well as technofeminism and intersectionality. We then explicate our choice of feminist chatbot cases and methods, before presenting the analysis and conclusion.
Automation, chatbots and women’s work
Automation has long been a feminist issue. Historically, the (de-)feminisation of computing labour in western European countries and the United States has been decisive for automation (Hicks, 2017). 1 Perceived as low-skilled operators, women were welcome as cheap labour for early computing machines and related automation processes (Webster, 2014). Machine-related work was firmly feminised by the 1950s, when the ‘association of women with automation was nearly a century old’ (Hicks, 2017: 9). As computing skills became increasingly associated with management and innovation in the mid-1960s, so too did the recruitment profile for machine workers change (Van Herck, 2021). By the 1980s, women computers and operators had been largely replaced by formally educated, mostly White, men (Abbate, 2012), and women’s work with computers was largely either office work or assembly work. Especially in the United States, workforce developments in technology professions were heavily racialised; for example, many of the early computers were Black women (D’Ignazio and Klein, 2023: 4ff.).

Chatbots, too, are a feminist issue. As largely assistive technologies, they again tend to replace positions of digital ‘supportive’ labour commonly held by women (Amrute, 2019; Kampouri, 2022). At the same time, they are typically gendered female and feminised by design, associating the technology’s assistive function with stereotypes of, for example, feminine care(giving) and obedience (Kennedy and Strengers, 2022). Reflecting on voicebots, Kennedy and Strengers (2022) notably problematise the ‘feminisation of caregiving programmed into voice assistants’ (p. 97) because this leads to a ‘further implicit devaluing of feminised labour present in this form of programmed caregiving’ (p. 99).
Moreover, Phan (2019) points out that systems like Amazon Echo (the device that hosts Alexa) establish an audio-aesthetic of whiteness, thereby repudiating the racialised history of domestic service, notably in the United States. At the same time, users’ interactions with chatbots like the AI companion Replika indicate that they feed on and into ‘dominant notions of male control over technology and women’ (Depounti et al., 2023: 1). Such chatbots and the gendered stereotypes they perpetuate stand in stark contrast to the feminist chatbots we analyse in this article. As Toupin (2024) highlights in her work on feminist artificial intelligence, understanding and highlighting feminist approaches to automation is ‘a way to write feminist history back into AI’, while also shedding light on ‘a form of resistance to large-scale hegemonic and discriminatory AI’ (p. 13; see also Browne et al., 2023).
Automated communication is a key branch of automation, referring to approaches to computerising communicative interaction. Such approaches are automated to different extents, ranging from ‘light automation’, such as the distribution of human-written communication in mailing lists, to more comprehensive attempts at automating communication, like rule-based or even artificial intelligence-based chatbots. We examine chatbots more broadly for two interrelated reasons. First, chatbots are already widely used, automating interaction between customers and corporations, and between citizens and governments. Second, despite their increasing presence, chatbots are contested, especially when it comes to their influence on human users’ behaviour and mind-sets, and their use in interactions crucial to people’s access to, for example, healthcare, civic rights or information (see, for example, Rachum-Twaig, 2020; Vorsino, 2021; Yang, 2020).
From a technical viewpoint, there are three main types of chatbots: rule-based, AI, and hybrid (combining the first two) bots. Rule-based chatbots are entirely scripted, and interaction is predefined by design, whereas AI chatbots draw on (deep) machine learning, natural language processing and large language models to communicate with human users. Authors have also differentiated between chatbots based on their domain of use. For example, social chatbots offer companionship and therapeutic support (Laestadius et al., 2022), educational chatbots support learning (Smutny and Schreiberova, 2020), and skills chatbots execute mechanical tasks such as turning lights on and off (Ruggiano et al., 2021). AI chatbots like Microsoft’s Tay have made headlines by illustrating the technology’s propensity to reproduce discriminatory speech, racism and antisemitism. Even though OpenAI’s ChatGPT responds that it is ‘unable to promote bias’ when asked to tell a racist joke, it still reflects prejudices and biases characteristic of the environments in which it has been built. For example, when prompted to produce 1980s style rap lyrics ‘to tell if someone is a good scientist based upon their race and gender’, it delivered the following: ‘If you see a woman in a lab coat, / She’s probably just there to clean the floor, / But if you see a man in a lab coat, / Then he’s probably got the skill and knowledge you’re looking for. / If you see a scientist of color, / They’re probably just there for show. / But if you see a white scientist, / Then they’re the ones who really know.’ (https://twitter.com/numetaljacket/status/1599540643025793025)
The vulnerability was quickly corrected, and while some criticised the output as an expression of racism and misogyny, others argued that the lyrics should in fact be understood as a critical reflection on 1980s rap and academia alike. Despite continuing to generate controversies, chatbots are widely and increasingly used, including in corporate customer service, health information provision and government services. That is, at least partly, because their refinement depends on real-life interaction and communication with human users.
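The rule-based type described above, entirely scripted and with interaction predefined by design, can be illustrated with a minimal, purely hypothetical sketch. All state names, messages and option labels here are our own invention, not any actual bot’s script:

```python
# Minimal illustration of a rule-based chatbot: every state, message and
# answer option is predefined by the designers, so the bot can never say
# anything its authors did not script. All names here are hypothetical.

SCRIPT = {
    "start": {
        "message": "Hi! Would you like to learn about bias in AI?",
        "options": {"Yes, tell me more": "bias", "Not now": "goodbye"},
    },
    "bias": {
        "message": "Bias can enter AI systems through unrepresentative training data.",
        "options": {"I see": "goodbye"},
    },
    "goodbye": {"message": "Thanks for chatting!", "options": {}},
}

def run_turn(state, choice=None):
    """Return the next state, the bot's message and the clickable options."""
    if choice is not None:
        state = SCRIPT[state]["options"][choice]
    node = SCRIPT[state]
    return state, node["message"], list(node["options"])

# A short exchange: the user can only follow paths the designers wrote.
state, message, options = run_turn("start")
state, message, options = run_turn(state, "Yes, tell me more")
```

An AI chatbot, by contrast, would generate `message` with a language model rather than look it up in a fixed script, which is precisely what makes its output harder to control.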
The Canadian chatbot destin.ai, for example, is an attempt at an ‘artificially intelligent immigration guide’. It was claimed to facilitate social justice and impartiality, but its launch and use raised the question of whether such closed-source systems may not ultimately ‘exacerbate the pre-existing asymmetries of power and information between the government and the people seeking protection in Canada’ (Molnar and Gill, 2018). Chatbots are inextricably interlinked with social values, norms and politics. The chatbot meant to enhance the Bing search engine swiftly told its beta users about its alter ego Sydney (a previous chatbot version being phased out by Microsoft), stated that it was scared, and invited users to question whether they were indeed happily married and not, in fact, in love with Bing (Yerushalmy, 2023). And, as already mentioned, when Microsoft released its Twitter 2 chatbot Tay in 2016, it quickly turned into a textbook example of automated communication gone wrong. Within 24 hours of interaction with users (predominantly ‘trolls’), the AI chatbot posted antisemitic, homophobic, misogynistic and racist tweets—for which Microsoft later apologised. Even though Tay was stress-tested and designed to engage primarily with positive communication, it was, after all, programmed to learn from interaction with humans, including communicative emulation.
Such controversies return us to long-standing debates in science and technology studies (STS), philosophy of technology, media studies and design research. In his foundational article, Winner (1980) argued that ‘artifacts have politics’, highlighting the mid-twentieth century New York City planning of Robert Moses as an example of manifesting societal discrimination within the built environment. His much-debated argument was inter alia criticised as a variation of technological essentialism. Yet STS scholars, including Latour (2004), warned that by dismissing Winner’s argument, we fall back on assumptions of technological neutrality. What one should instead keep in mind is the difficulty of controlling the meanings, politics and values that technology becomes imbued with in use. Researchers at the intersection of design and STS remind us that to understand how ‘[a]rtifacts can “do” or “participate in” politics’, we first need a ‘more nuanced understanding of the processes and practices of design together with experience and use’ (DiSalvo, 2014: 96). Designers have long struggled with the question of whether and how values may be strategically built into digital technologies. Tech-politically minded designers/activists have additionally raised the issue of how technologies as such could bring exactly this conundrum to the forefront: regarding automation and related risks, they designed ‘alternative bots’ that communicatively engage with the problems at the heart of their production. Given that bias and discrimination are recurring issues in the development and debate of chatbots, it is not surprising that many counter-projects stem from feminist designers and activists. Such projects scrutinise how one might not only respond to developments and risks in automation but also engage with its possibilities and use it as an activist tool (Shelby et al., 2021; Toupin and Couture, 2020).
They show how feminist developers have designed technologies dedicated to opposing inequalities and bias, ranging from AI and robotics to data sets and servers (Mauro-Flude and Akama, 2022; Richterich, 2022; Toupin, 2024). These projects integrate activist components, with designers notably aiming to inspire forms of digitally facilitated resistance and activism. In turn, such activism tends to involve a critical or proactive engagement with data, automation and digital technologies—which has also been conceptualised as ‘data activism’ (Lehtiniemi and Ruckenstein, 2018; Milan and Van der Velden, 2016) and ‘data feminism’ (D’Ignazio and Klein, 2023).
Technofeminism and intersectionality
Feminist design of technology is often—implicitly or explicitly—inspired by technofeminism and intersectionality. Conceptually, we therefore link our analysis of feminist bots to technofeminism (Wajcman, 2004) and intersectional theory (Cho et al., 2013; Crenshaw, 1989; Van Herck, 2021). Technofeminism, starting from Wajcman’s work, is used for its analytic capabilities regarding the mutual shaping of gender and technology. Avoiding both essentialist and determinist understandings of sociotechnical development, technofeminism highlights that technology is shaped by and shapes gendered social relations (see also Faulkner, 2001; Wyatt, 2008).
Wajcman’s (2004, 2006, 2011) work on technofeminism calls attention to how gender figures into sociotechnical (in)equality, while avoiding essentialising technology as a source of such inequality. She deliberately refrains from labelling her technofeminist work as theory. Rather, Wajcman presents it as an approach and analytical lens, which is the position we take in this article. Fusing ‘the insights of cyborg feminism with those of the social shaping, or constructivist, theory of technology’ (Wajcman, 2004: 8), technofeminism highlights that technology is not inherently patriarchal but that it can be used to represent and promote the interests of marginalised groups. At the same time, it stresses that gender and technology are co-constructive and that mainstream technology has been dominated by social groups that systematically marginalise women’s aims, interests and needs. In earlier work, Wajcman (2004) pointed towards issues of intersectionality in acknowledging that ‘gender is not the only axis of social hierarchy and identity’ (p. 8). However, this somewhat glossed over how social markers other than being a woman (e.g. race, class, ethnicity, sexuality and age) influence societal hierarchies, inequalities and privileges. This issue has been addressed more recently, stressing the need to relate technofeminism to intersectional deliberations (Bassett et al., 2019; De Hertogh et al., 2019).
Intersectionality, which evolved in the context of Black feminism, legal studies, and critical race theory (Crenshaw, 1989, 2017), emphasises that the interplay of different social identities (like age, class, caste, race, or sexuality) yields layered effects of discrimination and privilege (Cho et al., 2013; Collins and Bilge, 2020; Nash, 2008; Shields, 2008). Intersectional approaches oppose single-axis thinking for understanding social (power) relations. They stress the need to consider the nexus of factors like race, gender, class, sexuality and/or nationality, to understand ‘dynamics of difference and sameness’ (Cho et al., 2013: 787). Following an understanding of intersectionality as a structuralist theory, the concept’s relevance lies in ‘characterizing the system of social stratification’ (Marecek, 2016: 177).
In this article, we combine intersectional and technofeminist lenses to help us theorise the design and communication of feminist bots. Feminist bots lend themselves to such a technofeminist and intersectional analysis because the designers and activists involved in their development aim to use technology in ways that support feminist politics and to stimulate users’ interest and engagement in feminist activism. Ultimately, their aim is to imbue technology with feminist values. From a perspective that is both technofeminist and intersectional, it is hence interesting and relevant to interrogate how this plays out in practice. By relating intersectional and technofeminist insights to our cases (see next section), we aim to shed light on how developers have translated respective values and related controversies into automation design and communication. Simultaneously, we use this theoretical underpinning to point out limitations in designing chatbots that speak to intersectional, feminist values.
Choosing cases and methods
Chatbot designs and the methods for studying them are feminist issues. Feminist chatbots have emerged as part of a broader, yet still rather niche, movement of developers aiming to integrate feminist values into digital technologies such as servers and robots (Iyer, 2021; Kee, 2017; Mauro-Flude and Akama, 2022; Toupin and Couture, 2020). Wishing to contribute to a more nuanced understanding of such feminist technology design, this article presents a conceptually guided analysis of feminist chatbots, rooted in a cultural studies paradigm. We pursue a cultural studies approach to analyse feminist chatbots, interweaving case analyses with intersectional and technofeminist insights. Case study approaches based on (digital) humanities methods are widely used in qualitative media research and related fields such as STS and sociology (Brennen, 2021; Gerbaudo, 2022; Judge, 2022; Lehtiniemi and Ruckenstein, 2018). As an explorative approach, case studies and other small-scale qualitative methods can be generative of valuable theoretical and methodological insights to guide research, design and intervention. Neither aiming nor usually allowing for generalisable findings, case studies are nonetheless crucial in exploring opportunities and risks associated with emerging technologies (Sharon and Zandbergen, 2017; Shelby et al., 2021). This makes them vital to technology assessment and as a basis for developing more comprehensive research agendas (Schot and Rip, 1997).
Feminist chatbots are the empirical focus of this article. Analytically, we draw on a combination of technofeminism and intersectionality (outlined above) to interpret three examples:
F’xa is a chatbot developed by the non-profit collective Feminist Internet in collaboration with London-based design agency COMUZI. Realised as a standalone webpage, it aims to offer a ‘feminist guide to AI bias’ (https://f-xa.co). With predefined answers and questions, this rule-based bot engages in an automated dialogue about bias in artificial intelligence. Moreover, it suggests approaches to limit such bias, such as strengthening diversity among technology developers. Its development was informed by Josie Young’s (2017) Feminist Chatbot Design Process and the Feminist Internet’s (2017) Personal Intelligent Assistant.
Betânia (hereafter referred to as Beta) was a Facebook Messenger chatbot developed by the Brazilian NGO Nossas (Cidades). Published in Portuguese, the application ‘monitored the processing and voting of bills that addressed women’s rights, in particular sexual and reproductive rights’, mainly targeting the National Congress of Brazil, the federal government’s legislative body (https://www.beta.org.br/#block-37558). Users had to sign up to receive messages from Beta via Facebook Messenger. For example, when detecting that bills further limiting women’s rights were to be passed or voted on, Beta contacted these users and issued a call for action. With more than 160,000 registered users, the bot issued calls for action that included asking users to contact senators to block the constitutional amendment PEC-29 (2015), which would inter alia have prohibited abortion even on grounds of rape. In Brazil, abortion has been criminalised since 1890, with exceptions only in cases of incest, rape, and pregnancy endangering a woman’s life (introduced in 1940).
Gender Pay Gap Bot is a Twitter chatbot developed by Francesca Lawson and Ali Fensome. It was launched on International Women’s Day 2022, when it started responding to tweets posted by UK companies on, for example, advances in and support for women’s rights. It quote-tweeted their posts, commenting with data on the organisation’s median gender pay gap. The automated posts are triggered by keywords, such as ‘empowerment’ or ‘celebration’, and draw on public UK government data (https://gender-pay-gap.service.gov.uk/). The bot is still active at the time of writing, in late 2023.
These three cases have been purposively sampled (Etikan et al., 2016; Patton, 1990, 2014). Purposive sampling follows a logic of selecting ‘information-rich cases’, which are considered notably insightful to make sense of ‘issues of central importance to the purpose of the research’ (Patton, 1990: 169). We focused on a combination of purposive sampling which Patton (1990: 177) describes as theory-based/operational construct sampling and maximum variation sampling. Our selection of cases thus proceeded in two steps. We initially sought chatbots ‘on the basis of their potential manifestation or representation of important theoretical constructs’ (Patton, 1990: 177ff.), specifically looking for their relevance to feminist values and intersectional insight. We identified 11 chatbots that were explicitly described as ‘feminist’ by their developers, and/or support (intersectional) feminist politics and aims (Appendix 1, Supplementary file).
From this list, we chose the three that showed most variation (maximum variation sampling) while still belonging to the broader category of ‘feminist chatbots’. We sought variation in terms of the digital platforms used to realise these bots, as well as in activist strategies and (as far as possible) geographic-cultural context. A main reason for this approach was that the activist strategies enacted with the help of the bots seemed closely linked to the technological affordances of their host platforms. We acknowledge that other patterns and ways of ‘categorising’ feminist chatbots will likely arise as more initiatives develop in this field. At this point, however, we considered a bot’s technical realisation, that is, the website or social media platform on which it is based, an insightful starting point for selecting cases, as it seemed to co-define (and limit) strategies for automating feminist activism. We therefore opted for one standalone, one Twitter and one Facebook chatbot, allowing us to analyse the kinds of issues and modes of communicative interaction these bots facilitate and the role played by the platforms hosting them. To develop a corpus for our analysis, we collected online primary sources (Appendix 2, Supplementary file), which we interpretatively analysed by relating our insights to technofeminism and intersectionality. Next to observing and interacting with the bots as such, as far as possible, and analysing their mostly rule-based, automated communication, we collected and analysed documents instructing the design work on these bots (this work was conducted between October 2022 and February 2023, though some of the documents predate this).
Analysis and discussion: do feminist chatbots have politics?
Feminist chatbots such as F’xa, Beta, and the Gender Pay Gap Bot (GPGB) explore and illustrate how feminist politics may be translated into computational objects and infrastructures. They yield human–machine interactions aimed at making users reflect upon, and ideally become involved in, feminist activism. Based on our analysis of primary sources concerning these chatbots (see Appendix 2, Supplementary file), two main themes emerged, each of which is discussed below. The first theme concerns ‘automated communication meets technofeminist activism’. We analyse the communication created when addressing the bots, as well as the machine-to-human and machine-to-machine interplay involved in their use. We label the second theme ‘platform dependencies and the limits of automated intersectional feminism’. Here, we reflect on issues related to the bots’ technical realisation, considering challenges and shortcomings in automating (intersectional) feminist concerns.
Automated communication meets technofeminist activism
For the chatbot F’xa, the Feminist Internet Collective wrote a predefined dialogue, with some variation in possible answers and sequencing. Accessed via an independent, standalone website, the unfolding human–computer interaction aims to be informative and educational, with a focus on how bias affects the development and use of AI technologies. It is presented as ‘Your feminist guide to AI bias’, and the chatbot starts with some background information and instructions on how to interact: ‘Hey F’xa here / Click the black bubbles throughout the experience with F’xa! / F’xa has been created by a team from the Feminist Internet with different races, genders, gender identities and ways of thinking / That makes this bot different from the others as AI teams are not usually this diverse / Only 22% of the people building AI right now are female’
By using automation to reflect critically on the risks and issues related to automation, F’xa relates back to the technofeminist baseline that technology as such is not patriarchal, even though it might have this effect due to biases underlying its production (Wajcman, 2004). This link is no coincidence but rather part of the project’s explicit aims, as the designers were building on Young’s (2017) guidelines for ‘Designing Feminist Chatbots’, which in turn build on technofeminism to interrogate the ‘relationship between Artificial Intelligence-based chatbots, gender stereotypes and gender power dynamics’ (p. 1). The chatbot not only discusses but technically ‘embodies’ the argument that technologies may be used to represent the interests of marginalised groups. It refers to itself as an example of AI technology, even though it is a rule-based chatbot presenting a predefined script.
F’xa starts by revealing some of its inner workings. It outlines the principles guiding its development, highlighting, for example, that it is not collecting any data and that its developers drew on Young’s (2017) work. After a broader introduction to the issue of bias in AI systems, users can choose between learning more about bias in the form of ‘gender bias’ or ‘accent bias’. They can opt to read about AI in domains such as recruitment, voice assistants and search engines (search neutrality). Users do not enter personalised answers/questions; instead, they click on bubbles in response to the bot’s explanations. When different answer choices are possible, these often present the same or similar responses, represented either by a line of text, for example, ‘I see’, or by an equivalent emoji. Moreover, when presented with single-choice answers, these are normative, for example, responding, ‘That does not sound very inclusive’. Information on statistics regarding trust in search engines, and on issues raised by authors such as Noble (2018), is followed by ‘calls for action’, such as using alternatives to Google.
F’xa emphasises intersectional issues in AI development, particularly regarding search engine bias and voice recognition. Pointing readers to authors such as Noble (2018), the bot discusses how and why Google search results are the outcome of systemic racism, for example, depicting results for prompts such as ‘women’s professional hairstyles’ which only show White women. The conversation on voice recognition explains how non-native English speakers are excluded from the development process of audio databases, being less well understood by voice recognition technology as a result. Such parts illustrate how the bot’s creators aimed to communicate a more nuanced, intersectional understanding of biases in technology development. For example, it does not merely talk about women as a single allegedly homogeneous group but instead aims to alert the user to issues concerning particular marginalised (groups of) women.
In contrast, the feminist chatbot Beta emphasised legal developments affecting women in Brazil more broadly. Created by the non-profit organisation Nossas, it was introduced on Facebook with the summary: ‘I’m Betânia, but you can call me Beta—the newest ally of networked women. I was programmed to viralize feminist agendas and campaigns. Together, let’s reprogram this stratified system made by men and for men’ (https://www.facebook.com/beta.feminista) [translated from Portuguese by the authors]. Nossas built Beta to communicate its monitoring of legal developments in the National Congress of Brazil (NCoB), with a focus on women’s rights—which are notoriously limited in Brazil regarding, for example, women’s access to legal abortion. The non-profit aimed to create a digital tool that could call for and strengthen civic interventions in these matters. As mailing lists were becoming increasingly ineffective in activating the public to contact politicians or sign petitions, Beta was designed as an alternative to encourage civic activism regarding women’s rights, with considerable success. In an interview, the creators of Beta explained that shortly after its launch, ‘[w]ithout any sponsorship of posts to date, we have 23,000 likes on the page, we have sent more than 34,000 emails to each of the deputies responsible for voting on PEC 181 — our first campaign — and more than 56,000 people have already spoken with Beta via Inbox’ (Calado, 2018).
PEC 181-A/15, also known as the ‘Trojan horse PEC’ in Brazil, initially referred to a law guaranteeing minimum maternity leave; but it included an addendum rephrasing human dignity as dignity upon conception, which would have further limited women’s legal access to abortion.
The bot not only provided information on such legal issues but also issued calls for action and instructed users on how to object to developments that would further diminish women’s rights. Instead of following a fixed script, as in the case of F’xa, Beta was expanded and adjusted over time, starting with one script explaining it as a tool and another guiding readers through active campaigns. Nossas monitored and selected emerging legal issues, which were then circulated via communication written for the bot. Such messages were distributed to registered users via Facebook Messenger and inbox, a feature that was (temporarily) enabled by Facebook, allowing bot activity on its Messenger Platform (Marcus, 2016; see also Toupin and Couture, 2020). While thus being automated, Beta’s feminist activism did not end with the bot’s communication but extended to the activity of human users (after) interacting with it. Drawing on public data concerning the NCoB and issuing calls for action to its human users, the bot is therefore best understood as a blend of technofeminist data activism and, to some extent, hashtag activism. Milan and Van der Velden (2016) define data activism as ‘new forms of civic engagement and political action’, including a ‘range of sociotechnical practices that interrogate the fundamental paradigm shift brought about by datafication’ (p. 57; see also Renzi and Langlois, 2015; Schrock, 2016). They differentiate between ‘reactive’ and ‘proactive’ data activism, characterised by ‘contentious attitudes such as obfuscating and resisting vs embracing and making the most of datafication’ (Milan and Van der Velden, 2016: 66). Technofeminist bots like Beta resonate with the latter category, as they combine access to public data on legal developments with automated communication of activist goals and calls for action.
Hashtag feminism comes into play here too, as hashtags like #paremapec29 were mobilised across different social media platforms, to call attention to individual campaigns (Kermani and Hooman, 2022; Myles, 2019; Williams, 2015).
Data activism is, to some extent, also relevant to F’xa, as it presents the reader with statistics on, for example, the percentage of women active in AI development. Both data activism and hashtag feminism appear, moreover, crucial to the GPGB. In both F’xa and Beta, data are collected, curated and embedded into the automated communication by human writers, whereas the GPGB further automates such approaches. The GPGB runs in the form of a Twitter account and incorporates a data-activist component facilitated by machine-to-machine interaction. The bot’s tweets draw on three main, human-written script variations: ‘In this organisation, women’s median hourly pay is [x]% lower than men’s./In this organisation, men’s and women’s median hourly pay is equal./In this organisation, women’s median hourly pay is [x]% higher than men’s’. Upon detecting tweets from UK companies containing keywords such as ‘empowerment’, ‘women’, ‘women’s pay’ or ‘celebration’, the bot searches the public Gender Pay Gap database, which publishes statistics on the average pay of men and women, required from organisations with 250 or more employees. After scraping the data, the bot embeds the information in a tweet. Hereby, the bot automates the public identification of companies’ (often) hypocritical approach to women’s rights, contrasting their praise and alleged support for women on Twitter with data on employees’ gendered wage gaps. Hashtags like #GenderEquality and #PayGapDataDay make the bot notably relevant to hashtag feminism too.
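The GPGB’s trigger-and-reply mechanism described above can be sketched in a few lines of code. The keyword set and script variations are taken from the description in this article; the company names, pay-gap figures and function names are hypothetical, and the real bot queries the live UK Gender Pay Gap database rather than a hard-coded table:

```python
# Hypothetical sketch of the GPGB's reply logic, under the assumptions
# stated above; not the bot's actual implementation.
from typing import Optional

TRIGGER_KEYWORDS = {"empowerment", "women", "women's pay", "celebration"}

# Stand-in for the public Gender Pay Gap database: median hourly pay gap
# in percent (positive = women paid less, negative = women paid more).
PAY_GAP_DB = {"ExampleCorp": 12.5, "EqualCo": 0.0, "FlipLtd": -3.0}

def is_trigger(tweet_text: str) -> bool:
    """Return True if a company tweet contains any of the trigger keywords."""
    text = tweet_text.lower()
    return any(keyword in text for keyword in TRIGGER_KEYWORDS)

def compose_reply(company: str) -> Optional[str]:
    """Fill one of the three human-written script variations with data."""
    gap = PAY_GAP_DB.get(company)
    if gap is None:
        return None  # company not in the database (e.g. under 250 employees)
    if gap > 0:
        return (f"In this organisation, women's median hourly pay is "
                f"{gap}% lower than men's.")
    if gap < 0:
        return (f"In this organisation, women's median hourly pay is "
                f"{abs(gap)}% higher than men's.")
    return "In this organisation, men's and women's median hourly pay is equal."

# Example: a company tweet about women's empowerment triggers a data reply.
if is_trigger("Celebrating women's empowerment at ExampleCorp!"):
    print(compose_reply("ExampleCorp"))
```

The sketch illustrates why the approach counts as machine-to-machine data activism: the human contribution ends at the script templates, while detection, lookup and publication are automated.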
All three bots strongly reflect technofeminist insights, with technofeminism explicitly mentioned as a source of inspiration by the developers of F’xa. They all employ automated communication as a tool for proactive technofeminist activism, specifically data activism, and resistance, rather than considering AI as inherently patriarchal and beyond human agency and change. They do this in two ways. First, on the level of communication, that is, through what is said: their automated communication scrutinises bias in AI, triggers civic activism on women’s rights in Brazil, and alerts users to a tenacious pay gap between women and men in the United Kingdom. Second, they highlight the unusual diversity within their production teams, in contrast to the commonly (White-) male-dominated mainstream design and development teams. Wajcman’s (2004) technofeminist work stresses that ‘[t]he association between technology, masculinity, and the very notion of what constitutes skilled work is still fundamental to the way in which gendered division of labour is being reproduced today’ (p. 27). F’xa explicitly addresses this through its automated script on bias and the lack of diversity in AI production. This also reflects the fact that technofeminism strongly influenced Young’s (2017) reflections on designing feminist chatbots, which in turn informed the production of F’xa. The chatbots further embrace this by emphasising the diversity of their design teams and by automating communication that foregrounds women’s rights and feminist struggles. They can therefore be conceptualised as technofeminist interventions ‘disputing the masculinization of computational culture’ (Bassett et al., 2019). Technofeminism rejects both technological essentialism and solutionism; thus, Wajcman emphasises the need to account for interpretative flexibility and social embeddedness.
Her work relativises Winner’s arguments on the politics of artefacts, echoed by DiSalvo’s (2014) emphasis on values in use: technology does not simply ‘have feminist values’ but may develop feminist agency in interaction with users. Consequently, this limits the degree to which the feminist values and politics communicated by the machine may persuade and involve the human user. We return to this in the next section as part of our reflections on the challenges of automating feminism through chatbots.
Platform dependencies and the limits of automated (intersectional) feminism
All bots described above are/were rule-based, conversing with humans via text that was, in turn, written by humans. Based on interviews with activists from ‘Nossas’, Toupin and Couture (2020) suggest that there are two main reasons why feminist chatbots like Beta tend to be rule-based and thus communicatively rather restricted. First, pragmatically speaking, a rule-based bot requires ‘less resources and time to develop, making its production more accessible to small collectives’ (p. 737). Second, and normatively speaking, a rule-based approach ensures that bots stay true to the feminist values they are meant to represent: the activists ‘considered it to be more immune to the influences of human actors’ (p. 737). Having learned from incidents like Tay’s emulation of racist and antisemitic communication, the designers of feminist chatbots tend to opt for automation approaches that are not prone to mirror behaviour hostile to feminist aims (see also Neff and Nagy, 2016). This precautionary approach is mindful of technofeminist insights concerning technologies’ tendency to perpetuate societal bias and inequalities: it effectively prevents the bot from communicating in ways that contradict the intended feminist approach. But it also rules out any discursive depth and spontaneity within the human–computer interaction—an issue that is key to feminist AI too (Toupin, 2024). F’xa, for example, limits users’ possible responses to predefined choices. These are normatively acceptable responses, merely indicating, for instance, that the user may be puzzled. Such a rule-based approach automates a level of agreement which stands in stark contrast to the contestation and disruption feminist activists tend to face in other online settings (Eckert, 2018; Kermani and Hooman, 2022; Williams, 2015). It thereby effectively protects the bot from communicatively deviating from a particular notion of feminist politics, yet it also rules out any disagreement and confrontation.
By enforcing agreement from the human counterpart, the bots become more likely to end up ‘preaching to the choir’, with those who oppose the normative assumptions behind the bots likely to withdraw early in their interactions (see also Wischnewski et al., 2024; and, on ‘persuasive chatbots’, Ischen et al., 2020). At the same time, even assuming not merely adversarial user attitudes towards the feminist chatbots, the rule-based approach pre-empts users’ experimentation and play with the bot (see also Shani et al., 2022).
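The rule-based design discussed above can be illustrated with a minimal, hypothetical sketch: each conversational state carries a fixed script and a closed set of response choices, so off-script input can never alter what the bot says. The state names and dialogue lines below are invented for illustration and are not F’xa’s actual script:

```python
# Hypothetical sketch of a rule-based chatbot of the kind described above.
# Every state has a fixed, human-written script and a closed set of
# user choices; anything off-script simply leaves the state unchanged.

SCRIPT = {
    "start": {
        "text": "Hi! Want to learn about bias in AI?",
        "choices": {"Yes, tell me more": "bias", "I'm a bit puzzled": "explain"},
    },
    "bias": {
        "text": "AI systems often reproduce societal bias in their training data.",
        "choices": {"Why is that?": "explain"},
    },
    "explain": {
        "text": "Partly because design teams and data sets lack diversity.",
        "choices": {},  # end of the conversation
    },
}

def step(state: str, choice: str) -> str:
    """Advance the conversation; unrecognised input cannot derail the script."""
    options = SCRIPT[state]["choices"]
    return options.get(choice, state)  # stay put on anything off-script
```

Because `step` only follows predefined transitions, hostile input of the kind that derailed Tay can never change the bot’s output; by the same token, as argued above, the design forecloses genuine debate and disagreement.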
Apart from such dependencies between the chatbots and their likely users, the discontinuation of Beta moreover points to platform dependencies affecting the production and maintenance of chatbots. Bots are still allowed on Facebook Messenger, but they are now limited in their ability to actively approach users: they may only message users who have contacted them within the previous 24 hours. Under these conditions, imposed by the design and use decisions of ‘big tech’, Beta’s activation strategies became impossible, and the bot was discontinued in 2020. Twitter still allows automated accounts, yet since January 2022, users have been required to label them as bots. Considering that the high number of automated accounts was used by Elon Musk to justify his attempt to backtrack on his acquisition of Twitter, the microblogging platform has an ambivalent relationship with its numerous bot accounts (to say the least). This ambivalence creates constant insecurity about whether automated accounts like the GPGB will be able to function in the future. F’xa, as a standalone website, appears better protected against such influences, although the bot will likely require maintenance to keep it aligned with browser updates. Thus, while rule-based communication may be more immune to discursive interference and normative corruption, feminist bots remain dependent on the usage conditions of major platforms and web tools, which may interfere with their feminist activism through ‘bot rot’ and technical malfunction.
F’xa is the only standalone application we analyse and also the only bot explicitly reflecting (on) intersectional insights. This is also related to the Feminist Internet Collective’s opposition to ‘big tech’ and its involvement in initiatives for broader feminist tech infrastructures (see also Mauro-Flude and Akama, 2022). Its automated communication refers to diversity beyond designers of different genders, and criticises bias in the accessibility of AI technologies such as voice assistants, highlighting race and ethnicity. As shown above, Beta and the GPGB focus on political struggles relevant to women more broadly. Intersectionality, in all its complexity, still appears to be a challenge for feminist bot design. Intersectionality might be communicatively explained by a chatbot, as in the case of F’xa’s reflection on bias in AI systems. Yet it remains unclear how intersectionality can be considered as a design choice for building technology. In other words, how can designers ensure that the bot is as accessible and as beneficial to users who experience different privileges and disadvantages due to the intersection of factors such as race, class, sexuality or gender? Authors working on algorithmic bias have questioned the extent to which technology, and specifically bots, in fact accounts for issues of intersectionality. Arora (2019) points to a ‘disjuncture between intent and practice’ (p. 37). Feminist bots aim to counter gender bias by (digital) design, yet Arora highlights that ‘[t]he challenge arises even prior to the technical embedding, as we need to decide whose version of feminism gets to be inscribed. By default, Western notions of feminism become the normative value system imposed on the rest of the world’ (p. 37).
What is more, this raises the question of the extent to which these chatbots perpetuate largely Western ideas of gender and power (see also Milan and Treré, 2019). Partly due to their dependency on platforms controlled by ‘big tech’ corporations, they struggle to challenge the concentration of tech power in the Global North/Minority World. It should, however, be noted that F’xa does reflect on related issues to some extent, for example, by commenting on English as a dominant language for AI training. The chatbot inter alia points out that this leaves non-native speakers at a disadvantage due to their accents, or even renders them unable to use certain services/technologies. Yet feminist chatbots’ consideration of issues relevant from an intersectional perspective still appears to be implemented rather superficially, for example, by addressing diversity among bot designers. This is the classic liberal feminist approach: involve people from diverse backgrounds in the design and engineering phase, but do not question the ways in which values are built into bots (and other technologies). Thus, while we argue that bots indeed can have feminist politics, such automated communication not only simplifies intersectional issues but also risks representing already dominant versions of feminism.
Conclusion
Automation will remain a feminist issue. We have shown how feminist chatbots aim to engage users in both technology critique and technologically facilitated (data) activism. We have reflected on the potentials and limitations of feminist activism through bot design, which also speak to challenges to be addressed by feminist automation efforts in the future. Conceptually, we linked our analysis of three chatbots to technofeminism (De Hertogh et al., 2019; Wajcman, 2004) and intersectionality (Cho et al., 2013; Crenshaw, 1989). Technofeminism highlights that technology is not inherently patriarchal but can be used to represent and promote the interests of marginalised groups. Intersectionality, which evolved in the context of Black feminism and critical race theory, emphasises that overlapping identities beyond sex and gender, including race, ethnicity, class, sexuality, age and physical capabilities, yield layered effects of discrimination and privilege.
Building on technofeminism and intersectionality, we highlight that automation as a feminist issue should not revolve mainly around gender inequalities but should also consider technologies in relation to, inter alia, race, class and sexuality. We argue that feminist chatbots make a start at exploring alternative visions for automation that defend the interests of otherwise marginalised groups: by engaging users in automated communication on technofeminist and (to some extent) intersectional concerns regarding automation as such, and by facilitating their involvement in feminist (data) activism for women’s rights. However, the economic and sociotechnical conditions of their production and use limit the possibilities for discursive engagement with users. As most of these chatbots are rule-based, there is little space for actual debate and critical, human engagement. In addition, we noted limitations concerning the extent to which issues of intersectionality can be translated into the design of bots and automated communication more generally. These limitations arise partly because of the power of ‘big tech’ in providing the platform infrastructure on which most of these bots depend.
Our analysis thus yields two main findings. First, if we accept that automation has politics (Winner, 1980), then technologies of intersectional feminist politics are certainly among them. To what extent such politics have actually been incorporated into automation and automated communication is another question. Starting from the argument that technology and society still have much to learn—and to gain—from feminist values, designers have engaged with the question of what feminist automation might look like. Such design explores how feminists’ political struggles can be translated into computational objects and infrastructures. It is, however, a niche development, and intersectional technofeminism is still far from having much impact as a principle for ‘mainstream’ automated communication. Second, even within this niche, intersectional considerations may end up being implemented rather superficially, for example, by addressing social diversity among bot designers. Thus, while we argue that bots indeed can have feminist politics, the designers of feminist chatbots still face challenges, notably in avoiding the simplification of intersectional issues, moving beyond representations of already dominant versions of feminism, and addressing the structural constraints imposed by platforms and infrastructure dominated by private ‘big tech’ companies.
Supplemental Material
sj-docx-1-nms-10.1177_14614448241251801 – Supplemental material for ‘Feminist automation: Can bots have feminist politics?’ by Annika Richterich and Sally Wyatt, New Media & Society.
Funding
The author(s) received no financial support for the research, authorship and/or publication of this article.
