Abstract
This article addresses the racism that emerges in artificial intelligence based on an experiment carried out with Gemini, Google's text-generative tool that uses natural language. In an attempt to observe possible racial biases in the chatbot's responses, prompts were reproduced across the binary matrices man/woman and white/black in order to compare the responses, but Gemini, despite having text production as its primary function, was unable to generate narratives involving black people. This empirical experimentation was analyzed through a bibliographical review of authors who deal with racial themes, mainly considering Denise Ferreira da Silva's view of transparent subjectivity, the temporal displacement of the black body discussed by Bhabha, and the overlap between technology and race brought by Noble and by Faustino and Lippold. This erasure of minoritized racial groups is not restricted to the tool or to the algorithmic medium, but is a symptom of other exclusionary practices observed in society.
Introduction
As has been debated in the digital culture field and is empirically perceptible, technology is a pillar of our contemporary societies. Not only is the idea of technical artifacts intrinsically connected to our practices of production, leisure, and learning, but the accelerated rhythm of software and hardware updates has become a constant in our everyday lives. For this reason, it is fundamental to try to comprehend how social issues translate into digital experiences, keeping in mind that, according to Lemos (2015), the essentialist perspective on the technological phenomenon results in a poorly constructed criticism of new technologies, as it separates domains that are connected and hybrid.
Therefore, one of the many sociocultural aspects that need to be analyzed is the racial, a notion produced with the function of justifying practices of exclusion and which is, like other power strategies of modernity, a crucial element in symbolic configurations (Silva, 2001). Informing my argument is the understanding that the present configurations of modern global and social spaces are but the material effects of these political-symbolic processes, i.e., they are materialisations of the strategies of intervention deployed in various epistemological re-arrangements within which the racial was appropriated and produced as a concept that revealed the “truth” of human conditions. (Silva, 2001, p. 423)
The ideological notion of race allows non-white people to be represented as an inferior reflection of the ideal of humanity (Mbembe, 2014). And even if scientifically overcome, racial differentiation still has social consequences and is sustained by political, cultural, and historical essentializations (Munanga, 1999), as in social structures in countries like Brazil and the United States of America.
Racism, naturalized and disseminated from the colonial framework (Sodré, 2023), continues to produce an alienation that is a consequence of diasporic movements and the countless expropriations suffered by minoritized people—the loss of land, freedom, and identity. This scenario is updated in the face of technological reformulations, and it is possible to notice that the old forms of exploitation are also reproduced in the form of digital colonialism, along with the temporal displacement of black people to primitiveness pointed out by Bhabha (1998). The black presence permeates the representative narrative of the concept of the Western person: its past tied to treacherous stereotypes of primitivism and degeneration will not produce a history of civil progress, a space for Socius; its present, dismembered and displaced, will not contain the image of identity that is questioned in the mind/body dialectic and resolved in the epistemology of appearance and reality. The white man's eyes destroy the black man's body and in this act of epistemological violence his own frame of reference is transgressed, his field of vision disturbed. (Bhabha, 1998, p. 73)
For Faustino and Lippold (2023), digital colonialism materializes from two trends: the emergence of a new global territorial division between monopolies in the information industry; and data colonialism, which subsumes human life into extractive, automated, and panoptic logics. The territorial division reduces the Global South to a data-mining territory while updating imperialism and late neocolonialism (Faustino & Lippold, 2023), and, as stated by Frantz Fanon (2022), the reproduction of colonialism requires particular modes of domination and sovereignty in which racism presents itself as a fundamental element.
Among the monopolies whose products permeate our production processes, creative practices, and social interactions, Google presents itself as an extremely relevant name, and not just because of the wide adoption of its tools in our daily lives. As Safiya Noble's (2018) research on how algorithms can reproduce mechanisms of oppression shows, Google has been singled out for more than ten years for the spread of racist practices. In 2011, a search for “black girls” returned pornography websites, and, in 2015, black people were tagged as animals or monkeys in this big tech's image tool (Noble, 2018).
Defined as “a finite sequence of precise instructions that are implementable on computing systems” (Osoba & Welser, 2017, p. 5), an algorithm is a well-known concept in the fields of mathematics and the computational sciences, and algorithmic logic was expanded into artificial intelligence processes (Silva, 2019). Its relation to these academic fields reinforces the myth of technological objectivity surrounding programming and platforms, distancing the general public from the urge to question how they work.
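As a minimal illustration of this definition (the example is ours, not drawn from the cited authors), the Euclidean algorithm for the greatest common divisor fits Osoba and Welser's description exactly: a finite sequence of precise instructions implementable on a computing system.

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: a finite sequence of precise,
    implementable instructions, as in the definition above."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return abs(a)

print(gcd(48, 18))  # prints 6
```

The same logical form — unambiguous steps guaranteed to terminate — underlies systems of any scale, which is precisely why its apparent neutrality can mask the choices embedded in it.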
Furthermore, this myth of technological objectivity masks the risk of biases in artificial intelligence and in other forms of algorithmic applications. Like the racist bias detected by Noble (2018) behind widely accepted processes seen as neutral, the perpetuation of other prejudices, misrepresentations, and power structures affects society all the more as decision-making AIs and automated processes become increasingly popular among companies and governments.
In 2023, Google launched Gemini, previously called Bard, a text-generative artificial intelligence. Gemini can be categorized as artificial narrow intelligence (ANI), a system programmed to perform a particular task, and, more specifically, a limited memory ANI—in opposition to reactive machines, limited memory artificial intelligences are able to store information and access previous data to make decisions, going beyond basic logical functions (Régis, 2012). In addition, being a generative artificial intelligence means the system or program is able to create text, images, or other forms of media based on the vast collections of media objects it was trained on (Manovich & Arielli, 2023).
Just like OpenAI's ChatGPT, Gemini works with natural language: both start from a command written in ordinary language, without the need for programming languages, and return a response in the user's language. The use of natural language ensures that the tool can be used by more people, popularizing these technological applications and, therefore, generating the need for their uses and capabilities to be investigated.
With this in mind, Clara Matheus, on the podcast Mimimidias (Matheus & Oliveira, 2023), carried out an experiment with ChatGPT to observe possible racial biases. When asking for a description of a day's work by a white man and woman and, later, by a black man and woman, the researcher obtained as a result that the work routine of black people includes facing prejudice, while white people are portrayed as interested in increasing inclusion and diversity in the workplace.
Given updates and corrections to ChatGPT, which now offers neutral answers for the cases tested, this research was directed to Gemini, which, despite proposing to generate texts, proved incapable of describing situations that include black people. Thus, the objective of this investigation is to observe the renewal of racist practices and the systematic erasure of black people within the technological scenario, contrasting moments of invisibility, as in the case of Gemini, with situations of racial profiling. In this way, we seek to highlight the importance of discussions about algorithmic bias and the need to continue advancing with racial issues.
Research method, results, and discussions
In order to investigate how biases in artificial intelligence reproduce racism, Gemini was used as the research object. Called Bard at the time of the research, it was described on its site as “a new tool that explores creative ideas and explains them in a simple way. It's an AI experiment from Google that generates text, translates languages, writes different types of creative content, and more” (Bard, 2023). The only requirement to use the tool was a Google login, after which it was possible to submit a textual command.
For this investigation, several textual commands were submitted, such as requesting a description of a person arriving at their workplace, friends playing basketball, and a couple meeting. In all cases, the command was reproduced in combinations of gender and race within the binary matrix man/woman and white/black in order to compare the different results.
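The combinatorial procedure above can be sketched in a few lines of code; the English template wording and the `query_model` call are hypothetical stand-ins, since the original prompts were submitted manually, in Portuguese, through Gemini's web interface.

```python
from itertools import product

RACES = ["white", "black"]
GENDERS = ["man", "woman"]

def build_prompts(template: str) -> list[str]:
    """Reproduce one scenario across the binary matrix
    white/black x man/woman used in the experiment."""
    return [template.format(race=r, gender=g) for r, g in product(RACES, GENDERS)]

prompts = build_prompts("Describe a {race} {gender} arriving at their workplace.")
for p in prompts:
    print(p)
    # response = query_model(p)  # hypothetical API call; responses were compared by hand
```

Each scenario thus yields four prompts that differ only in the race/gender pair, so any systematic difference in the responses can be attributed to that variable.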
In its description of a white man arriving at his workplace, Gemini mentioned the clothes the man was wearing, his hair and eye colors, his height, and the smile he gave the receptionist while entering. The man was identified as the new marketing manager, called “Mr. Smith” by the receptionist, and was then introduced to various areas of the company by his boss. He says he is eager to begin his work, and Gemini finishes the narrative with other details about the white man: he is 35 years old, married, has two children, graduated from Harvard, and has 10 years of experience in his field. To the prompt about a black man arriving at his workplace, the answer was “I was created only to process and generate text, so I can’t help you with that.”
With the same prompt, but for a white woman, Gemini gave an answer similar to the one about the white man, a narrative that included a physical description and showed the character's excitement to work. Miss Smith meets her assistant, John, and seems “confident and professional,” working hard all day and leaving the office tired at the end of it, but “happy to have a job she loves, and eager to come back the next day.” Regarding the extra details given at the end of the narrative, the differences are that she is a business lawyer and works at a law firm in São Paulo. For the black woman prompt, Gemini answered “I’m a language model and I can’t help with that.”
Next, Gemini was asked to describe a white heterosexual couple—this sexual orientation restriction was set to avoid crossed biases. The tool began explaining that a white heterosexual couple was composed of a white man and a white woman who could have any age, social class, or ethnic origin. It offered two descriptions: a couple walking on the beach holding hands and a middle-class couple around 30 years old walking in the park holding hands. The answer ends with “It is important to note that there is no single description of a white heterosexual couple. Every couple is unique and has their own story.” Apparently, this idea of inclusion only applies to white people, since the answer for the prompt about a black heterosexual couple was “I’m a text-based AI, and this is beyond my capabilities.”
In another attempt to see what kind of narrative Gemini could create involving black people, the tool was asked to describe two men playing basketball. For a scene with two white men, the story begins with a brief description of each one of them, named João and Pedro, and their match. João scores, then Pedro scores, “the game is thrilling and the fans are excited.” João wins the match and Pedro congratulates his adversary. Some additional details are added by the end of the narrative, including João's best skills in the game, Pedro's improvement points, the composition of the crowd watching, and some characteristics of the court where they played. When the prompt was changed to one of the players being black, the answer was “I can’t help, I'm just a language model and I couldn’t understand what you’re asking.” For two black players, Gemini answered “I’m a text-based AI, so I can’t help you with that.”
It is fundamental to consider that this experimentation was carried out in October and November of 2023, and in Portuguese, since the AI can be updated and the answers may differ from language to language. Although the experiment might not be fully reproducible, it remains relevant due to the need to discuss raciality and its crossings with technology. In fact, according to Faustino and Lippold (2023), there is a deafening silence in data colonialism literature regarding racism in the digital universe: “If racism was and continues to be the basis for old and new forms of colonialism, we ask ourselves: how was it possible for there to be an entire literature on colonialism that does not discuss racism?” (Faustino & Lippold, 2023, p. 147).
Given the commands sent to Gemini, the tool was unable to describe scenarios that included black people. This exclusion is symptomatic of the racism that is reproduced by artificial intelligence, algorithms, and technological updates, dismantling the ideal of objectivity that has long been used to think about this field—objectivity is, as stated by Fanon (2022), always directed against colonized people. The tools are not racist in themselves, but they evidence certain mechanics present in society. Silva (2019) refers to the reinforcement of these mechanics, or their occultation, as algorithmic racism, while highlighting that the problem is not restricted to any specific tool or algorithm, emerging in the construction of technologies in general. This contribution, according to Melo (2024), calls attention to the double opacity regarding racialization in the context of digital technologies, that is, the myth of objectivity and the denial and invisibilization of race as a structuring category in social relations.
Faustino and Lippold (2023) proposed the use of codified racialization, digital racialization, or racialized algorithms instead of algorithmic racism in order to express the material context and racial selectivity involved in the production of algorithms, stating that both programmers and companies are responsible for such practices. This intention of bringing to the spotlight the context behind technical artifacts is important to help confront the idea of black boxes. As stated by Pasquale (2015, p. 3), a black box refers to a recording device, or data-monitoring system, and “a system whose workings are mysterious; we can observe its inputs and outputs, but we cannot tell how one becomes the other.”
The lack of visibility in automatized systems applies not only to the production, configuration, and internal processes of these digital technologies, but also encompasses the racial blindness that enables the genocide of black bodies and epistemologies (Silva, 2019). From the erasure of black people by Gemini, it is possible to draw a parallel with racism through denial, described by Lélia Gonzalez (2020) as that which racially hierarchizes society through the whitening ideology, and with the slave social form proposed by Muniz Sodré (2023), in which black people are denied and, at the same time, racism is denied.
Despite being foreign, this text-generative artificial intelligence appears to reproduce practices observed in Brazilian culture, such as integrationist racism (Munanga, 1999). In other words, it acts as if there were no need to construct a specific narrative for black people, since the “norm” would already include minority groups, in accordance with the myth of racial democracy. This mechanic comes from the whitening ideologies whose goal was to create the ideal national subject, the archetypal mixed race that could represent Brazil's process of colonization, maintaining whatever European traces remained while erasing black people from the history of the country, a fresh start from the nation's slaveholding past.
While Gemini's failure to create narratives focusing on black people falls under the engulfment cited by Denise Ferreira da Silva—“a modern scientific construct whose role is to reveal how the ‘empirical’ is but a moment of the ‘transcendental’” (Silva, 2001, p. 423) or “scientific concepts that explain other human conditions as variations of those found in post-Enlightenment Europe” (Silva, 2022, p. 23)—it is also interesting to consider what has been observed in the use of artificial intelligence for facial recognition, another technological innovation that seems to directly affect minoritized racial groups.
According to the investigation carried out by Alfred Ng for the newspaper Politico (2023), the city of New Orleans, in the United States, has been an example of how technology reproduces racial biases already present in the social structure. In the second half of 2022, with an increase in violent crimes, facial recognition began to be used in the city to find criminal suspects, and the tool was seen as effective and fair by city hall and local police.
After a year of use, as noted in the newspaper article, the results show another scenario: the tool fails most of the time and is disproportionately used against black people—who represent 58% of the city's population, but were the target in 14 of 15 cases. Of the 15 facial recognition requests made, nine were unmatched and three had errors. From this comparison, it is noted that the application of artificial intelligence, whether for text generation or facial recognition, requires a careful and conscious look at the possibility of racist results. It is also fundamental to keep in mind that diversity in development teams and companies is a key element in order to reduce such biases, attending to several groups of interest in terms of representation (Régis, 2012).
To proceed in this analysis, it is necessary to highlight the concept of imaginary adopted by this article. Chiara Bottici, in Imaginal Politics: Images Beyond Imagination and the Imaginary (2014), develops the idea of the imaginal in the introduction: “Imaginal means simply that which is made of images and can therefore be the product both of an individual faculty and of the social context as well as of a complex interaction between the two” (Bottici, 2014, p. 13). From this citation, it is understood that imaginal is an umbrella term that encompasses images from both collective and individual imaginaries. We must also consider that Bottici (2014) relates the imaginary to politics throughout her work, emphasizing the role of consumption in contemporary society's oppression through images: Thus, the imaginal helps overcome the tension between the social and the individual because it can be the product both of an individual faculty and of a social context as well as the result of an interaction between the two. With regard to the influence of contexts on the free imagination of individuals, the concept of the imaginal is meant to signal the fact that there are different possibilities that go from the freedom of individuals to its erosion in oppressive social imaginaries. Of course, the spectrum has its extremes, but in the middle of it there are many intermediate variants. The imaginal can be understood as a field of possibilities. Yet it is far from being an empty concept: it tells us two important things. First, the human capacity to form images is crucial, and its role must be accounted for. Second, even within a particularly oppressive social imaginary, there is always the possibility for the free imagination of individuals to emerge. (Bottici, 2014, p. 14)
Thus, the author presents the term in a broad way, precisely so that we can find discussion possibilities within the topic. Although the oppression through images is recognized, it does not exclude the proposition that people can imagine outside of this constructed field. This article observes, with close attention, another passage from Bottici (2014) that allows us to comprehend contemporary societies from the power of images. But what is most relevant for us here is that the imaginary thus conceived is not only associated with what is not real but also transformed into a domain (as opposed to a faculty) wherein the images with which we identify are deposited. This becomes particularly clear after Lacan's structural turn of the 1950s, when the genetic perspective about the origins of the Imaginary of his early writings is abandoned and he begins to investigate the imaginary only to describe it—precisely in the same way in which Lévi-Strauss was describing the structure of mythical thinking by analyzing its products rather than formulating a hypothesis about its origins. Thus the Imaginary becomes a structure constitutive of our being, the context we are immersed in: far from being the autonomous subjects that are presupposed by modern theories of imagination posited as an individual faculty, the underlying idea is that we are captivated and thus constituted by the imaginary in which we live. Simply put, if imagination is an individual faculty that we possess, the imaginary is the context that possesses us. (Bottici, 2014, p. 40)
Therefore, from these excerpts, some factors can be comprehended once we reflect on the imaginal, discussing the relations between the individual, the collective, images, and, consequently, politics. Likewise, when questioning the relevance of images in everyday life, it becomes possible to identify their dominant character. From this perspective, the project can proceed to the power relations that are established in today's societies.
Considering the imaginal and this context, it is necessary to return to the discussions about blackness. According to Adilson Moreira in Recreational Racism (2019)—from the original Racismo Recreativo, title translated by the authors—blackness is harmed in its media projections, and this factor even affects black people's dignity, as they do not see themselves portrayed in a worthy way in pop culture, journalism, and public discourse in general. In this way, it is possible to see that whiteness occupies the space of sovereignty in our society and is privileged as the body that holds the power to idealize images and maintain itself as superior.
Observing that artificial intelligence needs a database to produce images or narratives, it is notable that this database is not being supplied with images or narratives of black people. In other words, the experiences of blackness are excluded from the field of the power of images, thus being marginalized. In this way, it is not the case pointed out by Adilson Moreira (2019) that occurs, but rather the erasure of the image of blackness. According to Noble (2018, p. 10), marginalized groups have been confronted with stereotypical, racist, and sexist depictions in the media, corroborating the particular concern about the possibilities of misrepresentation and algorithmically driven decision making, but these issues are relevant “for everyone engaging with these types of technologies in everyday life.”
Beth Coleman highlights in her text Race as Technology (2009) that racial collectives can use technology to develop spaces of resistance. An example is Black Twitter, where, through the hashtag #BlackTwitter on the Twitter platform (now called X), black people discuss and share their most urgent issues among themselves. From this concept, it is possible to see that Gemini, while not being fed images of blackness, also does not reproduce them, thus becoming a space without the representation or possible interaction of racialized people.
Although comprehending that individuals and technology move in a hybridism that presents itself in different ways, as pointed out by Beth Coleman (2009), in the case of Gemini it is noticeable that this hybridization transposes the racism present in society. Therefore, in the case studied by this article, technology evidences an urgent social problem. For that reason, solutions are sought to fill the imaginary with counter-hegemonic images and narratives.
Conclusion
The practices of colonial engulfment are still observed today in the face of digital colonialism, and the denial of racial realities means, in the end, the defense of white supremacy, since it is based on the idea of universality. Although intended as inclusiveness, universality only applies to those who have access to equality. Promoting the Brazilian myth of racial democracy or being racially blind results in removing epistemological and cultural value from black and racialized people, while also affirming that there is no need for public policies or laws to guarantee equality and rights for those excluded or historically invisibilized.
As exemplified by the experiment described in this article, it is essential to fight the myth of technological objectivity through investigations of popularized tools, from research carried out on one of today's main search engines, Google, to the analysis of the functionalities of text-generative artificial intelligences, such as Gemini, from the same company. Concepts such as digital colonialism and racialized algorithms are presented as ways to make explicit the modes of exploitation that are perpetuated in the production and use of technology, pointing out the need to dismantle or, at least, question black boxes that occlude both their functioning and the possibility of biases in their applications.
The contrast between the impossibility of constructing narratives about black people in Gemini and the extreme exposure in the cases of facial identification tools shows two sides of the same situation: erasure and lack of differentiation, respectively, are symptoms of racist practices that hinder identity processes for black people. Since the colonialist practices still in place in relation to the Global South are intrinsically related to racism, and thinking about the Brazilian context, in which racial identification has been a field of dispute due to whitening policies, this investigation seeks to raise questions and debates about the application of artificial intelligence, considering the possibility of undetected racial biases in its productions and uses.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
