Abstract

This article is a part of special theme on The State of Google Critique and Intervention. To see a full list of all articles in this special theme, please click here: https://journals.sagepub.com/page/bds/collections/stateofgooglecritiqueandintervention
What questions should we ask of Google's Autocomplete suggestions? This article highlights some of the key ethical issues raised by Google's automated suggestion tool, which provides potential queries below a user's search box. Much of the discourse surrounding Google's suggestions has been framed through legal cases in which complex issues become distilled into black-and-white questions of the law. For example, does Google have to remove a particular suggestion, and does it have to pay a settlement for damages? This commentary argues that shaping this discourse along primarily legal lines obscures many other moral dimensions raised by Google Autocomplete. Building from existing typologies, this commentary first outlines the legal discourse before exploring five additional ethical challenges, each framed around a particular moral question in which all users have a stake. Written as a commentary, this article does not aim to conclusively answer the ethical questions raised, but rather to give an account of why these particular questions are worth debating.
Autocomplete's suggestions are not simply a mirror of what users are typing into Google's search bar. Google's official statement is that “Autocomplete is a time-saving but complex feature. It doesn’t simply display the most common queries on a given topic” but “also predict[s] individual words and phrases that are based on both real searches as well as word patterns found across the web” (Google, 2022). Both its underlying methods and associated terminology have changed over time, shifting between providing completions, suggestions, and predictions. As a result, the grounds for potential critique are ever-changing, which means that Google's approach to Autocomplete deserves significant scrutiny.
Legal objections: Defamation and liability
Questions about the content and perceived influence of Google Autocomplete have been raised through legal disputes, specifically legal challenges regarding defamation and liability. Accusations of defamation arise in response to Autocomplete suggestions provided for specific names that imply a negative association: for example, suggestions such as [fraud], [scam], or [bankruptcy] following the name of a company or person. These have led to a large number of court cases in which claimants have sued Google for damages to their reputation (Daskal, 2019). A key debate revolves around whether Google should be legally liable for the values implied in its suggestions (Karapapa and Borghi, 2015; Cheung, 2015). In practice, not only have courts around the world reached “different conclusions as to the liability of the search engine, the logic they follow in reaching a judicial outcome varies substantially to an extent that no solid judicial trend can be assumed” (Karapapa and Borghi, 2015: 263). Some courts have framed Autocomplete suggestions as (algorithmically produced) expressions of thought that can be defended on the grounds of free speech, while others have emphasised the automation involved, in order to treat suggestions as a technical process.1 Because legal perspectives have dominated the discourse, automated suggestions are often described as either reasonable or unlawful. However, beyond this, there is a range of ethical questions raised by legally permissible suggestions that have significant consequences for public discourse. This commentary raises five of these urgent questions.
Should suggestions avoid reframing the stakes of an enquiry? The issue of reframing
Any kind of suggestion will influence a user to some degree. As Boaz Miller and Isaac Record argue, “if a user looks at the screen, she can’t help but see the autosuggestions, and these impressions can affect her inquiry” (2017: 1949). Autocomplete suggestions frame how to consider particular ideas and their associated values. Even if a user's query does not change, this “involuntary exposure” may establish or reinforce even “unwanted beliefs” (1949) about the topic at an unconscious level. However, if a user does follow a suggestion, how significantly does Autocomplete's suggestion change the results page? For example, in 2016, journalist Carole Cadwalladr reported that Google's top Autocomplete suggestion for [Did the Holocaust…] was [Did the Holocaust really happen?]. Importantly, following the suggested query significantly altered Cadwalladr's results page compared with the search [did the Holocaust], to include Holocaust denial websites, such as “Top 10 reasons why the Holocaust didn’t happen” (Cadwalladr, 2016; see Mulligan and Griffin, 2018). Google's suggestion had therefore fundamentally reframed the stakes of the original query and led the user to a range of new sources, including proponents of Holocaust denial not provided in the results for the original query. There is thus a distinct ethical dimension regarding the consequences of where Autocomplete's suggestion leads.
How, then, should we evaluate the degree to which Autocomplete suggestions change a results page? As users, are we comfortable with suggestions that reframe queries, rather than just refining them? And if not, what about suggestions that could reframe queries in a positive way, for example, restating a Holocaust denial query in language that would lead to more mainstream sources? Such questions depend on our individual perspectives regarding whether we think Google should facilitate the intentions of users, or aim to reinforce a particular set of values. The epistemological and ethical function of Google's results has been discussed widely in search engine studies (see Halavais, 2018: 35, Introna and Nissenbaum, 2000: 169, and Hillis et al., 2012: 182), but such discussions rarely focus on the influence of Autocomplete in determining a user's query.
Should suggestions be personalised or contextualised? The issue of individualised suggestions
On its release, Google Autocomplete provided the same suggestions for everyone. However, in 2009, Google started to personalise and contextualise Autocomplete suggestions (Kadouch, 2009). This means that suggestions depend on a range of personalised factors such as location, previous search history, and other profiling data. Previous research has concluded that direct personalisation of search results is not as significant as Google's personalisation of advertising, but there is very little evidence regarding the degree of influence that personalisation has on Autocomplete suggestions. Jutta Haider and Olof Sundin explain this in the following way: “searches are definitely personalised in terms of geo-location of the searcher […] and while this makes a lot of sense if we consider the billions of results a simple search for pizza generates, this can potentially also be highly problematic [as] there are always assumptions about what type of people are residents of a certain town or area, and data to describe a location often includes average income, political leanings and so on.” (2019: 65)
What kinds of suggestions could be considered politically neutral? The issue of political neutrality
Google has a range of exemption policies for Autocomplete, in which suggestions are disabled for special categories of query. One key category, added in 2020, relates to “predictions that can be interpreted as: a position for or against any political figure or party” (Google, 2022). This policy is a response to long-standing journalistic claims that Google favours certain political parties over others in its suggestions, an accusation that was particularly influential during the 2016 US Presidential Election, as outlined in Caplan (2016). A key ethical question is, therefore: what are the political implications of Google's exemption policies? Disabling Autocomplete suggestions does not stop the tool from influencing election politics. First, disabling suggestions for the names of high-profile politicians does not lead to neutrality and instead obscures potentially relevant lines of enquiry, such as a candidate's previous actions or offensive statements. Doing so de-emphasises these issues and adds to the normalisation of extreme candidates in mainstream politics. Second, exemption policies ignore the fact that every suggestion is inherently political. Google Autocomplete still makes suggestions for topics that contribute to political attitudes, such as [are immigrants…], [abortion is…], and [legalising…], that can actively shape a user's line of enquiry and influence their political decision-making. Because suggestions are personalised and localised, it is difficult for researchers to provide a survey of such topics, let alone study the kinds of political attitudes that might be embedded in aggregate. In this regard, the ethical challenges raised by political suggestions cannot simply be addressed by Google's current approach of preventing suggestions for the names of politicians. Instead, a wider public conversation is required about the consequences of exemption policies and what topics, if any, should be exempt.
What should be done if suggestions reinforce biases? The issues of group stereotypes and aggregated discrimination
Safiya Noble's Algorithms of Oppression productively documents a range of offensive stereotypes found within Autocomplete suggestions. She highlights a now-infamous set of suggestions collected in 2013 for the query [why are Black people so…], for which Google suggested the words “loud, lazy, rude” (2018: 20). Marc van Gurp similarly documented a range of offensive and discriminatory suggestions for queries about other collective groups, such as [Jews are…] and [Muslims are…] (2013). Google's response to the publicity surrounding group stereotyping has been to disable suggestions from appearing for particularly egregious examples. However, by focusing on general nouns and hiding these particularly blatant instances, Google ignores the far larger range of query suggestions that might embody racist or antisemitic values; in doing so, its actions make discrimination more insidious.
Group stereotyping is deeply offensive. However, Google's approach of disabling Autocomplete for specific queries simply hides the discrimination within Google's datasets that led to the offensive suggestions in the first place. For this reason, I argue that a second type of stereotyping, which I term aggregated discrimination, represents a more insidious ethical challenge. In chapter 3 of Investigating Google's Search Engine: Ethics, Algorithms, and the Machines Built to Read Us (Graham, 2023), I present findings that show how Autocomplete's suggestions are sexist in ways that might not be clear on an individual level but are overwhelmingly misogynistic when evaluating a larger sample. Drawing on Florence Débarre's 2016 investigation, I discuss the findings of my 2021 study, which analysed the Autocomplete suggestions for the names of 2000 people in a variety of professions. The results show that Google's suggestions for women's names include significantly more instances of [husband], [wedding], or [married], regardless of their level of fame or type of profession. In some instances, this difference was more than four times as frequent for female names. These findings show that misogynistic attitudes are embodied in suggestions for real individuals and are not simply an issue for queries such as [women are…]. These suggestions embody sexist beliefs and, in tandem with the issue of reframing discussed in issue 1 above, potentially exacerbate existing misogyny: by encouraging users to follow these suggestions, they empower a perspective that considers male colleagues as professionals and female colleagues as mothers and wives.
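The logic of such an aggregated analysis can be illustrated with a minimal sketch. The term list, names, and suggestion data below are invented for illustration; this is not the study's actual code, term list, or dataset, and it counts only one simple pattern rather than the full range of comparisons described above.

```python
from collections import Counter

# Illustrative relationship-oriented terms (hypothetical, not the
# study's actual term list).
RELATIONSHIP_TERMS = {"husband", "wife", "wedding", "married"}

def relationship_rate(suggestions_by_name):
    """Return the fraction of collected suggestions containing a
    relationship-oriented term.

    `suggestions_by_name` maps a person's name to the list of
    Autocomplete suggestions collected for that name.
    """
    hits = Counter()
    total = 0
    for name, suggestions in suggestions_by_name.items():
        for suggestion in suggestions:
            total += 1
            words = set(suggestion.lower().split())
            if words & RELATIONSHIP_TERMS:
                hits["relationship"] += 1
    return hits["relationship"] / total if total else 0.0

# Invented example data for two groups of (fictional) names.
women = {"alice example": ["alice example husband",
                           "alice example books"]}
men = {"bob example": ["bob example books",
                       "bob example career"]}

rate_women = relationship_rate(women)  # 0.5 in this toy data
rate_men = relationship_rate(men)      # 0.0 in this toy data
```

Comparing such rates across groups of names is what makes the discrimination visible in aggregate, even when no single suggestion looks obviously sexist on its own.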
The ethical question at stake here is whether or not search engine companies have a duty to identify aggregated patterns of discrimination and work to actively combat them. Due to their level of influence, I suggest that they do have this duty; however, many might see this as a form of regulation that simply enforces a different set of biases.
How should users be allowed to influence suggestions? The issue of agency
This commentary concludes by posing one final question, which relates to the initial discussion of legal battles and builds upon the four previous ethical dimensions discussed above: How should users be allowed to influence suggestions? A key ethical challenge of Autocomplete is that its suggestions are constantly influenced by a range of dynamic factors, including the changing content of the web, regular user behaviour, and intentional attempts at manipulation. These changes can influence suggestions quickly, due to “what the company calls a ‘freshness layer.’ If there are terms that suddenly spike in popularity in the short term, these can appear as suggestions, even if they haven’t gained long-term popularity” (Sullivan, 2011). All web users unintentionally influence the content of Autocomplete suggestions, but there are also various sources of intentional influence, such as Search Engine Optimisation (SEO), the practice of optimising web pages to influence their ranking. Google actively encourages SEO by offering courses through Google Digital Garage, and in the US alone, SEO revenue in 2020 was estimated to be $80 billion (Allsop, 2022). Scholars have argued that “SEO is not (or no longer) understood as an optional method, but as a necessary standard activity for any website that seeks to achieve visibility” (Schultheiß and Lewandowski, 2021: 552). Because Autocomplete takes into account “word patterns found across the web” (Google, 2022), SEO's paid-for influence over search result ranking also impacts the nature of Autocomplete suggestions. Therefore, the vast economy of SEO should be considered when assessing the kinds of suggestions that become naturalised. Suggestions are not simply a negotiation between users and Google's algorithms, but are also shaped by the dynamics of wealth and influence that dominate the world wide web.
In addition, many of the legal rulings that have led to the removal of suggestions or changes in policy have depended on either huge private wealth or been led by governmental organisations. But should the average digital citizen have the right to influence the kinds of queries suggested by Google? Throughout the history of search engine studies, critics have demanded that search engines should be treated more like a utility or a public good than as a private product. Back in 2000, Lucas D. Introna and Helen Nissenbaum argued that “web-search mechanisms are too important to be shaped by the marketplace alone” (2000: 176). They go on to argue that search engines must work in the greater public interest toward what they call the ideal web. “This ideal Web is […] a platform for social justice. It promises access to the kind of information that aids upward social mobility; it helps people make better decisions about politics, health, education, and more. The ideal Web also facilitates associations and communication that could empower and give voice to those who, traditionally, have been weaker and ignored.” (181)
So, given that a wide range of stakeholders already influence suggestions, what rights does an individual user have? At present, the only agency given over to users is Google's “report inappropriate predictions” link, embedded in Autocomplete since 2017 (Gomes, 2017). However, this agency is limited, only giving users the power to identify which of Google's narrow content policies a suggestion violates. In doing so, users can only critique Google's suggestions on Google's terms. The company also actively limits social activism designed to scrutinise Google's role in perpetuating bias online, such as ROM's 2011 campaign to “sweeten up the Romanian image” (see van Gurp, 2011). This public campaign highlighted the negative Autocomplete results for [Romanians are…] and aimed to replace the derogatory suggestions by building a semi-automated system on their website that repeatedly searched [Romanians are smart] in a variety of languages. Rather than addressing the underlying issues raised by the campaign, Google simply identified ROM's website as breaching its terms of service and disabled all suggestions for the query [Romanians are…]. Such an outcome might be considered a success by some, but it demonstrates the lack of agency users have to critique the way Google suggests queries and the values that these suggestions imply. The core ethical issue is that users rely solely on Google's discretion as to what kind of influence is allowed, and the company has prioritised silencing critics rather than actively addressing concerns of discrimination and injustice. Just as Google provides us all with its suggestions, the public should have the right to provide them with ours: from our concerns about bias to our attitudes toward personalisation and all the ethical dimensions in between, these suggestions matter most of all.
Conclusion
This commentary aims to both reignite and redirect scholarship regarding Autocomplete, by arguing that there are multiple ethical dimensions raised by Google's automated query suggestion tool. In addition to legal evaluations, more work needs to be done in teasing out the specific moral dilemmas produced by suggestion tools. Some of these ethical dimensions relate to the content of suggestions, such as the discrimination raised in issue 4, while others concern their impact, such as the impact on results regarding Holocaust denial, discussed in issue 1. Some moral questions, including the use of personalisation and contextualisation highlighted in issue 2, relate to broader debates about the design of platforms to either encourage ideological segregation or aim towards a uniform informational experience for users around the world. In practice, most query suggestions raise several distinct ethical issues at once; for example, the questions regarding exemption policies raised in issue 3 have an influence on both the ideas presented in suggestions and the new directions Google's tool may lead users. Finally, issue 5 articulated the limited agency of the public, compared with the economic power of particular stakeholders to influence Google's suggestions, which dictates the future direction of Autocomplete. This commentary has highlighted discrete challenges, which are often overlooked or amalgamated, and aims to encourage further research into the various ethical issues raised by Google's Autocomplete suggestion tool.
Acknowledgements
I would like to thank all the attendees of the Google Critique workshop in Vienna, April 2022 for the stimulating discussions and generous feedback given on early drafts of this commentary.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
