Abstract
In early 2017, a journalist and search engine expert wrote about “Google’s biggest ever search quality crisis.” Months later, Google hired him as the first Google “Search Liaison” (GSL). By October 2021, when someone posted to Twitter a screenshot of misleading Google Search results for “had a seizure now what,” users tagged the Twitter account of the GSL in reply. The GSL frequently interacts publicly with people who complain about Google Search on Twitter. This article asks: what functions does the GSL serve for Google? We code and analyze 6 months of GSL responses to complaints on Twitter. We find that the GSL serves three functions: (1) naturalizing the logic undergirding Google Search by defending how it works, (2) performing repair in responses to complaints, and (3) drawing boundaries to contain critique. This advances our understanding of how dominant technology companies respond to critiques and resist counter-imaginaries.
Introduction
Google has long understood the importance of user trust to its success and has devoted significant efforts to bolstering trust. Researchers have studied Google developing systems to reduce spam in search results (Brunton, 2013), creating guidelines for website developers (Badouard et al., 2016), creating artificial intelligence (AI) principles (Greene et al., 2019; Jobin et al., 2019), establishing a quality rating program (Bilić, 2016; Meisner et al., 2022), and attempting to stand up an AI ethics advisory council (Piper, 2019).
Since 2016, the public has become increasingly mistrustful of US technology platforms, including Google. Journalists were increasingly documenting harmful search results on Google Search—from Holocaust denial search results (Cadwalladr, 2016b) to misogynistic autocomplete suggestions (Cadwalladr, 2016a). Google had previously weathered controversies around anti-Semitic search results (see the 2004 scandal around the query “Jew”; Vaidhyanathan, 2011), politically controversial topics (see “miserable failure” and “Santorum”; Gillespie, 2017), and racist sexualization (Noble, 2012, 2018). Former search engine journalist Danny Sullivan (2017a) reviewed criticism of search results from late 2016 and early 2017, calling it “Google’s biggest ever search quality crisis.” The combination of growing fears over online misinformation and the inflammatory nature of these harmful searches was a perfect storm and the context for a new organizational response from Google. In late 2017, Google tried a new form of intervention—the Google Search Liaison (GSL).
This article explores the work of the GSL to understand Google’s efforts to bolster trust. Google hired Danny Sullivan, the former journalist and search engine marketing expert, as the first GSL in 2017 (months after the article noted above). As we are discussing the activities of this Google role—nominally filled by one person with significant experience and reputation but surely supported, shaped, and supervised by many other people within the corporation’s rules and incentive structures—we will use “they” throughout as a third-person plural pronoun.
The GSL is a different type of organizational response from those noted above—it is a role that Danny Sullivan (2017c) describes as “help[ing] the public better understand how search works & Google better hear public feedback improve.” They welcome concerns and questions about Google Search on Twitter (Schwartz, 2019a, 12:06-12:20), and the GSL publicly shares the @DannySullivan handle (Schwartz, 2019b, 16:20), which is also included in the Twitter profile of the Google @SearchLiaison account. Unlike other corporate Twitter accounts that ask people tweeting complaints about services and products to send a private direct message (Einwiller and Steilen, 2015), the GSL publicly engages with users in the replies to their complaints. This article studies how the GSL responds to complaints and asks: what functions does the GSL serve for Google?
We analyze the GSL responses to complaints about Google Search on Twitter over a 6-month period in 2021. The GSL is exacting about search-related terminology and demands that complaints conform to Google’s technical definitions of search engine behavior. They request certain kinds of evidence—screenshots, details about time and place. The GSL appears eager to help users conform to the limits of search (e.g. suggesting reformulations of queries, telling them how to claim knowledge panels, or moderating expectations). The function of the GSL is to shape the articulation (Gillespie, 2014) of Google Search. We categorize the functions of the GSL as: naturalizing the logic undergirding Google Search by defending how it works, performing repair in responses to complaints, and drawing boundaries to contain critique.
Complementing prior research on “how search works” from both technical and social dimensions, our analysis furthers our understanding of how Google articulates and manages expectations of the product and ideology of search. Following the work of Hoffmann (2021), we “register an unease” with Google’s use of the GSL to resist complaints and extend “existing circuits of power and production.” We aim to “denaturalize and reconceptualize how information could be provided to the public vis-à-vis the search engine” (Noble, 2018).
Background
In this study, we examine the organizational response of the GSL to complaints about Google on Twitter. Search is a complex assemblage that extends beyond the algorithm itself to its articulation. The articulation from the company is in its promotional educational material, advertising, legal filings, labels on the interface, and various comments from company representatives. The GSL is one such company representative. Previous research indicates that the responses of the GSL may provide insights into how Google articulates the function and boundaries of search. As Gillespie (2014) explains, we should seek to understand the logics, assumptions, and values embedded in algorithms and “the social process by which it is made into a legitimate system.” This requires, he notes, looking at how algorithms “fall apart and are repaired when they come in contact with the ebb and flow of public discourse” (Gillespie, 2014).
Google Search Liaison
The GSL role was created by Google and shaped by organizational values, and the identity of the person recruited for the role, Danny Sullivan, reveals something about the functions the GSL is to perform. Danny Sullivan was brought into Google to do the work between searchers and Google—fostering and restoring trust via pseudo-transparent responses, explanations, and guidance. He is a public face of Google Search on Twitter, largely using his personal account and based in Google’s search team rather than Google’s communications and public affairs team (Schwartz, 2019a, 12:20). Google presents him, in the role of GSL, as the explainer-in-chief, as the “exclusive epistemic authority” (Cotter, 2021) on questions of what Google is and could or should be.
Sullivan has been described as the “father” of search engine marketing—by the time he retired as a journalist in 2017, he had been writing about search and search engine optimization (SEO) (or “search engine marketing”) for over two decades (Schwartz, 2017). His body of work includes articles identifying where Google failed or fumbled (such as in the case of “did the holocaust happen”). In addition to his work as a journalist, he founded multiple publications covering SEO and organized conferences for the SEO industry.
The role of the GSL has neither the legal structures of an ombudsperson nor the professional norms of a journalist. However, the title of the role, GSL, and Sullivan’s background, reputation, and manner of engagement allude to such notions of transparency, accountability, and public service. If Google were to recruit someone else for the role, that person might bring quite distinct characteristics or behaviors that would then reveal different aspects of Google’s approach to the articulation of web search. As it is, Google recruited Sullivan and has maintained him in this role for nearly 5 years.
When Sullivan tweets in response to searcher complaints, he does so from his personal account, @dannysullivan (the @SearchLiaison Google-branded account tweets links to press releases, explainer-threads, and statements rather than 1:1 user discussions).
The use of his personal account also appears designed to suggest independence. Sullivan describes his “personal mission statement” as the GSL:
to provide reasonable explanations as issues come up . . . Not as an excuse but to help people understand why something happened. If something has gone wrong, we explain why it went wrong. Otherwise, people assume things that didn’t happen. It’s about taking ownership over an issue that comes up, understanding how we’re going to improve it, and then actually improving it. (D’Onfro, 2018)
Complaint
Searchers’ complaints and the GSL responses provide insights into organizations and systems of power. As Ahmed (2021) explains, “Complaint provides a lens, a way of seeing, noticing, attending to a problem in the effort to redress that problem” (p. 24). Instructive for our analysis is Ahmed’s point that “to hear complaint can be to hear that silence: what is not being said, what is not being done, what is not being dealt with” (p. 7). As Ahmed writes, “We learn how institutions work,” what she calls “institutional mechanics, by how complaints are stopped,” and “not only how institutions work but how they are reproduced.” We follow the “productive potential” of complaint, “contending with the possibility that complaints,” with organizational responses, “may surface what was previously kept invisible” (Parikh, 2018).
Breakdown and repair
Complaints to the GSL can be viewed as claims of algorithmic breakdown and at times reflect efforts to inform or participate in the repair of the Google search engine. As in the example discussed in the work of Mulligan and Griffin (2018), where Holocaust-denying search results were highly ranked by Google for a seemingly simple query about whether the Holocaust happened, many of the searcher complaints describe breakdowns or failures (Akrich, 1992) arising from the technical artifact’s function failing to meet public expectations of search result trustworthiness.
We look at the organizational response through an examination of the discourse between the GSL and the complaining user. Looking at user complaints through the lens of breakdown is instructive: it exposes fissures between a system’s performance and public expectations of it. We are interested in the politics of the organization’s efforts to respond to those complaints. The lens of repair unpacks how the GSL’s responses redirect complaints, legitimating some and delegitimating others, and assign responsibilities for repair to particular parties. Like complaint, focusing on moments of breakdown (Bowker and Star, 2000) makes previously hidden logics and assumptions of systems visible for interrogation.
Paired with the complaint and breakdown in our research is the organizational response. We look at that response as potentially a type of repair work (Jackson, 2014), focusing on “the world-disclosing properties of breakdown, and the distinct epistemic advantages that can follow from moving repair (and repair workers) to the center of our thinking” (p. 226), recalling that “repair is not always heroic or directed toward noble ends, and may function as much in defense as in resistance to antidemocratic and antihumanist projects” (p. 233), and how “attention to maintenance and repair may help to redirect our gaze [. . .] to moments of sustainability and the myriad forms of activity by which the shape, standing, and meaning of objects in the world is produced and sustained” (p. 234). These complaints represent “critical moments” for ordering, governing search through an organizational response of “reflexive coordination” (Hofmann et al., 2017). We look at the repair or ordering of search as sociotechnical imaginaries “dominated” by company rhetoric (Mager and Katzenbach, 2021), rather than “by algorithms” (Mager, 2018).
Logics of Google
When Google evaluates search quality, they do not frame their criteria as journalists do, in terms of fairness and representativeness. Rather, the logics of search engines frame quality in terms of customer satisfaction and relevance (Van Couvering, 2007), though it increasingly appears that search engines, or at least Google, may be pursuing some opaque self-chosen notion of “societal relevance” (Haider and Sundin, 2019; Sundin et al., 2021). People trust that Google surfaces the relevant content toward the top of the search results page (Pan et al., 2007; Tripodi, 2018), partially due to Google’s own marketing and documentation (Vaidhyanathan, 2011). Tripodi (2022) argues that in the last decade Google has shifted its design from the “simple list of hyperlinked websites” to various rich-content elements that seem to conflate exploratory search with “queries focused on fact retrieval or verification” (pp. 115–116).
Our study builds on previous research on what searchers and Google itself expect from search. When searchers describe their expectations of Google, they often invoke notions that Google should be “neutral” (Odlyzko, 2009), trust Google to surface the best result at the top of the search result page (Pan et al., 2007), and view rankings as recommendations and even assertions of the claims contained in top-ranked results (Narayanan and De Cremer, 2022). Google, by contrast, views itself as an index “to organize the world’s information and make it universally accessible and useful” and attributes significant roles to searchers in driving query selection and the parsing of results.
As Mulligan and Griffin (2018) point out in their study of the algorithmic breakdown around the search results for “did the holocaust happen,” unpacking conflicting visions of web search engines helps reveal why Google, while viewing the search results as a problem, did not consider them evidence of a problem with the logic of search (the Holocaust-denying top result was “relevant”). The problem was identified as one of query formation and limited responsive content. However, for decades, others have been pointing out foundational issues with commercial search engines. Introna and Nissenbaum (2000) critique the assumption that a single set of commercial logics should uniformly dictate the policies and practices of web search and argue that doing so would lead to “systematic inclusions and exclusions.” Hoffmann’s (2016) study of Google Books explores how the feature, which includes a substantive search function, does not fulfill its promises toward social justice and information access, but rather exacerbates existing inequalities. Noble (2018) uses case studies on Google Search to describe “algorithmic oppression” as the structural ways that racism and sexism are fundamental to the operation of the web, rather than dismissing instances of discrimination as “bugs” or “glitches.”
Search as a site of strategic action
To understand search engine logics, it is important to unpack the actors in the ecosystem. This includes the search engine company and their employees (Van Couvering, 2007), contract search quality raters (Bilić, 2016; Meisner et al., 2022), web spammers (Brunton, 2013), those involved in governance capacities (Mager, 2018), and those suggesting or manipulating the queries searched (Tripodi, 2022; Golebiewski and boyd, 2018). Users, as searchers, also have an active role to play in the search ecosystem (Grimmelmann, 2013). Gillespie (2014, 2017) reveals how information providers that source search results (ads and organic) also take action in making themselves “algorithmically recognizable”—legible to search algorithms.
Those working in SEO play an especially important role in the search engine ecosystem. Cast as villains by some (Segal, 2011), SEOs make a living, and help others make livings and ways of living, amid an ever-shifting system with large cascading effects that often favor the already privileged (Introna and Nissenbaum, 2000), doing the work others are unwilling to take on (Ziewitz, 2019). Indeed, Lewandowski et al. (2021) conducted an analysis suggesting “a large fraction of results available to users is optimized through SEO measures.” Building on this literature, our study examines the organizational response of the GSL to complaints on Twitter, which provide further detail and sometimes a dynamic back and forth about Google’s search engine logics and what searchers—including those working in SEO—expect from search.
Methods
This study collects 6 months of tweets from the GSL and inductively codes the responses to searcher complaints on Twitter.
Data set construction
We collected public conversations from a 6-month period on Twitter that included responses to complaints from @dannysullivan, the Twitter account the GSL uses for responses. We selected Twitter as our research site because that is where the GSL has communicated and demonstrated they can be reached. We did not try to count, collect, or analyze all complaints or all complaints with responses from Google employees, only those complaints that received a response from the GSL.
To align with best practices for Twitter data set construction, we reviewed Fiesler and Proferes’ (2018) study of “perceptions of the use of tweets in research” and the Association of Internet Researchers’ ethical guidelines for Internet researchers (Franzke et al., 2020). The complaint tweets are quoted anonymously. The responses from the GSL are identified as such. The tweets under study are neither private, embarrassing, nor offensive. The complaint tweets are publicized complaints in pursuit of recourse or remedy, which this research is directed to further. The GSL tweets were made in their professional capacity in a venue they have identified elsewhere as a space for the searching public to interact with them. Neither the GSL’s nor any other Twitter user’s Twitter history is the subject of study. We did not seek to identify characteristics of participants beyond the readily visible information in the complaint conversations, profiles, and links they shared. No tweets were collected from protected accounts and no deleted tweets were retained. The published research will be shared with the accounts under analysis.
We collected 632 GSL replies (not original tweets or retweets) from an approximately 6-month period (1 June 2021–13 December 2021) using Twitter API v2 for academics. In addition to the responses to complaints of interest, this data set included replies unrelated to search and replies to tweets focused on broader search policy questions. The responses of interest for our research were only those responding to complaints about particularized experiences with a search engine results page (SERP).
Each author manually labeled 3 months of the GSL tweets as relevant or not to our task, arriving at 426 identified as relevant (we discussed tweets either author thought were possibly relevant). Those tweets appeared in reply to 167 unique tweets (identified by Twitter’s conversation_id). Finally, using the Twitter API, we collected all branching replies and replies to replies for the 167 unique conversations.
Limitations
The data set construction choices limit our analysis to only the GSL’s public responses (and for only approximately 10% of the role’s existence). We conducted no analysis of complaints about Google Search (tagging the GSL or not) to which the GSL did not respond. We also did not collect data for analysis on comments from the GSL outside of Twitter (whether in press interviews or Google forums), Google’s promotional material, Twitter responses from other representatives of Google (such as Google’s ads product liaison or those focused on webmaster relations at Google Search Central), or tweets from the SearchLiaison account that were not linked to by the GSL. Nearly all of the responses we examine pertain to complaints about English language searches. This places some bounds on the geographic scope of the study (though complaints do appear to come from places where English is not an official language) but binds it to the range of the liaising work performed by the GSL on Twitter over this 6-month period.
Analysis procedures
We followed a qualitative open-ended inductive coding process to analyze the tweets (Burrell et al., 2019), though our unit of analysis is the conversation, and particularly the interaction between the complaint and the GSL, not individual tweets. The authors started by coding the same few conversations to inductively develop an initial codebook that described the identity of the searcher, the components of the SERP mentioned, the type of harm described by the searcher, and the content and tone of the response by the GSL.
We looked at the text and images of the tweets in the conversations, the complaint and the GSL response tweets in context on the Twitter desktop web interface, Twitter profile descriptions, and links shared within conversations in our analysis. We randomly assigned the 167 conversations between the two co-authors. The authors met regularly throughout the coding process to discuss changes to the structure and facets of the codebook and emerging themes in the data.
In this coding, we started with a shared spreadsheet where for each conversation (our unit of analysis) we recorded the following: complaint summary; description of complainant (generally including occupation, country, and prominence of Twitter profile [verified by Twitter or not; small, medium, or large account—with no quantitative threshold]); critique level (high, medium, or low); screenshot present (true/false); whether the complainant identifies the feature critiqued (true/false); feature critiqued; search harms (as identified by the co-authors from the described experience with the search engine, not restricted to harms named in the GSL response); explanation of breakdown from the complainant; explanation of breakdown from the GSL; and response from the GSL.
For nearly half of the coded conversations, SEOs tweeted the complaint. We avoided in-depth coding of SEO complaints that on review concerned policy choices and broad core updates to Google’s search algorithms, unless they pertained to particularized experiences on SERPs. We did still review those complaints and the GSL responses to develop wider context. We did not code complaints or responses if the original complaint had been deleted (two conversations).
Findings and analysis
The GSL did not respond to every complaint in a conversation and did not always respond to the core or underlying complaint of the tweets they did respond to. The GSL responses visible to us were generally non-combative and often included a brief reference to policy, sharing of a search technique, explanations of varying depth, and/or a comment that the issue would be passed on to other employees at Google. The GSL sometimes apologizes and admits a goal was not met (these complaints often involve racist search results). We argue that the GSL functions through defending, repairing, and drawing boundaries around Google Search.
Defending search
One function of the GSL is to directly defend the way Google Search operates. These defenses follow a few common patterns. They often discount, but do not generally deny, the experience of the searcher. This is especially so when someone complains about something prominent in the results that appears wrong or incomplete. Despite Google’s “design shift” toward appearing to be an answer engine (Tripodi, 2022: 115–116), the GSL suggests that searchers investigate beyond the rich content (which is core to “answer engine” design): look at the whole page, look at the pages linked to and not just their snippets, and look at what Google generally does. A related defense pattern is to suggest that Google is showing results that reflect what is on the web. These various defenses generally rely on pointing to web search as a large-scale dynamic system. The GSL argues that lower-quality results on certain queries or in top results should not be judged as individual results but in various larger contexts. Four examples below demonstrate these patterns.
A reporter complained that the top of an SERP for the query ‘Jeffrey Donson’, the name of a convicted child rapist in South Africa recently elected mayor of a municipality, was used to display a “rich result” for an eponymous song publicized by his political party rather than information of public importance about the individual. The GSL appealed to the dynamic function of search and defended the search results page by stating that knowledge panels “are all automatically generated, and one might eventually come.” They also shifted focus away from the searcher’s concern about the prominent display of innocuous information. They shared a screenshot of the results and called the three blue links depicted in it “the results,” saying: “But the results show far more than just a knowledge panel.” The blue links led to resources discussing the man’s prior conviction, indicated as well in the snippets summarizing the links.
In response to complaints about the lack of prominent display highlighting a matter of civic importance (the fall 2021 statewide and municipal elections in New York), the GSL acknowledged Google had previously “done special features on voting info for broad national elections,” noting this was “in addition to the results themselves” (the GSL also noted they would pass along the complaint). To another complainant on the same topic, the GSL replied in part: “if you do search that way,” searching ‘new york elections’ [sic], “we do list a page with voting info.” While this complainant seemed hopeful, noting it seemed Google was on a path toward highlighting relevant voting information at the local level, the GSL shifted attention to “the results themselves” rather than to other parts of the SERP that Google at times uses to prominently highlight information.
Someone complained that the query ‘Top International Goal Scorer Soccer’ showed a featured snippet for a page only about international men’s football and highlighted the top goal scorer on that list, rather than the top international goal scorer: Christine Sinclair. The GSL replied, in part:
Not an excuse, but one thing that can help us is if the web itself reflect [sic] such things. Like if anyone makes a list of top scorers, include both men and women or don’t omit the male label if it’s all men.
(At the time of writing, Google’s featured snippet for that query still fails to identify the correct answer. The snippets for all but one result on the SERP list only men, with the remaining snippet showing an incorrect extraction from a page that has the correct answer.) These defense patterns naturalize the logic of Google Search by pointing to automatic or dynamic systems presented as seemingly necessary for the various rich features of Google Search today, and by asking searchers to bring a critical lens not to the search engine but to the results.
Performing repair
We describe part of the GSL’s work in terms of repair: (1) noting that they are “passing on” user complaints to engineering teams and (2) suggesting actions for users to carry out. We use the phrase “performing repair” to highlight our focus on the practices of repair, not the end result, and on the public performance of repair practices. The notes that the GSL is “passing on” a complaint and the suggestions of user actions are generally packaged with explanations that define the problem in a technical way. The GSL’s repair work is oriented toward stopping issues from continuing, not toward repairing or remediating harm from the search results.
The GSL’s responses to complaints often include a note that they would or did “pass it on.” This suggests the promise of Google undertaking a repair. The GSL signals that certain complaints are worthy of engineering attention and resources. The GSL does not make clear why some complaints are viewed as worth such attention while others are not; however, certain forms of evidence appear to be important for accessing this potential form of repair.
If complainants do not include a screenshot, the GSL often requests one before taking other actions. While the screenshot likely acts as a tool for the GSL to understand the additional context of the issue surfaced by the complainant, it also appears that the GSL needs to see the issue via the screenshot for it to be viewed as a legitimate issue to be passed on. It is rare, or comes with caveats, for the GSL to “pass on” complaints that they are unable to replicate. The threshold for a search failure to be viewed as worthy of repair is thus that it be experienced not only by the complainant but also by the GSL.
The GSL responded to a tweet expressing outrage over the conflation between the racial demographics of a city and public safety. The tweet had a screenshot of a Google Travel search “frequently asked questions” feature. The safety tab of the frequently asked question (FAQ) panel about Ft. Collins, Colorado included the suggested question: “What percentage of Fort Collins is Black?” The GSL responded,
Our sincere apologies for this. I’m off today but have passed it on. These are auto generated. I suspect it’s because there’s an article citing the city’s Black director of safety talking about the culture shock of arriving. Our systems likely saw it as an artcle [sic] about “safety.”
Here, the GSL performed repair by explaining how the large-scale dynamic system of search got it wrong, apologized, and noted they had “passed it on.”
However, it is not clear to searchers what is worthy of being “passed on” to the search team via the GSL (let alone the message itself that is “passed on” and what happens after). The BTS fan community, or BTS ARMY, is a social movement, formed around a Korean boy band, with a history of online collective efforts (Burrell et al., 2019; Kanozia and Ganghariya, 2021; Park et al., 2021). Members have had a range of engagements with the GSL. They have also organized collective reporting of issues through the Google feedback mechanisms on the SERP.
Referring to feedback sent through the SERP reporting mechanisms, a BTS fan account complained to the GSL about the incorrect image displaying for the band:
This issue has happened with Jimin as well as other BTS members in the past. We know you’ve mentioned before there’s no need to tag you as Google gets this feedback ordinarily, but we would appreciate if you can check as to why this keeps on happening.
The GSL responded, “It hasn’t happened for ages, I believe. Sending feedback [via the search result page] is the correct thing, although these things tend to correct automatically.” The GSL does not clarify why “these things” will resolve themselves while others need to be “passed on.”
The second type of GSL repair performance is to ask users to make themselves, or their subject of interest, more “algorithm ready” (Gillespie, 2014). In one example, a Black woman complained that her picture was blank for a rich panel about a Netflix show she hosted: “When bots are straight up racist First season my name wasn’t even showing . . . now my picture is missing randomly.” The GSL responded with an apology, said that they would be looking into it, and provided a brief explanation of the dynamic nature of their systems. The GSL then said: “one thing that might help us is if you want to claim your knowledge panel.” The knowledge panel is a dynamically generated box that appears on the right-hand side of the SERP, which can be edited in some ways once the subject of the knowledge panel successfully “gets verified” by Google and claims the panel.
Sometimes, suggestions for searchers to engage in repair come before or alongside a larger defense of Google’s failure to avoid reproducing societal biases. A law professor noticed that her male colleagues have the title professor in their Google knowledge panels while she has the title researcher. She asked: “Anyone knows how the Google knowledge panel algorithm decides which professors get a “professor” subtitle?” Engaging with others in her replies who pointed to potential causes or fixes, she noted that her complaint was not principally about her, but about what appeared to be a pattern of gender discrimination in knowledge panel subtitles visible in other searches she tested and “the potential underlying algorithmic bias.” The GSL responded that the knowledge panel subtitle is “automatic based on references we see across the web” and provided instructions for claiming a knowledge panel, which would then allow the claimant to “suggest an edit.” The law professor responded in part that she was “less concerned about my own subtitle than about the apparent gender skew in which the US law professors get subtitled “Professor” by the algorithm.” The GSL responded, saying they would pass this on and that “It also helps if the references across the web are also more equitable . . .” Two weeks later, the same law professor tweeted that Google had revised her knowledge panel. She shared an image with her subtitle updated to “University teacher.” (At the time of writing, her knowledge panel lists her simply as “Author.”) The GSL’s repair performance responses do not engage in substantive remediation; they suggest only potential technical fixes for some issues and, for others, work for the search users themselves to take on.
Drawing boundaries
The GSL reinforces Google’s definition of terms to dismiss complaints. To explain the performance of search, the GSL leverages narrow definitions of terms that are distinct from how they are used by those posting complaints. When someone complained about a press release appearing under “Top stories,” the GSL argued that “Google News shows news-related content, not just news stories themselves.” In another instance, an SEO complained about adult toy e-commerce websites not ranking well for branded searches, sharing a screenshot with boxes and labels identifying expected and unexpected behavior. In replies, the GSL pushed back that it was not a branded search because the brand for that website is one word rather than two (as in “CamelCase” rather than “Camel Case”) and that the website did rank well for its proper brand name. Despite these objections, the GSL acknowledged it was worth passing on. There was also a complaint from a group of bloggers about all the work on their website being misattributed to just one of them in the search result titles. The links to their work in the SERP appeared as “[Title of Work]—[Incorrect Author Name].” Again, while the GSL noted that they would forward the complaint, they explained how titles are generated dynamically and said that the title is not intended as an attribution of authorship.
One example regards the use of personalization. A prominent technology journalist made a comparison between Facebook and Google that suggested search results were personalized. The GSL replied, acknowledging the comparison as a joke and saying “It’s extremely rare that personalization has any impact. Two people in the same place, searching in the same language for the same thing will see largely the same results, even with continuous scroll.” The journalist mused in reply about the use of “largely” and suggested factors like location, language, and time might be a “pretty potent” example of personalization. The GSL responded, as they have many times about personalization, that personalization would mean “results unique to a person based on a person’s searches.” The GSL embedded the top tweet of a thread from Google’s official @SearchLiaison account, saying: “This explains it more”:
Over the years, a myth has developed that Google Search personalizes so much that for the same query, different people might get significantly different results from each other. This isn’t the case. Results can differ, but usually for non-personalized reasons. Let’s explore . . .
In the conversation above, the GSL sets out to address “a myth” that “has developed” about the extent of personalization in Google Search. This definition discounts searchers’ understandings of personalization; the origin of the supposed myth (including Google’s well-publicized efforts in the area) and its recent appearance are ignored; and no mention is made of how Google’s search interface might reinforce or sustain such myth-making. What is described as not existing is a narrow, “non-anonymized identity-based” personalization, distinct from the general use of the term by those making complaints on Twitter.
This disavowal of location as productive of personalization stands in contrast to searchers’ reasonable understanding and experience, and to Google’s stated position elsewhere. In its descriptions of its “personalized advertising” policy, Google includes “location targeting” as one of the “personalized advertising targeting features.” This framing of what personalization means also discounts other factors that people may see as personalization (the effect of recent searches and search predictions) and factors people may not always recognize as generative of a distinct search experience (an individual searcher’s particular choice of terms in search queries). In the conversation, the quote tweet from @SearchLiaison is presented as a definitive statement on the matter, without reference to supporting research.
Discussion
The GSL’s responses to complaints about Google on Twitter deepen our understanding of how Google articulates and manages expectations of the product and ideology of search. Gillespie (2014) argued that “sociological inquiry into algorithms should aspire to reveal the complex workings of this knowledge machine, both the process by which it chooses information for users and the social process by which it is made into a legitimate system.” We recognize, from Cotter’s (2021) analysis of “black box gaslighting,” that “platforms hold the upper hand” in the “epistemic contest over the legitimacy of critiques,” and that this shapes the tenor and confidence of complaint. Our inquiry documents how Google uses strategies of defense, repair, and boundary drawing to legitimize search and resist critique. We focus our discussion here on how the functions of the GSL affect the perceived functionality of Google’s search engine logics, reproduce power relations, capture complaints, and resist counter-imaginaries.
The defense, repair, and boundary-drawing work the GSL does deflects from larger questions of whether search “works.” Raji et al. (2022) urge us not to take the functionality of AI systems, that they work, as a given, but rather to closely examine the assumptions of both promoters and critics of AI systems. At a high level, the tweets we collected (user complaints about search) tell us that Google Search sometimes does not function as users expect. While the GSL may further traditional public relations objectives (e.g. deflecting criticism, values advocacy) (Bostdorff and Vibbert, 1994; Einwiller and Steilen, 2015), it also reveals how the company’s articulations of search shape our search imaginaries, and thus our expectations and reliance. Analyzing the GSL’s articulations, Google’s organizational response to complaints about search, furthers our understanding of the logics that underlie the search engine.
Google’s dominant market position and ubiquity have reinforced the myth that Google is infallible, with research demonstrating that people are more influenced by the ranking of search results than by other credibility cues (Haas and Unkel, 2017) and noting design changes that present Google Search as a destination with answers rather than an intermediary presenting useful resources (Tripodi, 2022). Using the strategies of defending, repairing, and boundary drawing, the GSL reframes complaints as unavoidable, and acceptable, consequences of operating at scale. The overall functionality of Google and its AI systems is rarely, if ever, implicated in GSL responses. Recall the misleading seizure snippet. As Raji et al. (2022) explain, the failure of Google’s language models to correctly parse advice about what to do when someone has a seizure is a functionality problem, more specifically a post-deployment robustness issue. The GSL’s public responses to that viral tweet were only “Thanks, we’re working on it.” and “We’re looking at this now to get it resolved.” The GSL did not engage with the many in the replies who pointed to the misleading snippet as an indictment of Google’s entire approach. That the GSL avoids reframing complaints as evidence of failures of Google’s AI systems, or of search engine logics, to function properly clarifies how the role works to define and shape how search should and does function.
Burrell and Fourcade’s (2021) work on the “coding elite” helps us unpack the cultural, political, and economic capital of those involved in the technology industry ecosystem (e.g. computer scientists, tech CEOs, venture capitalists). They explain that the power of the coding elite resides in their control of technique (the ability to specify and automate the rules embedded in code). Part of their control comes from “the allure of prophecy, spectacle and promise.” In the case of Google, while the mythmaking and imaginaries created by the coding elite are diffused into public consciousness, control of the technique remains with the coding elite.
Our results provide detailed examples of how the “coding elite” use their networks and knowledge of how to communicate technological breakdowns in a way that might lead to their willing resolution by a technology company. While we do not find that the GSL is any more likely to agree with or “pass on” the concerns of other members of the coding elite, we find that there is an ease of communication between the GSL and other coding elites.
Even others who would traditionally be thought of as “elites,” like law professors and journalists who do not specialize in technology issues, struggled more to understand the logics and techniques of search as articulated by the GSL. The ability of SEO experts and web developers to more easily provide feedback to the GSL (via shared vocabulary and practices) exemplifies Burrell and Fourcade’s (2021) discussion of how the “cultural circuit” of the coding elite furthers “myth-making and consolidate[s] power into the hands of those able to implement and understand code and the institutions and individuals who fund them,” and is also a product of a shared “professional vision” (Goodwin, 1994).
Our results also reveal issues that differentially affect those not among the “coding elite.” Burrell and Fourcade (2021) argue that “the core divide in digital capitalism opposes what we call the coding elite, who hold and control the data and software, and the cybertariat, who must produce, refine, and work the data that feed or train the algorithms.” Others have written about the work of producing content to be found by the search engine and the behavioral trace data from users searching (Introna and Nissenbaum, 2000; Zuboff, 2015). We view the overall process of providing feedback to Google via Twitter (as in our data set) or via feedback links directly on the SERP as ways for the cybertariat, the non-coding elites (Burrell and Fourcade, 2021), to further refine and repair the system of search. Our research shows how the GSL polices the line between the coding elite and the cybertariat by assigning responsibility for repair to those who produce content or other data. As discussed in the “Performing repair” subsection, when users complain about inaccuracies in knowledge panels, the GSL commonly instructs users to improve the data themselves (e.g. claim and edit a knowledge panel).
We see the responses from the GSL at times “domesticating” voice (Hirschman, 1970). Rather than the GSL role being an example showcasing the benefits of “contestable design” (Mulligan and Bamberger, 2019; Sloane, 2021), those contesting or complaining are dissidents sometimes successfully transformed into “workers who can be folded into” Google’s system (Nissenbaum, 2004). We find seemingly satisfied desires for status and belonging in many examples where complainants are apparently mollified by the appearance of being heard, by being told their complaint would be passed on, with their complaints framed by the GSL as helping improve the system of search (“It really helps us in debugging and improving,” “helps us look into how to improve,” “Really helps us in thinking of how to improve.”). In this vein, the GSL wields experience and authority that pushes searchers to “rely less on their own experiences and insights” and to “question their own judgment” of what web search is or should be (Cotter, 2021).
Mager and Katzenbach (2021) summarize research showing how the “dominant future imaginaries” of technology companies “emerge and spread, how they compete with alternative visions, and what mechanisms prevent counter-imaginaries from proliferating.” They close by saying: “How to intervene in these dynamics and contribute to more open, democratic, and sustainable digital futures will be a key question to be addressed in future research and political action.” We see the GSL’s function as in part to “prevent counter-imaginaries from proliferating” (Mager and Katzenbach 2021). Identifying and analyzing such functions is one intervention that acts to “undercut the facticity of the political intentions embodied” in the functions of the GSL (Pfaffenberger, 1992).
Conclusion
Google Search is a complex assemblage of actors and practices, which extends to the organizational articulation of search. Search is not only crawling, indexing, the design of the search box with autosuggestions, and the search results page with links, ads, and rich content, but also the articulation itself. The GSL was introduced at a particular time to address search complaints, to “compete in the public dialogue with other articulations” (Gillespie, 2014). The GSL articulates Google Search in particular ways. The GSL “leverage[s] their epistemic authority to prompt users to question what they know about [the Google search] algorithms, and thus destabilize the very possibility of credible criticisms” (Cotter, 2021).
The limits of search are not well defined, broadly understood, or consistent. Maintenance of the belief that search works requires active articulation in the form of boundary drawing, performing repair, and defending how search works.
Acknowledgements
The authors thank the many who reviewed and provided comments, including Deirdre Mulligan, Anne Jonas, Elizabeth Resor, Richmond Wong, and participants in both the Data & Society The Social Life of Algorithmic Harms Workshop and the UC Berkeley School of Information Doctoral Research and Theory Workshop.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
