Abstract
With Google marking its 20th year online, the piece provides a retrospective of cultural commentary and select works of Google art that have transformed the search engine into an object of critical interest. Taken up are artistic and cultural responses to Google, not only by independent artists but also by cultural critics and technology writers, including the development of such evocative notions as the deep web, flickering man and filter bubble. Among the critiques that have taken shape in the works discussed here are the objects and subjects brought into being by Google (such as ‘spammy neighbourhoods’), Googlization, Google’s information politics, its licensing (or what one is agreeing to when searching) as well as issues surrounding specific products such as Google Street View, as Google leaves the web, capturing more spaces to search.
Introduction
What follows is search engine critique, concerning Google in particular, viewed through the prism of Google art and cultural commentary. The point of departure is that various cultural products and concepts aestheticize Google critique. 1 The review is by no means exhaustive and has a predecessor (from Google’s 10th anniversary) in the video work, ‘Google and the Politics of Tabs’, a compilation of Google’s front-page changes over a 10-year period in the style of time-lapse photography (Rogers R and Govcom.org, 2008). The movie follows the tabs above the search bar, especially the appearance of the (human-edited) directory in 2001, followed by its steady relegation from 2004 onwards, when it was placed under the ‘more’ and ‘even more’ buttons before disappearing entirely. In all it tells the story of the demise of the directory (by 2008) and the rise of the algorithm and the back-end, taking over from the librarians and the human editors of the web.
When discussing Google art and culture, there are a few projects that go by the name but do not belong here. If one were to type ‘Google art’ into Google, the first results are likely Google’s very own project, referred to as its Cultural Institute. Since 2011 Google has been entering into museum partnership agreements and digitising at very high resolution some of the museums’ premier artworks; it has also created a series of virtual tours through art institutions, with information layers atop. Users can be their own curators of the world’s art and create their own DIY galleries. It is a case study worthy of attention for those interested in the online curation of art as well as the business of digitization. But it also could be the jumping-off point for a Google critique, with Dowd (2015) calling it a ‘piracy platform’ that ingests and giga-pixelates artworks and googlizes museums. The Louvre and the Prado, for their parts, chose not to participate.
One other reference point for ‘Google art’ could be Google’s Doodles, the changing Google logo art that appears on the interface of the search engine. Indeed, if you search for ‘Google art’ in Google Images (rather than in Google Web search), the top results are likely to be these Doodles. They have been around since Google’s inaugural year in 1998, when the founders made the first one of the Burning Man festival, and have evolved from being static and sporadic cartoons to elaborate and routine animations and minuscule interactive games. There is a coterie of Google Doodlers on staff. In recent years, Google’s Doodles have become an object of study and papers have emerged about them, discussing Google’s ‘fluid brand identity’ as well as a gender and racial bias indicated by those chosen to appear on the front page interface (Elali et al., 2012; Montaño and Slobe, 2014). Google’s Doodles largely fall into two broad categories: great achievements of humankind (and their achievers) on the one hand, and national holidays on the other, such as ones that have appeared on the Polish national day as well as on the Mexican Day of the Dead. These seemingly innocuous Doodle types actually represent two significant sides of Google’s preferred image: the global and the local (or glocal). That is, they befit the two kinds of messages that Google would like to communicate about itself: Google web search as belonging in the lineage of the great creations, and Google as a series of national machines (hewing to national legal jurisdictions) rather than merely a single, universalising one that Americanizes or globalizes online cultures and search markets. There was once a trivia question concerning the few national online spaces Google search does not dominate, owing to still vibrant national engines or legacy partnerships: China (Baidu), Russia (Yandex), South Korea (Naver), Japan (Yahoo!) and the Czech Republic (Seznam). (Taiwan (Yahoo!) was often in the mix, too.)
Nowadays that list is shrinking with perhaps only China and Russia having clearly dominant national engines, if one aggregates desktop and mobile search.
Piotr Parda (2006), the Polish artist, was one of the earliest to make a series of artworks using the form of Google’s Doodles with his ‘On Occasion’ project, which comprises a series of logo alterations commenting on what Google does not address. Parda’s alterations portray what doodles might look like if there were ones for HIV/AIDS, the crisis in Darfur, Sudan, or the Asian tsunami and its victims (see Figure 1). Google’s demureness towards doodling the issue of HIV/AIDS on its international awareness day (1 December) has drawn attention almost every year since 2010 (Anderson, 2012; Baughman, 2010; Fratti, 2014). Doodles have been made for other days on the international issue calendar such as Universal Children’s Day; the occasional ribbon will appear under the search bar on other meaningful days, such as International Women’s Day. Thus, there is an issue day hierarchy – those with doodles, those with ribbons and those without acknowledgement. Whilst not a doodle per se, one major exception to the apolitical Google web search interface occurred in 2012 when the company, like a number of particularly US-based tech firms, protested the Stop Online Piracy Act or SOPA, the US legislative proposal, by blacking out its logo, in the style of redaction, thereby joining other organizations including Wikipedia that ‘went black’ entirely for a day.
AIDS Google Doodle. Source: Parda (2006).
When discussing how Google critique assumes cultural forms, I would like to venture further than its globalizing, depoliticizing interface, and touch upon quite specific treatments. There are four different types I identify. The first is what I call Google objects and subjects, which are things and people that Google brings into being, such as the deep web, flickering man, attention deficit and filter bubble. A further Google embodiment is the data body, one of the terms that refers to the collection of data on you that in itself ‘acts’. The second category of critique is Googlization; it is a notion coined by tech journalists but taken up in particular by library scientists in the late 2000s, during which time well-known books were written, such as The Googlization of Everything (Vaidhyanathan, 2011).
Google Information Politics is the third type of critique, referring to a series of epistemological crises concerning censorship and results ordering, in which Google has become embroiled over the past couple of decades. I would like to highlight the case of Google China in particular, when Google was caught furnishing state-filtered results, in an interface, created by the Citizen Lab at the University of Toronto, that placed google.com’s and google.cn’s respective engine outputs side by side. The other kind of information politics concerns more specifically how Google orders and ranks websites in its search results. Certain websites are privileged by Google and others are not, and work targets the issue of whether all websites receive equal treatment by Google. A handmade gif, ‘Wikipedia is the new Google’, captures the seemingly hard-coded appearance of Wikipedia at the top of substantive engine queries. In this context, the notion of ‘spammy neighborhoods’, a characterization offered by Matt Cutts (2006), the long-time in-house blogger, is also central in the privileging question, for it introduces parts of the web populated by content repeaters, illegitimate aggregators, pirates and other engine fodder makers that Google’s frequent algorithmic updates address and, in fact, suppress. In a sense, ‘personalization’ or what Eli Pariser (2011) famously referred to as the ‘filter bubble’ extricates the engine from the debate surrounding universalizing engine results, in that they are now co-authored by engine and user. No longer are Google results solely the product and purveyor of the Matthew Effect (the rich becoming richer), however much the engine, even with personalization, still regularly puts that effect on display; with the user also authoring the results, the blame no longer lies with the engine alone.
A fourth critique of Google, targeting the topic of licensing, concerns what one agrees to when typing something into Google’s search bar and hitting return, clicking on search or the ‘I’m Feeling Lucky’ button, Google’s intriguing, lesser used option that bypasses Google’s ad-serving pages and main revenue source. Whichever form of activation is used, one enters into a contract with Google, and as such that deal has been worthy of exploration, including the futility of ‘agreeing’ to the contract, as well as the derivative works one could make of Google results, if they were not forbidden and if the contract one has entered into were broken.
Most recently, certain Google products have been the object of scrutiny, especially Google Maps and Street View, whose camera cars photograph houses, sidewalks and streets, and stitch them together. Having lost the ‘social space’ to Facebook, Google’s quest to dominate the locative space (maps) may be regarded as search in need of space, or an attempt to create or at least enclose new spaces for its search technology.
Google objects and subjects
The first example of an object that Google (and its early competitors such as Northern Light, Excite, AltaVista and Lycos) brought into being is the deep web. The term was coined around 2001, having emerged from studies in the 1990s of the heretofore ‘invisible web’, where researchers found that search engines index only a relatively small portion of the entire web (Lawrence and Giles, 1998, 1999). At the time, engine coverage, whilst varying by technique, was at most 16%, meaning that there is this other web ‘out there’, beyond reach and ken. It turned out to be vast, and far greater in size than the surface web (Bergman, 2001). The deep web is often depicted as the submerged mass of an iceberg, with the tip being the small ‘surface web’ that one can access through search engines skimming the top layer. Later, the unindexed web darkened and also became a kind of temporary autonomous zone. Initially one could still Google parts of it, locating BitTorrents and other ripped and remixed content, often under copyright, that had been uploaded for others to download. With the rise of illicit or dirty downloading – before the cloud would clean that up – came defences of remix culture and off-shore server farms, such as the Principality of Sealand, a disused British sea fortress in international waters, which drew interest from the Pirate Bay, the Swedish file-sharing site and global social movement, and was the source of Metahaven’s Sealand Identity Project and other critical identity work on data havens and alternative clouds (see Figure 2). Gradually, as national jurisdictions took on the pirates, bringing copyright infringement lawsuits, a still darker web emerged, which Google perhaps chooses not to index; apart from references to experiments in 2008, the official Google blog rarely mentions the deep, and never the dark, web.
The online underworld (notably the erstwhile Silk Road run by Dread Pirate Roberts) is made available not through Google but through the Tor browser, where one reaches so-called onion land.
Rendition of uploading a file to the Icelandic cloud. Source: Metahaven (2013).
A related object brought into being by Google is the orphan website. This is a sad site that, through its lack of inlinks, does not become indexed by Google and thereupon does not receive attention. It resides in not so much a deep as a kind of pitiful web, which does not garner (engine) traffic, has no comments on its blogs, no ratings on its sites and is never liked, even if the sitemaster has taken the trouble to implement social buttons (Lovink, 2008). In this web, sites are not returned in Google’s search results and, hence, they are also neglected. Artworks have commented on how Google buries and disregards such orphan websites. One is called Shmoogle, created by Tsila Hassine, later joined by the group De Geuzen. When one types a query into Shmoogle and hits return, it randomizes Google’s results (see Figure 3). Besides ‘democratising’ sources, Shmoogle also seeks to intervene in the hierarchy of source credibility suggested by rankings. As the artist writes: [L]et’s take “art” for example. Google’s first page consists of the Metropolitan museum, the National Gallery, MoMA, and some art portals on the web (not much of a surprise). On Shmoogle, a (possible) first page features sites entitled “we make money not art”, “Olga’s gallery”, and “Art Passions”, among others - did you know these sites exist? (Hassine, 2005)
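Shmoogle’s central gesture – discarding the ranking by returning engine results in random order – can be sketched in a few lines of Python (an illustrative reconstruction; the function name and sample results are stand-ins, not Hassine’s actual code, and the scraping of Google itself is not reproduced):

```python
import random

def shmoogle(ranked_results, seed=None):
    """Return engine results in random order, discarding the ranking.

    `ranked_results` stands in for a list of (title, url) pairs as an
    engine would return them, ordered by rank.
    """
    shuffled = list(ranked_results)  # copy, so the original order is preserved
    random.Random(seed).shuffle(shuffled)
    return shuffled

# Hypothetical top results for the query "art", per Hassine's example
results = [
    ("Metropolitan Museum", "metmuseum.org"),
    ("National Gallery", "nationalgallery.org.uk"),
    ("MoMA", "moma.org"),
    ("we make money not art", "we-make-money-not-art.com"),
    ("Olga's gallery", "abcgallery.com"),
]
print(shmoogle(results))
```

Each run surfaces a different ‘first page’, which is precisely the democratising intervention the work stages.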
Shmoogle. Source: Hassine (2005).
The fewer the results pages and results perused, the greater the value of the front-page real estate, or the top of Google’s search engine results page (SERP), as such pages are known in the Search Engine Optimization (SEO) literature. Heat maps of user eye movements across these results produced the object called the ‘Golden Triangle’, the area at the top left where most eyes gravitate the longest (in left-to-right reading cultures) (Mediative, 2011). In a more recent study the same digital agency found that people are now looking further down the SERP. Instead of the ‘Golden Triangle’, the heat map output looks more like a scrolling finger, somewhat similar to the oft-remarked F-shape of the user gaze. The noticeable change in behaviour may be the result of smartphone user scrolling together with a variation on ‘banner ad blindness’ and dissatisfaction with the results; one glazes over the Google properties at the top of the returns, such as the Google News or Google Images sets, until setting one’s eyes on the top organic results, as they are called by industry.
Having appeared under a few names, the third Google subject is the data double, software self or data body. The data double (put forth by Mark Poster (1990)) and, later, the data body (by the Critical Art Ensemble (1998)) both describe the aggregated data points collected about an individual, kept by governments (turning one into a number, in the 1960s-style critique) or by corporations (making one a niche market to be behaviourally targeted). The various data points collected – such as flight itinerary, credit card type, special meal and nationality of passport, via the US Advanced Passenger Information System – arrive with the authorities before the air traveller does, resulting in an advance profile or ‘data derivative’ that is acted upon by the security team, for instance in an additional screening (Amoore, 2011). The data body as referred to in the case of Google (or search engines more generally) is considered to be a new one, brought into being on the basis of one’s search history. An example is the case of the America Online (AOL) search engine, which released user search histories to scientists in 2006. The engine company released six months of search queries for hundreds of thousands of users. Each individual’s search history, or data body, was anonymized in the sense that each was given a number. One is AOL user 311045, who apparently owns a Scion car, is interested in the US Open, but also has queries such as [how to get revenge on an ex], [how to get revenge on an ex girlfriend] and [how to get revenge on a friend who f—ed you over]. In his search history, 311045 then reverts to the less animated [replacement bumper for scion xb] (McCullagh, 2006b). In this particular sense, John Battelle (2003a) has remarked that the search engine houses a ‘database of intentions’, one that saves one’s aims and plans prior to acting. Rather than a new software self such as the life blog or the quantified self, it was thought of as a private search self.
At the least, no one would expect one’s search history to be made public, given the usual context of searching (others peeking over the shoulder, at most). Indeed, AOL search engine users who were de-anonymized in the research data release said the same, including an elderly woman from Georgia, USA (4417749), who was located and interviewed by the New York Times (Barbaro and Zeller, 2006).
There are at least two modes of Google use: logged in or not. The data body that Google has formed has more agency when one is logged in, for there are more signals to work with. Even when not logged in, however, results are personalized (or pushed) and data extracted (or pulled) because of the cookies that Google sets, and information it gleans (one’s location from the IP address, for example). Scroogle, in operation for about nine years before it was forced to discontinue in 2012, owing to changes Google made to its advanced query settings, sat on top of Google and ‘crumbled its cookies’. With the Dickensian Google logo (itself a kind of Doodle commentary), it invited one’s queries, and outputted Google results, without the user being tracked or without any data being collected. It would serve no ads. There would be no Google properties in the results, such as YouTube videos, Google Images, Google News. It pinched pennies in the sense that no revenue was generated by Google when queries were made through Scroogle.
Another reaction to Google’s collection of user data is the ‘artware’ Firefox add-on, TrackMeNot. The name is a play on Do Not Track, the radio button in the browser’s privacy panel that merely ‘asks’ websites not to collect data, in a voluntary industry gesture. TrackMeNot practices the art of obfuscation: when installed, the extension periodically sends random queries to search engines, not allowing a ‘sensible’ search history (and data body) to be built. After Scroogle ended, questions arose, including in industry publications, about how else one might search without being tracked.
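TrackMeNot’s obfuscation strategy – hiding genuine queries among machine-generated decoys – can be sketched as follows (a minimal illustration; the word list, query sizes and interleaving scheme are assumptions, as the actual extension draws its phrases from sources such as RSS feeds and mimics user timing):

```python
import random

# A stand-in vocabulary; TrackMeNot harvests more plausible phrases
SEED_TERMS = ["weather", "recipe", "football", "lyrics", "flights",
              "headache", "museum", "guitar", "tax", "holiday"]

def decoy_query(rng=random):
    """Compose a random 1-3 word query to pollute the search history."""
    n = rng.randint(1, 3)
    return " ".join(rng.sample(SEED_TERMS, n))

def obfuscated_stream(real_queries, decoys_per_query=3, rng=random):
    """Interleave each real query with randomly placed decoys, so that
    no 'sensible' profile can be read off the resulting history."""
    history = []
    for q in real_queries:
        batch = [q] + [decoy_query(rng) for _ in range(decoys_per_query)]
        rng.shuffle(batch)  # the real query hides among the decoys
        history.extend(batch)
    return history
```

Applied to AOL user 311045’s queries, say, the revenge searches would have been buried amid chatter about recipes and flights, frustrating the construction of a data body.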
Relatedly, privacy-enhancing technology (as it is often termed) should also allow the ‘right to oblivion’, or the capacity for forgetting, a concern that ultimately became Google regulation in the European Union. Under the EU privacy directive, individuals may make requests for ‘delinkings’, that is, the removal of links from search engine returns that are personally damaging. The content remains online; it is only delisted from the SERP.
One final object (mentioned here) that Google has brought into being is the ‘filter bubble’, a term for the confined cognitive space one finds oneself in after Google ‘filters’ results based on one’s data body. It was coined after Google ‘flipped the switch’ in December 2009 from universal results for all to personal results for each. Eli Pariser, a figure behind significant new media mobilization ventures – MoveOn.org (organizing people, from 1998) and Upworthy (virally circulating content, from 2012) – speaks (in his TED talk) of two friends who query ‘Egypt’, where one is presented with results about the Egyptian Revolution of 2011 and the other about holiday-making in the land of the pyramids. Incidentally, these results are both from Google.com, rather than from Google.com and Google.com.eg, respectively. The larger point Pariser makes with the filter bubble argument is that we do not know whether there is a difference, given Google’s ‘invisible algorithmic editing of the web’ (Pariser, 2011). Users rarely compare their results with previous ones or with each other’s.
Googlization
Googlization is critique of another nature than inclusion and exclusion or personal data collection and personalization, for it casts a much wider net, concerning Google’s impact across societally significant institutions. Coined in 2003 by John Battelle, who referred to it as the ‘creeping dominance of Google over nearly all forms of commerce on the web’, Googlization spells the end of the innocence of the Internet, and introduces a mass media critique of new media (Battelle, 2003b). When Wikipedia first asked its users in 2010 to donate, it promoted itself as one of the top five websites in the world, with servers that need to be maintained and so forth. In contrast, ‘Google might have a million servers’, said Jimmy Wales, Wikipedia’s founder (Wikimedia, 2011). When a search engine has a million servers, geographically distributed, one is no longer in a start-up environment. Given this scale of infrastructure, the question is, should Google be reframed as mass media?
If mass media are constituted by barriers to entry contiguous with large-scale production and distribution, as well as the striving to reach the largest possible audience, Google fits the description. New media were often distinguished from mass media by their ‘interactivity’, but one cannot ‘talk back’ to Google. There is no comment space below the search results, for example (as ridiculous as that may sound). Power is just as asymmetrical as when there is a strict separation between producers and distributors on the one hand, and receivers on the other, as with television. Another mass media critique is that relations between senders and receivers are commodified, impersonal and anonymous. Google has sought to change the advertising model, from broadcast advertising (say, billboards) to what is called direct or personalized advertising (Turow, 2006). This is advertising increasingly based on a personal profile of attributes and desires. Whilst the growing relationship we have with our search engine may be described as commodified, it is certainly not anonymous, whether one is logged in or not. Finally, the tendency to standardize content (downwards) does not appear to apply to Google, given personalization, however much the actual amount of personalized content in engine returns seems empirically low (Feuz et al., 2011).
In its early form, PageRank performed a kind of citation analysis, where websites rose in the rankings owing to inlinks from influential websites (Rieder, 2012). Some 500 or more so-called signals later (sometimes divided into ‘content factors’, ‘user signals’ and ‘technical factors’ by the SEO industry), Google Web Search relies more heavily on user clicks than on influential inlinks (web citations) (Smart Metrics, 2016). In other words, Google returns pages that have been ‘voted up’ by users, making it into a ‘popular content’ machine rather than one based on web citations. Google, in outputting popularizing web search results, appeals to the masses.
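The early PageRank logic described above – a page’s standing as the damped sum of the standings of its in-linking pages, each divided by its out-degree – can be illustrated with a short power-iteration sketch (a didactic simplification, not Google’s implementation; the toy web below is invented):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Compute PageRank for a toy web.

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of page -> rank, summing to 1.
    """
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: share its rank with all
                for p in pages:
                    new[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

# A toy web: 'hub' is linked to by everyone, so it ranks highest
web = {"a": ["hub"], "b": ["hub"], "c": ["hub", "a"], "hub": ["a"]}
ranks = pagerank(web)
```

The point of the sketch is the ‘citation’ logic itself: inlinks from already well-linked pages carry more weight, producing the Matthew Effect discussed earlier.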
Googlization could be said to have spread across front-ends and back-ends. Front-end Googlization would be the desire to implement (or emulate) Google aesthetics, including the single input field, fast loading time, instant returns, anticipatory results, geo-detection, no settings or filters, hidden affordances (such as quotation marks for exact matches), and so forth. At the interface level, Google (especially when Doodle-free) is remarkably clean. It has decluttered itself over the years, shedding first the tabs in place since 2001 and then the drop-down menu, upper left, that replaced the tabs in 2007 (as per the movie ‘Google and the Politics of Tabs’, mentioned above). There is just a single search bar, with two buttons, including I’m Feeling Lucky, a vestige said to be a cultural reference (to Clint Eastwood’s Dirty Harry).
The back-end of Google is complex in other ways, as the phrase ‘multi-sided market’, used to describe a platform’s business model, would suggest. Google ‘coordinates’ multiple parties finding and doing business with each other, whilst being rewarded for their interactions. Back-end Googlization would be the uptake of such a market or ‘platform logic’ across the web (and app space), as practiced by Facebook, Uber, Airbnb and others (Schwarz, 2017). How Google makes its money was described in 2002, in some of the earliest ad word art, as ‘semantic capitalism’ (Bruno, 2002). Google sells words. ‘Free’, it turned out, was the most expensive word of all. Ads must be ads, related to the website to which the user is sent, rather than poetry; after the artist created short ditties and embedded them in ads, his account was disapproved. More recently, Pip Thornton (2017) demonstrated Google’s ‘monetisation of language’ by valuing entire books (like Orwell’s 1984) through pricing their words as AdWords. (1984 came in at £58,318.14.)
One of the most well-known works to explore Google’s back-end is ‘Google Will Eat Itself’ by Ubermorgen et al. (2005) (Dewey, 2014). The work relied on bots visiting a network of so-called hidden websites and clicking on banner ads (click fraud), which eventually prompted the company to revoke the account; the revenue generated from the ads was spent buying company shares in Google. Google Will Eat Itself is one in a trilogy of (GAFA-related) projects that pull back the curtains on the back-ends (and business models) of the erstwhile new media. Amazon Noir (Ubermorgen et al., 2006) glued together the previewed pages of a number of books sold on Amazon (hacking the ‘search inside’ feature), making them available in noir or black market versions. The work describes Amazon’s history (and business model) as ‘hyper-contextualising’ every book with categories, tags, user reviews, ratings, author portraits, further recommendations, etc., until it finally introduced the sneak preview of the original text itself, whereupon book marketing became a tantalizing ‘cultural peep show’. The other is Face to Facebook (Ubermorgen et al., 2011), which scraped a million profile photos from Facebook, placed them on lovely-faces.com, and used image recognition software to sort them into categories like ‘easy going women’ and ‘climber men’. The artwork explores Facebook’s appeal as the encouragement of ‘comfortable voyeurism’.
Google information politics
A third cluster of critique generally made of Google is that of information politics. For example, in a project by the Citizen Lab of Toronto, queries for ‘Tiananmen’ were made in two versions of Google Image Search: google.com and google.cn (Google China at the time). The two sets of results appeared to be very different, with google.com outputting pictures of the uprising in Tiananmen Square in 1989, including the iconic image of ‘Tank Man’ standing in front of a column of armoured vehicles. The Google China version excluded the protest images, and instead replaced them, if you will, with the Tiananmen Square that is for tourists. (As mentioned above, that particular type of results discrepancy depending on the user (conflict versus tourism) was also used by Eli Pariser in his filter bubble story for the query, ‘Egypt’.) Google China arguably cleansed the historical record, neatly redacting or ‘touching up’ the Tiananmen photos, all the while following Chinese state censorship guidelines. Here information politics refers to the removal of unpalatable information for ideological, political or other purposes (such as state-run business). The company was accused of being a ‘functionary’ of the Chinese government (and a ‘sickening collaborator’) by US congressmen in the legendary congressional human rights hearings in 2006 that also witnessed testimony by Yahoo!, Microsoft and Cisco (McCullagh, 2006a).
How to repopulate the Chinese Google results with unfiltered content? The artist Linda Hilfling discovered that misspellings such as ‘tianamen’, when queried in Google China, would return the Tank Man and other images from the 1989 events. That revelation led to the Misspelling Generator, which outputs slightly misspelled words related to the search term that likely would not be censored and could still lead to the otherwise forbidden content (Hilfling, 2007). The tool is customisable: misspellings can be typographically or phonetically generated, with additional options to repeat or swap letters. Publishing misspelled words or coded language to dupe the censors is well known in China (and elsewhere), where in that context the meme Grass Mud Horse is often mentioned.
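The typographic mode of misspelling generation the tool offers – repeating or swapping letters – can be illustrated as follows (a hypothetical reconstruction for exposition, not Hilfling’s actual code):

```python
def swap_variants(word):
    """All variants made by swapping one pair of adjacent letters."""
    return [word[:i] + word[i + 1] + word[i] + word[i + 2:]
            for i in range(len(word) - 1)
            if word[i] != word[i + 1]]  # swapping identical letters changes nothing

def repeat_variants(word):
    """All variants made by doubling a single letter."""
    return [word[:i + 1] + word[i] + word[i + 1:] for i in range(len(word))]

def misspellings(word):
    """Typographically generated misspellings, duplicates removed."""
    variants = swap_variants(word) + repeat_variants(word)
    return sorted(set(v for v in variants if v != word))

# e.g. misspellings("tiananmen") includes 'tianamnen' and 'tiananmmen'
```

Queried against a censored index, variants like these may slip past a keyword blocklist while still matching pages where others have published the same misspellings.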
The second form of information politics is subtler and refers to how Google treats individual websites and whether it treats them equally. One would assume that if one were following a ‘pure’ PageRank algorithm on the web, all links would count the same; that is, the more links a website receives from websites that themselves have a large quantity of links to them, the higher that website would rise in the rankings. Over the years there arose ‘link fodder’ or ‘link spam’, which refers to websites created for the purpose of furnishing quantities of links to particular sites, thereby boosting them in the eyes of the algorithm. As a result, Google ceased equal link treatment, in at least two senses. First, the ‘No Follow’ tag was introduced to the comment space, as a directive to crawlers. Google pushed websites to implement the No Follow attribute in the comment space, and ceased indexing the comments and links that appeared there. Having earlier co-produced the ‘deep web’, Google thereupon relegated the comment space to what is referred to as the ‘bottom of the web’ (Reagle, 2015). Second, Google began to identify what it called ‘spammy neighbourhoods’. These are the ‘bad’ areas of the web, which one might not want to visit, because of undesirable websites and their special activities. These are parts of the web with so-called doorway pages and other black hat SEO practices in evidence. Through some of the major algorithmic updates (such as ‘Big Daddy’ and later ‘Panda’), Google no longer gave much weight to links that came from spammy neighbourhoods. Panda was described as ‘improving the rankings for a large number of quality websites’, when in fact it devalued web property. ‘Spam’, of course, only seems to be a clear-cut category. ‘Franchise’ websites (such as 9/11 conspiracy websites with many local branches) would be affected, for they often repeat content on every subsite, as did Indymedia, the alternative journalism space.
Business models based on engine queries also may have been affected. Demand Media, for example, ran a kind of digital sweatshop, paying people to make videos, cheaply, for popular search queries. ‘How to pack for a trip to Rome’ is such a query, and in the video a woman lays out clothes, and discusses the weather and fitting clothing choices into luggage sizes. Such rather web-native content, too, appears to have been affected.
Top RFID results from Google Images, categorized as ‘wet’ or ‘dry’. October 2007. Source: Digital Methods Initiative (2007). RFID: Radio-frequency identification.
Licensing and breaking the terms of service
There is a series of online software licenses, which one may or may not be aware of. The first one is called shrink wrap, a practice some consumer electronics still use. If one were to buy a CD or DVD, it would be wrapped in plastic or shrink wrap with a kind of holographic seal on it. The moment one breaks the plastic and seal, one agrees to a series of stipulations. A second tech license is called click wrap, and it refers to the ‘Agree’ buttons one checks or clicks online. Finally, a third one is called browse wrap, whereby one agrees to certain terms simply through the act of browsing. One does not explicitly agree, for that would be cumbersome. These licenses have been the source of a series of artworks, one of which is the ‘Whatever button’, a Firefox add-on that replaced ‘Agree’ with ‘Whatever’ (Stevenson, 2007). In a sense, it expresses the user behaviour of never reading the license (‘whatever’), but perhaps more to the point it relates the futility of disagreement. Similarly, turning off cookies, for example, would become so infuriating to the website visitor, receiving prompt after prompt, that even the Safari browser issues the warning: ‘Websites may not work if you do this’. ‘Participatory surveillance’ is the term often employed to describe the assurance of a seamless web experience. In order to participate online, one must allow cookies.
When searching Google, one agrees to at least three terms of service. The first is that one only searches Google through the search bar, which may sound trivial but in certain contexts of work (running batch queries) it is not. The second is that one agrees not to save the results. That prohibition would put paid to empirical work detecting the extent of personalization, for example, or to studies of results such as the conflict versus tourism comparison discussed above. The third is that one agrees not to create a derivative work from the results. There have been several art projects and other software projects that have broken these terms. The first is Newsmap, which won an award at Ars Electronica in Linz in 2004 (Weskamp, 2004). Newsmap sat on top of Google News and output a treemap showing which news stories were resonating the most across Google News. It displays an attention economy. Newsmap breaks the three terms of service in that it likely does not search Google through the search bar, it loads the results into a database (however temporarily) and it creates a derivative work, the news attention visualization, from the results. Another project, developed by the Dutch art group De Geuzen (2006), places the results of anxiety-related queries in local-domain Google Images side by side. The results of the queries 'terrorism', 'conflict', 'financial crisis' and 'climate change' each show different levels of societal concern, as expressed in the top images. (The project was discontinued when Google changed its advanced search settings, the same issue that befell Scroogle.) 'Rights Types: The Nationalities of Issues' also shows the top results for the query 'rights' across some 30 local-domain Googles, allowing one to compare cultural concerns (Rogers et al., 2009). The 'right to roam' is particularly dear to Finns, for example.
Lastly, ‘RFID: Wet and Dry’ displays the top hundred (thumbnail) images from the query ‘RFID’ in Google Images, and indicates whether the representation of RFID is wet (humans or animals in the picture) or dry (non-humans in the picture) (Digital Methods Initiative, 2007). Is RFID only about logistics and warehouse packaging, or are pets and humans, together with their collars and garments, tagged, too? Newsmap, the Anxiety Monitor, Rights Types and the others are all derivative works of engine results, and Google’s forbearance would be required to display such politics of images and representation.
Finally, with respect to individual Google products such as Maps, Paolo Cirio (2012) utilized Google Street View to create the artwork Street Ghosts, a series of Google Street View images printed out and glued in the same spots on the streets where the images of individuals were originally taken. It contributes to the debate concerning how Google takes unauthorized pictures, in the sense that it does not request permission to photograph people or their abodes. Google makes addresses and streets searchable, and shows pictures of them for panning and zooming. It does keep a so-called blacklist of properties and places that are not on Street View, or Google Earth for that matter, raising the question of how one would have one’s property removed from it, apart from having one’s city or country ban the Google vehicles. Preventing a drive-by may be of interest, too, given that the company acknowledged, in its collection of Wi-Fi data, that ‘in some instances entire emails and URLs were captured, as well as passwords’ (EPIC, 2017).
Conclusions: Looking for spaces to search
To summarize, I have discussed at least four varieties of Google critique that have arisen over the past two decades, and how they have been conceptualized and rendered in art and culture: Google objects and subjects, Googlization, information politics and licensing.
From the beginning, Google has promoted a particular web epistemology, one that has evolved from universal results to personalized ones, befitting the individual searcher and her (increasingly accurate) location. Engine use has evolved, too. Where one once consulted multiple pages of results, now only the top results matter. In fact, Google would like to provide the ultimate engine result – the perfect one – thereby transforming the web from a browsing and surfing space into a single Q and A. As a consequence of how users interact with Google, the very top of the results page, well above the fold, has become more and more valuable, as the eye-tracking study of the ‘golden triangle’ indicated. Google subsequently populated that expensive real estate with its own properties. Most recently Google has become a premediation machine, suggesting or autocompleting what one is typing (and thinking). It thereby massifies and flattens the Internet with everyone else’s searches, as the art group Studio Moniker pointed out with its work ‘State of the Queries’ (see Figure 5).
State of the Queries. Google art from autocompleted results. Source: Studio Moniker (2012).
There remains the question of which results are privileged, even when results are personalized. It is an inquiry of lasting interest, formulated two decades ago as the ‘preferred placement’ critique (Rogers, 2000). At the time AltaVista was accused of obscuring editorial content (or organic results) with advertising; one could buy the top engine results. Advertising in search engines has since changed, as one can also purchase words (in what Bruno calls ‘semantic capitalism’ and Thornton the ‘monetisation of language’). These ad products, whilst marked as such, remain prominently placed, above the fold and higher than the so-called organic results.
Though there is a newer term, GAFA (standing for Google, Apple, Facebook and Amazon), that captures the takeover of industries by ‘big tech’, and digital cultural imperialism more generally, Googlization continues to capture the idea, formulated by tech writers and more forcefully still by librarians. Letting the company digitize one’s holdings may have unintended consequences. In the debate in France, where the term GAFA originated, a particular expression was used to describe the decision whether to become Google bedfellows: ‘What matters the jug, if drunkenness be within?’ (Losh, 2009). The question, discussed above, concerns how Google derives commercial value from digitized books and artworks that were once public property. As Ubermorgen pointed out in its critique of Amazon’s ‘look inside’ books as a ‘cultural peepshow’, the ‘preview’ feature drives traffic, in this case to Google Books. Therein lies the eventual revenue, such as purchases of e-books for Android through the Google Play Store.
Google’s information politics, the third critique, were revealed in the results from its China engine, where the same ‘Tiananmen’ query output conflict on Google.com and tourism on Google.cn. They are also at work in Google’s suppressive treatment of websites in so-called spammy neighbourhoods as well as in its demeaning of the comment space. Once heralded as the site of ‘talking back’ and the end of mass media gatekeeping, the comment space, where links no longer count for the engine, became the ‘bottom of the web’. Both the spammy neighbourhoods and the web’s bottom came into being through their undervaluation and suppression by Google’s algorithms.
Saving engine results may shed light on what is missing from them, or on the extent to which the user is enveloped in a filter bubble, but doing so would break the terms of service, as critiques of Google’s licensing have laid bare. One may put up a notice asking for forbearance, or invite company reaction and document it as part of the artwork, as was the case for both the semantic capitalism work and Google Will Eat Itself. The specific licensing, also known as browse wrap, discourages algorithmic observability, or the capacity to study missing results, privileging mechanisms as well as filter bubbles.
Finally, 20 years on, the engine finds itself looking for spaces to search. As one writer put it: ‘“What” came first, conquered by Google’s superior search algorithms. “Who” was next, and Facebook was the victor. But “where”, arguably the biggest prize of all, has yet to be completely won’ (emphasis added) (Fisher, 2013). In the event, rather than digging deeper online, Google’s product expansion lies in creating ‘locative media’ by capturing and digitizing physical places and spaces. With projects such as Google Places and Google Street View, it continues to envelop, or enclose, more spaces for search.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
