Abstract
Since its launch in 2005, Google Maps has been at the forefront of redefining how mapping and positionality function in the context of a globalizing digital economy. It has become a key socio-technical ‘artefact’ helping to reconfigure the nexus between technology and spatial experience in the 21st century. In this essay, I will trace Google’s evolving strategy in the mapping space. I will argue that the evolution of Google Maps exemplifies the way in which a contemporary digital platform ‘succeeds’ by becoming embedded as a foundational resource for a variety of other uses and services. At one level, this can be understood in terms of what Gillespie has conceptualized as the ‘politics of platforms’, contributing to the emergence of what has recently been dubbed ‘platform capitalism’. At a deeper level, I will argue that Google Maps exemplifies the complex dynamics of what Simondon calls ‘technical objects’ that always exist in relation to both an evolving technical system, and the other systems constituting a more or less integrated social milieu.
I guess, naively perhaps, we hoped we could have one global map of the world that everyone used, but politics is complicated.
Maps are distinctive artefacts that humans have produced for millennia. They are sites at which the relation between knowledge, technics and space becomes an explicit matter of concern. If the dream of the perfect map conjures a certain limit of modern representation – memorably dramatized by Borges and dramatically theorized by Baudrillard – contemporary scholarship has tended towards an understanding that a map is less the representation of a pre-existent world than an intervention that participates in producing the world it purports to describe.
While maps have a long history, the last two decades have seen the dawn of what is widely acknowledged as a new era in mapping. Building on earlier developments in geographic information systems (GIS) and computing, the rise of web-based mapping has meant that all kinds of attributes – from the data maps display to where they are accessed, how mapping works as an industry to its role in everyday life – have been subject to rapid transformation. Crampton (2008) expresses one key dimension of change when he observes that ‘for most of its history, mapping has been the practice of powerful elite’ (p. 206). In contrast, contemporary mapping is marked by the upsurge of what he calls ‘populist cartography’, as a wider range of actors have gained access to the means of producing and distributing maps. At the same time, geospatial data has emerged as one of the keystones of the contemporary digital milieu in which the capacity to calculate positionality becomes an operational logic joining commercial profit-seeking agendas to state-based strategies of governmentality and security.
The challenge is to understand the broader logic of this transformation without losing sight of its different trajectories and specific instantiations. As Nigel Thrift (2009) has remarked in another context, ‘Detail counts’ (p. 125). In this essay, I argue that the reinvention of mapping has distinctive lessons for how we might understand the implication of media technology in the remaking of contemporary social life. The process by which maps become digital platforms locates mapping as a core – and indeed constitutive – element of the historical trajectory which I have elsewhere described as the shift from media to ‘geomedia’; a condition characterized by media becoming increasingly ubiquitous, place-aware and supportive of real time feedback (McQuire, 2016).
This essay focuses on Google Maps, which is a clear market leader in online digital mapping for the consumer market. As Gannes (2015) has noted, ‘You may quibble with how Google delineates some geopolitically contentious area, or dislike one of its interface redesigns – but modern maps are the way they are because of the scale of Google’s investment and ambition’. In what follows, I begin with a description of the emergence and evolution of Google Maps since 2005. This leads into an analysis of the growing strategic importance of mapping and geospatial data to the contemporary digital economy. Finally, my concern turns explicitly to the models we might use for understanding this trajectory. Part of the argument I unfold in this essay is the contention that digital mapping platforms decisively alter the conditions under which maps are produced, circulated and used.
Googling the map
Google Maps was launched to the public in February 2005. It was initially based largely on the work of Danish-born brothers Lars and Jens Rasmussen, who had been developing a desktop program to rival existing digital mapping services such as MapQuest. 1 When Google acquired the Rasmussens’ start-up Where 2 Technologies in 2004, the project pivoted to a browser-based offering. The decision proved consequential. As a JavaScript web application, Google Maps could load and assemble map tiles in the browser without the user’s computer needing any special software, giving users the then unprecedented experience of exploring the map without having to reload or refresh the page. When the app was leaked to users the day before its scheduled launch, it attracted some 10 million views. 2
Several factors combined to propel Google Maps to a position of market dominance. Beyond the fact that its browser-based application fitted the coming shift from desktop software to web and cloud-based applications, I’ll discuss four attributes that distinguished the Maps platform: (a) rapid integration of satellite imagery, (b) adoption of a ‘participatory’ strategy, (c) creation of exclusive data streams and (d) development of a mobile mapping platform.
Prior to the launch of Maps, Google had acquired Keyhole Technologies in 2004. Keyhole was already a successful geospatial data visualization company and its EarthViewer software became the basis for Google Earth in 2005. 3 Using the EarthViewer protocol of dividing the earth into millions of ‘tiles’, Google Earth offered web users unprecedented opportunities to see their own locales in a new way. Gannes (2015) reports an episode in which Google co-founder Sergey Brin decided to buy Keyhole while surrounded by Google executives all clamouring ‘Do me! Do me!’ They were shouting their addresses so as to be able to watch the computer ‘zoom’ down from a ‘God’s eye’ view to focus on a close-up of their own homes. The launch of Google Earth meant the same experience became available to millions of other users. In the process, the rarefied nature of satellite imagery – once the purview of the military and expensive proprietary applications run by companies such as Keyhole – became an accepted part of everyday life. Google integrated satellite imagery into Maps from mid-2005, utilizing the Google Earth database from 2006. Satellite imagery not only significantly boosted the popularity of Google Maps but has also become increasingly important in providing some of the data that today enables Google to build its own maps.
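The tile pyramid that EarthViewer pioneered remains the basis of web mapping today: the projected globe is cut into a grid of square tiles indexed by zoom level, column and row, so that a browser need only fetch the handful of tiles currently in view. A minimal sketch of the standard ‘slippy map’ tile addressing used by most public web maps – a general illustration of the technique, not Google’s internal code:

```python
import math

def deg_to_tile(lat, lon, zoom):
    """Convert a WGS84 coordinate to Web Mercator ('slippy map') tile
    indices. At zoom z the world is a 2**z x 2**z grid of square tiles."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    # Mercator projection of latitude, scaled to [0, 1] from north to south
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

# At zoom 0 a single tile covers the whole world
print(deg_to_tile(51.5074, -0.1278, 0))   # → (0, 0)
# Central London resolves to one tile among roughly a million at zoom 10
print(deg_to_tile(51.5074, -0.1278, 10))  # → (511, 340)
```

Each increment of the zoom level quadruples the number of tiles, which is why serving a seamless, continuously zoomable world map is above all a problem of data storage and distribution rather than of rendering.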
A second and arguably even more significant threshold was Google’s decision to open its Maps Application Programming Interface (API) to developers in June 2005. This wasn’t entirely planned. Soon after its release, Google Maps had been reverse engineered by outsiders to produce map mash-ups. For instance, software developer Paul Rademacher used Google’s service to plot Craigslist.com apartment listings on his ‘Housingmaps’ website. When this and other popular hacks came to Google’s attention, rather than shut the hackers down, Google legitimated them. The formal release of the Maps API encouraged a vast wave of third-party development, in which Google Maps was overlaid with different data sets and integrated into external websites. Google Maps quickly became the web’s most popular mash-up, with over 350,000 sites using it in its first year. Google’s ‘participatory’ strategy was strengthened with the launch of the Map Maker map-editing software in June 2008. Borrowing from the popular participatory mapping project OpenStreetMap (OSM), which had started in the United Kingdom in 2004, Map Maker enabled users to add features directly onto a Google map, with changes appearing after review by Google moderators. 4
The rapid growth of Google Maps has since become a textbook example of a participatory commercial strategy, and it was one of the exemplars cited by entrepreneur Tim O’Reilly (2005) in his influential Web 2.0 manifesto. In an article published to mark the first decade of Google Maps, Liz Gannes (2015) reported, ‘By the end of 2006, less than two years after launch, Google Maps was the largest maps provider in the world. Soon it was Google’s second-most trafficked site, after Google.com’. This position of leadership has remained consistent ever since, despite further significant changes in the mapping field.
A third key moment in developing the Maps platform was the deployment of Street View in 2007. Like other developments in this domain, the history of Street View is itself multi-layered. Google’s interest was initially flagged when it sponsored a project at Stanford to develop 360-degree camera technology. 5 However, at the time of Street View’s launch, the service used 360-degree panoramas provided by a third party, Immersive Media. This was followed by other third-party camera systems, before Google brought the process of data capture in-house. These details are significant because the shift indexes Google’s recognition of the growing importance of control over core data assets – something I will discuss further in the next section. Street View offered Google Maps users a distinctive experience and constituted a point of difference from competitors. Over the longer term, it has become increasingly important as an exclusive and proprietary data source that is now fundamental to Google’s mapping capabilities.
A fourth key moment in the rise of Google Maps was the development of the mobile Maps app towards the end of 2007. This was facilitated by the 2005 acquisition of another small start-up, Zipdash, which had been working on a traffic congestion application. Since the majority of mobile phones at this time did not include GPS technology, the beta version of Google’s ‘My location’ service also used triangulation through cell phone towers. A huge boost to the prominence of Maps was the inclusion of Google technology as the default mapping app on the first iPhone. While Apple designed the interface, Google – through the Zipdash team – built the app, in an arrangement that lasted half a decade. The Google Maps mobile app was not officially released until September 2008, coinciding with the announcement of the first commercial Android mobile device. Tensions over access to iPhone user data, as well as Google’s own growing ambitions in mobile devices, would eventually lead to the breakdown of relations between the two companies. When Apple finally decided to drop Google Maps as the iPhone’s native mapping app with the release of iOS 6 in 2012, Google developed a standalone Maps app for iOS. Within 2 days of release, it had reached 10 million downloads. Google Maps has long been one of the most popular mobile apps in the world. It now has more than 1 billion users a month and draws revenue measured in billions from local search ads and promoted pins. 6 As I will argue below, its strategic importance is likely to become even greater in the future.
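The geometry underlying tower-based positioning can be illustrated with a toy trilateration: given three transmitters at known positions and an estimated distance to each, subtracting pairs of circle equations yields a small linear system. This is a deliberately simplified, flat-plane sketch of the general principle only; the actual ‘My location’ service worked from databases of cell tower identifiers and signal measurements rather than clean distance estimates.

```python
def trilaterate(towers, distances):
    """Estimate a 2-D position from three known transmitter locations and
    the distance to each. Subtracting pairs of circle equations
    (x - xi)**2 + (y - yi)**2 = di**2 removes the quadratic terms,
    leaving a 2x2 linear system (exact when distances are noise-free)."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = distances
    # Linear system A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    # Solve by Cramer's rule
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Three towers at known positions; the handset is actually at (3, 4)
print(trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65 ** 0.5, 45 ** 0.5]))
# → (3.0, 4.0)
```

In practice distance estimates derived from signal strength are noisy, so real systems combine many more measurements and solve for the position by least squares.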
This brief history demonstrates several things. First, Google Maps is not a single entity but a composite of different parts that not only developed at different rates but also continue to develop even as they mesh into the Maps platform. As a result, Maps remains – characteristically of the digital milieu – ‘technology in motion’, without a definitive, final or stable form. Second, the controversies, particularly around the Street View rollout, demonstrated the ambiguous and uncertain nature of the terrain that the new mapping platform was seeking to occupy. 7 It also exemplified Google’s characteristic modus operandi: rather than seek permission in advance, the strategy was to launch fast and work out any problems later.
Third, Google Maps highlighted significant mutations in what a map was, or could become. While maps had long plotted the spatial dimensions of different data sets such as climate and population, the digital environment made this orientation far more prominent. Numerous data sets were becoming available in digital form, greatly reducing the cost and effort required to produce varied maps from them. Equally significant, access to web mapping software was enabling different sets of actors – not just companies such as Google but thousands of interested individuals – to become involved in mapping. This marked a significant change in the ‘knowledge infrastructure’ of cartography. 8 Descriptors emphasizing the participatory credentials of this new paradigm, including ‘Web mapping 2.0’ (Haklay, Singleton, & Parker, 2008), ‘populist cartography’ (Crampton, 2008) and ‘wikification of GIS’ (Sui, 2008) soon emerged. In this context, the map-using ‘public’ could no longer be understood simply as either ‘audience’ or ‘consumers’ since they also became co-creators and co-curators of ‘content’. However, as I will argue below, understanding this shift simply in terms of a trajectory of ‘democratization’ is over-optimistic.
Finally, the development of Google Maps pointed to an incipient change in the strategic status of mapping itself, as geospatial data became a core asset of the wider digital economy.
If data is the new oil, why is there so much friction?
The Google Maps platform depends on nested technologies linking developments across software, networks, servers, mobile devices, digital imaging and data extraction. The assembly process also involved multiple fronts, including a series of strategic acquisitions to gain expertise and IP, as well as the licensing of third-party technologies and data. In addition, it involved a series of internal strategies, notably the willingness to commit significant amounts of capital. 10 Above all, Maps required the capacity to acquire and control prodigious amounts of data, assembling ‘big data’ into new services within a vast eco-system that increasingly included so-called ‘end-users’ – the public. Public participation, such as the provision of co-created content, was crucial to the success of Google Maps. But it is Google’s structured shaping of such participation – enabling it while setting constraints on it – through a combination of technical, cultural and legal protocols that defines its particular enterprise.
When Google Maps began in 2005, Google was a late entrant to the field. There were already a number of existing services led by digital mapping pioneer MapQuest, as well as newer rivals such as Microsoft, Amazon and Yahoo who were all keen to expand into the consumer mapping space. Google’s victory in these first ‘map wars’ was due to a number of factors but, above all, it underlines the importance of control over data in the digital milieu. In his ‘Web 2.0’ manifesto, which proclaimed ‘data is the next Intel inside’, Tim O’Reilly (2005) used MapQuest as a high profile – albeit negative – example:
The now hotly contested web mapping arena demonstrates how a failure to understand the importance of owning an application’s core data will eventually undercut its competitive position. MapQuest pioneered the web mapping category in 1995, yet when Yahoo! and then Microsoft, and most recently Google, decided to enter the market, they were easily able to offer a competing application simply by licensing the same data.
As O’Reilly observed, Google initially established Google Maps by licensing map data from third parties. It took the same approach when developing Street View, buying data from Immersive Media. However, no doubt prompted by a pair of multi-billion dollar acquisitions in 2007, when automotive navigation company TomTom bought Tele Atlas and mobile phone company Nokia bought Navteq, Google soon moved to take greater control over its data sources. 11 This shift in strategy was evident in decisions such as bringing the Street View project completely ‘in-house’, as well as purchasing its own SkySat earth-imaging satellites in 2014. 12 But most important was the decision to develop a proprietary internal software platform, Atlas, that would be capable of integrating multiple data sources into the master-map known internally as ‘Ground Truth’.
Google now builds its own maps by combining a variety of public and purchased data with its own exclusive sources. Traditional public map data includes maps drawn from ‘authoritative’ sources such as the US Government Census Bureau’s TIGER database of reference maps. However, as the last decade has made clear, these maps lack the degree of detail needed to support the new services, such as turn-by-turn driving directions, that mobile digital mapping affords. To this end, Google augments its ‘Ground Truth’ master-map with various data streams from city-level maps, aerial and satellite imagery. 13 It integrates these with proprietary data streams such as that captured by Street View. It also brings in non-traditional forms of public data, namely data volunteered by or harvested from various ‘publics’. This includes user-edits to maps contributed through Map Maker (up to 2018), as well as data from newer features such as ‘Report a Missing Place’ and ‘Suggest an edit’. 14 It also includes data from the two billion-plus Android phones now using Google Maps as a native app. At the time of writing, Google is moving to integrate crowd-sourced reports about ‘realtime’ driving conditions into Maps, following the model popularized by Waze, a start-up it purchased in 2013.
How all this data is captured, cleaned and processed is itself a complex undertaking. Street View, for instance, contributes to the overall mapping endeavour by generating at least three levels of data that Google uses in constructing ‘Ground Truth’. One is driver experience, which helps to confirm basic attributes such as whether a street shown on a map actually exists. Second, the Street View vehicles capture GPS and other metadata, which enables spatial correlation of the millions of images within the maps database. Third is the stream of 360-degree imagery itself, which has become increasingly important as a source to be datamined for urban mapping. In particular, Google has exploited the growing capacity to extract words from digital images by using optical character recognition (OCR) software. Data extraction from Street View imagery – a process that is part algorithmic and part manual – has been one of the keys to Google’s developing superior accuracy in offering detailed driving directions compared with its competitors. Alexis Madrigal (2012b) reported being astonished by the number of people employed in the Maps team to clean and process data as part of the ‘Ground Truth’ process. While Google’s willingness to invest in what Plantin (2018) aptly calls ‘protocol labour’ has been important, so has its deployment of new ways to crowd-source the process of data cleaning and curation. For instance, since Google bought the reCAPTCHA verification software in 2009, it has deployed it as a means of interpreting images from its various scanning projects including Street View. Through reCAPTCHA, users are invited to confirm that they are not a ‘bot’ by clicking on images that contain buildings or road signs, or by transcribing house numbers from photographs. This data all contributes to updating Google Maps.
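The second data stream mentioned above – using GPS metadata to spatially correlate imagery – can be sketched as a simple proximity query built on the haversine formula for great-circle distance. The record layout and the 2 km threshold here are invented purely for illustration:

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def nearby(images, lat, lon, radius_km):
    """Return image records whose GPS metadata falls within radius_km of a
    target point -- the kind of spatial join that lets captured imagery be
    matched against map features."""
    return [img for img in images
            if haversine_km(img["lat"], img["lon"], lat, lon) <= radius_km]

shots = [{"id": "a", "lat": 51.5007, "lon": -0.1246},   # central London
         {"id": "b", "lat": 48.8584, "lon": 2.2945}]    # Paris
print([s["id"] for s in nearby(shots, 51.5074, -0.1278, 2.0)])  # → ['a']
```

At production scale this naive scan is replaced by spatial indexes, but the underlying operation – joining imagery to places via coordinates – is the same.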
Providing better directions for drivers to navigate their way through the streets of a city is only the first in a suite of potential uses. Drilling further and further down into the semantic grain of the city is the next ambition. As then Google Maps VP Brian McClendon stated in a 2012 interview,
We already have what we call ‘view codes’ for 6 million businesses and 20 million addresses, where we know exactly what we’re looking at. […] We’re able to use logo matching and find out where are the Kentucky Fried Chicken signs … We’re able to identify and make a semantic understanding of all the pixels we’ve acquired. That’s fundamental to what we do. (Quoted in Madrigal, 2012a)
Mapping can – and already does – generate significant revenue as a standalone service, even for a company of Google’s scale. But the full value of Maps to Google is the way it is now entrenched in the broader digital eco-system. Google Maps has become one of the basic platforms on which a host of other software operations are now built (Thatcher, 2014). Google’s structural dominance in the maps sector increasingly creates challenges for others, from small businesses that are left vulnerable to changes and updates in map data and terms of service (Dalton, 2015, p. 1042) to larger competitors such as Apple, Microsoft and ride-hail platforms such as Uber and Lyft. Most vitally for Google, the embedding of the Maps platform not only gives Google growing revenue from sites that make commercial use of its services but also access to huge volumes of user data. Here, we see a new version of the kind of synergy that once defined the public culture of the newspaper: advertisers paid to display classified advertising, while readers paid to see them. For Google, large-scale ‘free’ use of the Maps app not only provides the eyeballs to generate advertising revenue but also generates the user data that has become critical to maintaining the accuracy and especially the timeliness of the map. This synergy is the lifeblood of data-based ‘surveillance capitalism’ (Zuboff, 2019) that Google has done so much to operationalize.
Tech blogger Justin O’Beirne (2017) posits four stages in contemporary map-making. The first two involve acquiring data, either by licensing it or collecting it. The third and fourth concern the capacity to process that data and to create new data out of combinations of existing data. O’Beirne takes Google’s ‘Areas of Interest’ (AOIs) feature as an example of this final stage:

With ‘Areas of Interest’, Google has a feature that Apple doesn’t have. But it’s unclear if Apple could add this feature to its map in the near future. The challenge for Apple is that AOIs aren’t collected – they’re created.
Capacity to generate new features through combinations of extracted and processed data is now a strategic frontier in digital mapping. O’Beirne notes that one of the common problems that ride-hail drivers and their customers report is the inability to find the correct entrances to buildings. Such a concern highlights the new level of detail that is now demanded of a ‘map’. It is not simply about locating an individual address but understanding how it ‘works’ as part of an urban locale. Google’s decade-long investment in Street View has given it a vast database of high-resolution imagery from cities around the world, while its experience in extracting place information from this database makes the mapping of new features such as building entrances a logical ‘next step’. Rivals either lack an equivalent database or the capacity to extract a similar level of information. O’Beirne describes Google’s multi-year lead in gathering, assembling and extracting map data as the company surrounding itself with a ‘moat of time’.
Developments such as mapping the entrances of buildings across entire cities demonstrate once again that the digital map has ceased to be a static representation and is becoming a continuously updated operational interface to the spaces it depicts.
Google began life in 1998 as a company famously dedicated to organizing the vast amounts of data on the Internet. But over the last two decades its ambitions have changed in a crucial way. Extracting data such as words and numbers from the physical world is now merely a stepping stone towards apprehending and organizing the physical world itself.
Google Maps, ‘platforms’ and ‘infrastructure’
As Tarleton Gillespie (2010) perceptively pointed out a number of years ago, the term ‘platform’ has become a key rhetorical device in the digital landscape. Combining newer computational connotations such as programmability with older democratic tropes drawn from architecture and politics, Gillespie (2010) argued that ‘‘platform’ emerges not simply as indicating a functional shape: it suggests a progressive and egalitarian arrangement, promising to support those who stand upon it’ (p. 350). The term has since further expanded to become a general description of networked services, and even of a distinct moment of capitalism (Srnicek, 2017). Three factors are critical to most contemporary understandings of ‘platforms’: (a) use of ‘big data’; (b) controlled permeability through a mix of technical, legal and socio-cultural protocols; and (c) growing scale and centrality.
Google Maps is a quintessential ‘big data’ service, combining multiple and heterogeneous data streams at extremely large scale. By 2012, Maps already comprised more than 20 PB of data. In 2013, Google (2013) reported that the Maps team were publishing more image data every 2 weeks than Google possessed in total in 2006. While such eye-watering figures have undoubtedly escalated further in the ensuing years, this doesn’t mean that the data are ‘comprehensive’. There are still numerous gaps and absences in Google Maps, and these details matter. 16 Moreover, the consequences of being or not being ‘on the map’ can be complex. I will return to these points below. But first it is important to acknowledge the structural asymmetries that working in a ‘big data’ space imposes. It is not simply about being able to capture or access data but also demands the ability to use it. Google Maps is one asset of a vast company that has made its name by routinely working with data at a scale that few other companies – or governments for that matter – can rival. The need for expensive infrastructure, from server farms and network capacity to computing power and proprietary software, significantly limits the number and type of actors capable of operating in this space. This factor, which generates oligopolistic if not monopolistic tendencies, raises major issues concerning platform governance.
As I’ve argued above, understanding the controlled permeability of public ‘participation’ is critical to understanding the function of the Maps platform. In an earlier article, Gillespie contrasted the iPhone with earlier generations of technology, from cars to self-built computers, that had allowed and even encouraged various forms of ‘tinkering’. Capacity to modify was partly about design of the object, but it also depended on specific ‘cultures of use’, such as the competencies developed and shared through car or computer clubs. In comparison, the iPhone imposed such strict controls that even changing the battery yourself could void its warranty. Gillespie (2006) argued that the foreclosure of ‘tinkering’ on consumer digital devices through a combination of design and law was producing a very different social relation to technology:
Not only is the technology being designed to limit use, but to frustrate the agency of its users. It represents an effort to keep users outside of the technology, to urge them to be docile consumers who ‘use as directed’ rather than adopting a more active, inquisitive posture towards their tools. (p. 653)
Google Maps began as a curious hybrid of such approaches. It was not entirely ‘sealed off’ like an iPhone; on the contrary, as I have discussed, it has consistently made innovative use of participatory strategies, such as enabling third-party mash-ups, allowing user customization and inviting crowd-sourced provision of map data. In some respects, it aligned with Gillespie’s ‘tinkering’ culture – but only up to a point. A key attribute of the Maps platform is Google’s extensive claim to IP over data, including user volunteered data such as condition reports and map edits. For instance, Map Maker’s Terms of Service (Google, 2017a) vested Google with a ‘perpetual, irrevocable, worldwide, royalty-free, and non-exclusive license to reproduce, adapt, modify, translate, publish, publicly perform, publicly display, distribute, and create derivative works of the User Submission’. On its closure after a decade of operation, Google acknowledged that the ‘Google Map Maker community has edited and moderated millions of features to improve the Google Maps experience’ (Google, 2017b). Despite this very significant collective effort, Map Maker did not result in a commonly owned or collectively shared legacy. Ström (2017, pp. 160–161) argues that Map Maker’s use of crowdsourcing mapping data was ‘something closer to outsourcing’, while OSM board member Mikel Maron (2011) accused Google of ‘appropriating the appearance of open data community methodologies, yet maintaining corporate control of what should rightfully be a common resource’.
These sorts of concerns underpinned criticism of the World Bank entering a large-scale partnership with Google, in which the Bank provided the Map Maker platform to governments and non-governmental organizations (NGOs) around the world. As Turner (2012) noted,
The basic idea is that the crowd sourcing of maps (‘Hey, there’s a clinic over here!’) is a good thing in disaster situations and that the Bank, which has unique access to governments, could encourage governments to make better use of citizen-sourced map information to respond to emergencies.
More troubling was the fact that such citizen-sourced information would not be easily available for re-use by the public, or even by governments or the Bank, without Google’s permission. Maron (2011) observed,
I totally get why African governments and techies are excited about Google …. To most people, Google is not just a company, but a force for good in the world. They even forget its a business, with so much done for ‘free’. But remember, it’s an extremely lucrative business. […] They see value in owning your data. They’re moving to own the data of communities and governments in Rwanda, Kenya, Zambia. They can do whatever they like with the data, they own it.
Concern over data control plays out in other ways. There is a striking contrast, for instance, between Google’s willingness to let users ‘play’ by making ‘mash-ups’, and the strict enforcement it applies not only to protecting its source code but also the geographic data generated on its platform (Google 2019). As Dalton (2015) notes, ‘Technically, scraping Google’s geographic data is easy, but it may draw the attention of Google’s legal department’ (p. 1040).
This asymmetry of data use reveals the changing nature of ‘technical objects’ in a digital milieu. Unlike industrial technical objects, where the key issue was capacity to access and use the object (such as a video or a car), the digital platform is notionally ‘free’ to its users. But free access to the map, and even the capacity to customize it for personal use, does not conflict with Google’s economic interest. In fact, it consolidates it. As a digital platform, Google Maps functions not by preventing user ‘tinkering’ but by directing it down certain channels. Srnicek (2017) extrapolates this strategy to define the ‘platform’ as ‘a new business model, capable of extracting and controlling immense amounts of data’ (p. 4).
A related set of concerns stem from the growing scale and centrality of digital platforms, which means that a service such as Maps becomes a powerful force in the shaping of social realities. Luque-Ayala and Neves Maia (2019) offer a rich analysis of a Google Maps project, Ta no Mapa (‘It’s on the Map’), undertaken in the favelas of Rio de Janeiro. 17
While at pains to not simply dismiss the capacity of mapping initiatives to empower local inhabitants, they describe a more mixed set of outcomes. The particular project involved data gathering by teams of local residents (‘field agents’) who were contracted for between two and four months of work, each mapping 10 to 15 points of interest (POIs) a day. Luque-Ayala and Neves Maia (2019) note,
Participant observation at training workshops revealed the distance between the richness of the mental map of favela dwellers and the spatial simplification and standardisation required by the global digital map. Participants frequently debated what a ‘point of interest’ was and what type of information was to be recorded. […] When a participant suggests that he would not map Umbanda temples, out of fear of prejudice, the project coordinator explains that for Google neither religion nor political orientation matters; in his view, what matters is to provide information, such as the location of moto-taxis, popcorn carts, football fields … a ‘point of interest’ is simply defined by him as ‘whatever is interesting!’ Yet he recommends field-agents not to register where a homicide occurred, or local problems such as open ditches or fly tipping; rather, he recommends mapping business and touristic sights. (p. 8)
This slanting of POIs towards commercial and consumer interactions is perhaps unsurprising, but embedding this orientation in a prominent mapping platform has the potential for long-term consequences. If mapping inevitably disseminates a certain picture of the social world, Luque-Ayala and Neves Maia (2019) argue that Google Maps produces ‘a calculative spatiality that prioritises economic interactions’ (p. 10). Being put ‘on the map’ has particular implications in the context of Rio’s favelas, from producing potentially disastrous outcomes for tourists to enrolling favela inhabitants in global political and economic logics that don’t necessarily advance their own interests (Luque-Ayala and Neves Maia, 2019, p. 13).
Adopting a broader historical perspective, we might compare the commercial-consumer orientation of Google Maps, which fits Google’s advertising-driven revenue model, to an earlier episode in the ‘mediatisation’ of urban space. As historian David Nye (1994) has argued, the adoption of the new medium of electric signage in the context of the rapid growth of US consumer capitalism in the early 20th century resulted in the creation of a new urban environment, one that ‘can quite literally be called the landscape of corporate America’ (p. 198). Electric ‘brandscapes’ eventually became a defining feature of contemporary cities right across the globe. Embedding advertising in urban public space has not only helped to normalize commodity relations but also meant that other possible uses of urban communication infrastructure such as signage and screens remained in a state of atrophy. Google Maps has the potential to impose a similar commercial-commodity orientation over online digital mapping.
This is partly a function of the specific priorities that Google establishes, in which local knowledges are collected, edited by a combination of offshore ‘experts’ and proprietary algorithms, and then repackaged for both local and general consumption. Based on their observations, Luque-Ayala and Neves Maia (2019) argue this amounts to ‘a spatial neocoloniality that aims to depoliticise space, translating the needs and means of the market but not necessarily those of local dwellers’ (p. 9). The impact of such a setting is compounded by the scale and ubiquity of Google Maps, which highlights concerns emerging around the transformation of platforms into infrastructure. As Plantin observes,
It does not mean that Google is replacing the existing infrastructure for cartography – in fact, it strongly relies on several of its components, such as standards and base maps – but that it constitutes a mapping platform that has reached a scale and social status that was previously attained only by knowledge infrastructures. […] On the one hand, Google Maps is a platform, inasmuch as it relies on the programmability of its content and on multiple forms of participation from users; on the other hand, by being the most widely used mapping service and by powering numerous everyday third-party applications, Google Maps provides a service without which contemporary societies could hardly function anymore, similar to infrastructures. (p. 490)
Nearly two decades ago, Graham and Marvin (2001) noted the historical shift in infrastructure provision that was occurring under neo-liberal policy settings, as user-pays access to market-allocated services was becoming the default model for providing what had formerly been ‘public’ services in domains such as transport, energy and telecommunications. This shift, which has always had significant ramifications for ‘public’ culture, has arguably become more complex as private, profit-oriented digital platforms become knowledge infrastructure. Traditional public infrastructure in democratic polities had formal commitments (albeit often unrealized) to providing universal service, coupled to the capacity for local populations to exert some level of political control over service provision. While the new forms of privately provided infrastructure may offer some forms of (contracted and subsidized) universal service, there are major limitations on capacity for public oversight of their decisions and operations. The dearth of public policy – and even public debate – about regulation in this domain is particularly striking in relation to dominant knowledge infrastructure platforms such as Google Search and Google Maps.
In contrast to an earlier generation of communication ‘platforms’ such as broadcast media, the Internet developed using ‘light touch’ regulatory settings. These were already characteristic of the telecommunications sector, where the emphasis was on ‘carriage’ rather than ‘content’. Importing such regulatory settings into the new Internet environment seemed to make sense because there were fewer constraints on the number of ‘channels’ that could operate in particular territories compared with broadcast media. However, this technical difference was clearly compounded by the historical context of Internet growth, corresponding in the West with the dominance of neo-liberal policies favouring market-based solutions. But what happens when a carriage service grows into a ‘platform infrastructure’, which – as with Google Maps – becomes a dominant service provider? Does this new situation demand the invention of new mechanisms to protect ‘public interest’? Should there be greater transparency, for instance, in decision-making processes? If so, how might this be achieved? The idea of treating certain digital services, such as Facebook and Google Search, as ‘public relevance algorithms’ has been put on the agenda (e.g. Gillespie, 2014, p. 168), and public concern was exacerbated in 2018 by the Cambridge Analytica scandal. But how such oversight might work in practice is not simple. In a context where even the software engineers designing the algorithms don’t fully understand how they produce the final ‘results’, achieving effective oversight of big data services is extremely challenging.
These problems are further compounded by the global reach of major platforms, which brings in complex issues of jurisdictional geography. Raising such questions is not a reason to abandon the ambition of public oversight but to recognize the complexity of the challenge. It is also important to acknowledge that these difficulties conjoin a longer history of the challenges in achieving effective public oversight of complex technical projects. Nevertheless, it remains striking how few mechanisms currently exist for any formal public review of content on something like Google Maps – which leaves its massive user base reliant on the company’s judgement of various and varied community sensitivities refracted through the prism of its own commercial interest. Google Maps already has to adjudicate numerous contentious issues, from the treatment of disputed border zones to the choice of names for geographical features. To give a minor example, in 2017, I visited a site in north-west Australia that has been known as ‘Geikie Gorge’ since 1883 when it was named in honour of Sir Archibald Geikie, Director General of the Geological Survey for Great Britain. Geikie never visited Australia. The Bunuba traditional owners now want the site to revert to its indigenous name of Danggu. At present, Google Maps shows only Geikie Gorge and a search for Danggu draws a blank. This may well change at some point in the future, but the point is that, at present, there is no formal mechanism for public involvement.
Plantin goes on to argue that, as Google Maps has assumed the level of ‘infrastructure’, it has been increasingly forced to address traditional infrastructural challenges. As I have suggested above, a distinctive characteristic of the digital platform as infrastructure is that it is designed so that tasks such as routine updating of content can be realized at least partly through structured modes of public participation. It is from this perspective that Plantin (2018, p. 500) argues that the maturation of Google Maps as a platform infrastructure involves a certain reversal, in which the earlier decentralization of cartography enabled by the advent of the ‘geoweb’ is being subjected to a significant recentralization in which ‘participation’ is converted into ‘database maintenance’. As I have argued elsewhere (McQuire, 2016), this dialectic between ‘participation’ and ‘inscription’ is a defining ambivalence of contemporary geomedia. The decentralized technical architecture of the Internet and its related digital ecology which enabled an explosion in user-created content, peer-based communication and forms of horizontal collaboration has also become the architecture for mass data capture and the means for repackaging user-created content as proprietary services.
Focusing on the contemporary tendency for certain digital platforms to become ‘infrastructure’ is important in shifting our understanding of the digital realm. It avoids the still frequent dichotomies of techno-enthusiasm and techno-pessimism and points us instead towards addressing the details of the different sorts of assemblages that are being – and
However, while social construction approaches have enabled richer and more complex accounts of the relation of ‘technology and society’, they remain less well equipped to consider longer-scale modulations in ‘human’ identity. In the final section of this essay, I want to return to the question of Google Maps as a digital technical object and ask what Google Maps tells us about the process of technological development in the present. What is at stake in the real time digital map?
The time of technical objects in a digital milieu
Philosophy, following Aristotle, has traditionally tended to see technology as external to the essence of human being. This corresponds to what Heidegger (1977) termed the instrumental or anthropological definition of ‘technics’. Such a determination has become increasingly pertinent in an historical context in which the very ‘nature’ of technology seems to be changing significantly, bringing the traditional space of ‘human’ agency directly into play. How, for instance, should we understand contemporary technological developments in which not only productive capabilities but also decision-making processes of all kinds are being modulated by ‘machine learning’ and ‘artificial intelligence’ that operates at a scale and speed bearing little or no resemblance to calculative practices of the past? The historical shift from machines that ‘do’ to machines that ‘think’ is no longer a lab-based abstraction but is now being rapidly operationalized in everyday life.
In contrast to the instrumental definition, Heidegger famously understood technics as
Bernard Stiegler refuses the temptation to read Heidegger’s account according to a traditional narrative in which ‘technology’ takes over humankind. Instead he offers a more radical reading that challenges the view of technology as ‘external’ by positing ‘technics’ as
In this, Stiegler was influenced partly by Gilbert Simondon (1958/1980) who tried in his own work to ‘stimulate awareness of the significance of technical objects’ while avoiding the tendency to produce polarized readings of technology in terms of ‘idolatry’ or ‘threat’ (p. 2). Simondon sought to understand technical objects not in terms of function but from the point of view of their genesis and evolution. Technical objects always existed in a broader ‘technical milieu’, replete with ‘tendencies’ – such as standardization – and ‘trajectories’ that were partly determined by internal relations, but also by the relation between the technical and other domains such as culture, law, politics and religion.
One thing that has changed in the present is, precisely, the relation between the technological and other domains. Stiegler (1998, p. 42) argues that the profound acceleration of technical evolution in the 20th century, as older processes of technological invention were more tightly integrated into an organized system of profit-motivated innovation, has created growing instabilities with the various other systems – socio-cultural, political, economic and biological – with which the technical is articulated. Moving too fast to allow previous modes of ‘appropriation’, the technical does not become ‘autonomous’ but assumes a certain mode of ‘leadership’ in relation to other domains. While this ‘leadership’ has arguably been expanding throughout the 20th century, the trajectory has recently been both heightened and concretized by the processes of digitization, computerization and networking that define the distinctive milieu of the
In the context of digital networks, ‘culture’, for instance, becomes what Simondon termed an ‘associated technical milieu’, which Stiegler (1998) defines ‘firstly as an environment totally mediated by telecommunications, by modes of transportation as well as by television and radio, computer networks and so on, whereby distances and delays are annulled, but secondly as a system of planet-scale industrial production’ (p. 60).
Technological ‘leadership’ in this context implies new social relations of time and space, which challenge the older cycles of innovation, disruption and stabilization on which many social constructionist accounts of technological development have been predicated. This goes beyond recognition of the importance of ‘just-in-time’ production practices, enabled in part by digital communication networks, and also beyond the ‘permanent beta’ mode adopted by web 2.0 technologies such as Google Maps. Rather, technological ‘leadership’ now extends to the way in which the recursive datafication enabled by digital communication networks converts the world at large into a ‘techno-geographical milieu’. Stiegler argues there is a profound connection between the progressive acceleration enabled by the digital milieu and the increasingly rapid depletion and degradation of resources and ecological systems. He contends,
The ecological problems characteristic of our age can only acquire meaning from this point of view: a new milieu emerges, a technophysical and technocultural milieu, whose laws of equilibrium are no longer known. (Stiegler, 1998, p. 60)
If, today, ‘the technical object lays down the law that is its own’ (Stiegler, 1998, p. 73), this situates the way in which different domains – law, economics, biology, culture and politics – are now being forced to ‘respond’ to digitization. Stiegler (1998) argues,
The technical object submits its ‘natural milieu’ to reason and naturalizes itself at one and the same time. It becomes concretized by closely conforming to this milieu, but in the same move radically transforms the milieu. This ecological phenomenon may be observed in the informational dimension of present-day technics, where it allows for the development of a generalized performativity […] – but it is then essentially the human milieu, that is human geography and not physical geography, that is found to be incorporated into a process of concretization that should no longer be thought on the scale of the object, but also not on the scale of the system. (p. 73)
A century ago, Simmel (1997) argued that the transition from ‘naming’ to ‘numbering’ was the quintessential modern phenomenon (pp. 149–150). Today, the development of digital technical objects enabling the measurement of ‘generalized performativity’ across more and more dimensions of social life situates a key challenge in understanding, and responding to, geomedia platforms. One future that ubiquitous, real time digital networks point towards is what Stiegler terms
